Programming the Navier-Stokes computer: An abstract machine model and a visual editor
NASA Technical Reports Server (NTRS)
Middleton, David; Crockett, Tom; Tomboulian, Sherry
1988-01-01
The Navier-Stokes computer is a parallel computer designed to solve Computational Fluid Dynamics problems. Each processor contains several floating-point units which can be configured under program control to implement a vector pipeline with several inputs and outputs. Since the development of an effective compiler for this computer appears to be very difficult, machine-level programming seems necessary, and support tools for this process have been studied. These support tools are organized into a graphical program editor. A programming process is described by which appropriate computations may be efficiently implemented on the Navier-Stokes computer. The graphical editor would support this programming process, verifying various programmer choices for correctness and deducing values such as pipeline delays and network configurations. Step-by-step details are provided and demonstrated with two example programs.
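The delay deduction mentioned above is, at its core, a longest-path computation over a dataflow graph, with balancing buffers added on faster-arriving operands. A minimal Python sketch under invented node names and latencies (none of these come from the Navier-Stokes computer itself):

```python
# Toy pipeline-delay deduction for a dataflow DAG: compute when each
# node's result is ready, then how many buffer stages each operand of a
# node needs so that all operands arrive on the same cycle.

def arrival_times(latency, inputs):
    """latency: node -> cycles; inputs: node -> list of predecessors.
    Returns node -> cycle at which its result is ready. Nodes are
    assumed to be listed in topological order."""
    ready = {}
    for node in latency:
        start = max((ready[p] for p in inputs.get(node, [])), default=0)
        ready[node] = start + latency[node]
    return ready

def balancing_delays(node, ready, inputs):
    """Buffer stages each operand needs so all arrive simultaneously."""
    arr = [ready[p] for p in inputs[node]]
    return [max(arr) - a for a in arr]

latency = {'load_a': 1, 'load_b': 1, 'mul': 4, 'add': 2}
inputs = {'mul': ['load_a'], 'add': ['mul', 'load_b']}
ready = arrival_times(latency, inputs)
delays = balancing_delays('add', ready, inputs)
```

Here the `add` node's second operand arrives four cycles early, so it would need four buffer stages for the operands to meet in step.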
Automatic Review of Abstract State Machines by Meta Property Verification
NASA Technical Reports Server (NTRS)
Arcaini, Paolo; Gargantini, Angelo; Riccobene, Elvinia
2010-01-01
A model review is a validation technique aimed at determining whether a model is of sufficient quality; it allows defects to be identified early in system development, reducing the cost of fixing them. In this paper we propose a technique to perform automatic review of Abstract State Machine (ASM) formal specifications. We first identify a family of typical vulnerabilities and defects a developer can introduce during the modeling activity using ASMs, and we express such faults as violations of meta-properties that guarantee certain quality attributes of the specification. These meta-properties are then mapped to temporal logic formulas and model checked for their violation. As a proof of concept, we also report the results of applying this ASM review process to several specifications.
Teaching for Abstraction: A Model
ERIC Educational Resources Information Center
White, Paul; Mitchelmore, Michael C.
2010-01-01
This article outlines a theoretical model for teaching elementary mathematical concepts that we have developed over the past 10 years. We begin with general ideas about the abstraction process and differentiate between "abstract-general" and "abstract-apart" concepts. A 4-phase model of teaching, called Teaching for Abstraction, is then proposed…
Multimodeling and Model Abstraction
Technology Transfer Automated Retrieval System (TEKTRAN)
The multiplicity of models of the same process or phenomenon is commonplace in environmental modeling. The last 10 years have brought marked interest in making use of the variety of conceptual approaches instead of attempting to find the best model or using a single preferred model. Two systematic approa...
Formal modeling of virtual machines
NASA Technical Reports Server (NTRS)
Cremers, A. B.; Hibbard, T. N.
1978-01-01
Systematic software design can be based on the development of a 'hierarchy of virtual machines', each representing a 'level of abstraction' of the design process. The reported investigation presents the concept of 'data space' as a formal model for virtual machines. The presented model of a data space combines the notions of data type and mathematical machine to express the close interaction between data and control structures which takes place in a virtual machine. One of the main objectives of the investigation is to show that control-independent data type implementation is only of limited usefulness as an isolated tool of program development, and that the representation of data is generally dictated by the control context of a virtual machine. As a second objective, a better understanding is to be developed of virtual machine state structures than was heretofore provided by the view of the state space as a Cartesian product.
Modelling Metamorphism by Abstract Interpretation
NASA Astrophysics Data System (ADS)
Dalla Preda, Mila; Giacobazzi, Roberto; Debray, Saumya; Coogan, Kevin; Townsend, Gregg M.
Metamorphic malware apply semantics-preserving transformations to their own code in order to foil detection systems based on signature matching. In this paper we consider the problem of automatically extracting metamorphic signatures from such malware. We introduce a semantics for self-modifying code, called phase semantics, and prove its correctness by showing that it is an abstract interpretation of the standard trace semantics. Phase semantics precisely models the behavior of metamorphic code by providing a set of traces of programs which correspond to the possible evolutions of the metamorphic code during execution. We show that metamorphic signatures can be automatically extracted by abstract interpretation of the phase semantics, and that regular metamorphism can be modelled as a finite-state-automata abstraction of the phase semantics.
NASA Astrophysics Data System (ADS)
Biernacka, Małgorzata; Danvy, Olivier
We present a context-sensitive reduction semantics for a lambda-calculus with explicit substitutions and we show that the functional implementation of this small-step semantics mechanically corresponds to that of the abstract machine for Core Scheme presented by Clinger at PLDI’98, including first-class continuations. Starting from this reduction semantics, (1) we refocus it into a small-step abstract machine; (2) we fuse the transition function of this abstract machine with its driver loop, obtaining a big-step abstract machine which is staged; (3) we compress its corridor transitions, obtaining an eval/continue abstract machine; and (4) we unfold its ground closures, which yields an abstract machine that essentially coincides with Clinger’s machine. This lambda-calculus with explicit substitutions therefore aptly accounts for Core Scheme, including Clinger’s permutations and unpermutations.
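The eval/continue shape of the final machine can be sketched generically. The Python toy below is a CEK-style machine for the pure call-by-value lambda-calculus, offered only to illustrate the eval/continue structure; it is not Clinger's Core Scheme machine nor the authors' derived machine.

```python
# Minimal eval/continue abstract machine (CEK-style) for the pure
# call-by-value lambda-calculus. Illustrative sketch only.
# Terms: ('var', name) | ('lam', name, body) | ('app', fn, arg)

def run(term):
    """Run the machine to a final value (a closure)."""
    state = ('eval', term, {}, ('halt',))
    while True:
        if state[0] == 'eval':
            _, t, env, k = state
            if t[0] == 'var':                      # look the variable up
                state = ('continue', k, env[t[1]])
            elif t[0] == 'lam':                    # build a closure
                state = ('continue', k, ('clo', t[1], t[2], env))
            else:                                  # evaluate the operator first
                state = ('eval', t[1], env, ('arg', t[2], env, k))
        else:  # 'continue'
            _, k, v = state
            if k[0] == 'halt':
                return v
            if k[0] == 'arg':                      # now evaluate the operand
                _, argt, env, k2 = k
                state = ('eval', argt, env, ('fun', v, k2))
            else:  # 'fun': apply the closure to the argument value
                _, clo, k2 = k
                _, x, body, cenv = clo
                state = ('eval', body, dict(cenv, **{x: v}), k2)

# (\x. x) (\y. y) reduces to the identity closure \y. y
identity = ('lam', 'y', ('var', 'y'))
result = run(('app', ('lam', 'x', ('var', 'x')), identity))
```

The machine alternates between an eval mode (dispatch on the term) and a continue mode (dispatch on the continuation), exactly the staging that the fusion and transition-compression steps above produce.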
Bishop, Christopher M.
2013-01-01
Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612
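Infer.NET itself is a C# library; the following Python toy only mirrors the model-based idea: the model is written as a small generative program, and a generic inference routine is applied to it automatically. Everything here (the coin-bias model, the grid-based inference) is an illustrative assumption, not Infer.NET's API.

```python
# Toy model-based machine learning: a generative model plus a generic
# inference routine (plain grid enumeration with a uniform prior).

def model_likelihood(bias, data):
    """Generative model: each observation is a Bernoulli(bias) draw."""
    p = 1.0
    for x in data:
        p *= bias if x else (1.0 - bias)
    return p

def posterior_mean(data, grid_size=1001):
    """Generic grid-based Bayesian inference with a uniform prior."""
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    weights = [model_likelihood(b, data) for b in grid]
    z = sum(weights)
    return sum(b * w for b, w in zip(grid, weights)) / z

# 8 heads out of 10 tosses; the exact posterior mean is (8+1)/(10+2) = 0.75
mean = posterior_mean([1] * 8 + [0] * 2)
```

Changing the model means editing `model_likelihood`; the inference routine is untouched, which is the separation the abstract describes.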
Integrating model abstraction into monitoring strategies
Technology Transfer Automated Retrieval System (TEKTRAN)
This study was designed and performed to investigate the opportunities and benefits of integrating model abstraction techniques into monitoring strategies. The study focused on future applications of modeling to contingency planning and management of potential and actual contaminant release sites wi...
NASA Astrophysics Data System (ADS)
2012-09-01
Measuring cosmological parameters with GRBs: status and perspectives
New interpretation of the Amati relation
The SED Machine - a dedicated transient spectrograph
PTF10iue - evidence for an internal engine in a unique Type Ic SN
Direct evidence for the collapsar model of long gamma-ray bursts
On pair instability supernovae and gamma-ray bursts
Pan-STARRS1 observations of ultraluminous SNe
The influence of rotation on the critical neutrino luminosity in core-collapse supernovae
General relativistic magnetospheres of slowly rotating and oscillating neutron stars
Host galaxies of short GRBs
GRB 100418A: a bridge between GRB-associated hypernovae and SNe
Two super-luminous SNe at z ~ 1.5 from the SNLS
Prospects for very-high-energy gamma-ray bursts with the Cherenkov Telescope Array
The dynamics and radiation of relativistic flows from massive stars
The search for light echoes from the supernova explosion of 1181 AD
The proto-magnetar model for gamma-ray bursts
Stellar black holes at the dawn of the universe
MAXI J0158-744: the discovery of a supersoft X-ray transient
Wide-band spectra of magnetar burst emission
Dust formation and evolution in envelope-stripped core-collapse supernovae
The host galaxies of dark gamma-ray bursts
Keck observations of 150 GRB host galaxies
Search for properties of GRBs at large redshift
The early emission from SNe
Spectral properties of SN shock breakout
MAXI observation of GRBs and short X-ray transients
A three-dimensional view of SN 1987A using light echo spectroscopy
X-ray study of the southern extension of the SNR Puppis A
All-sky survey of short X-ray transients by MAXI GSC
Development of the CALET gamma-ray burst monitor (CGBM)
SATURATED ZONE FLOW AND TRANSPORT MODEL ABSTRACTION
B.W. ARNOLD
2004-10-27
The purpose of the saturated zone (SZ) flow and transport model abstraction task is to provide radionuclide-transport simulation results for use in the total system performance assessment (TSPA) for license application (LA) calculations. This task includes assessment of uncertainty in parameters that pertain to both groundwater flow and radionuclide transport in the models used for this purpose. This model report documents the following: (1) The SZ transport abstraction model, which consists of a set of radionuclide breakthrough curves at the accessible environment for use in the TSPA-LA simulations of radionuclide releases into the biosphere. These radionuclide breakthrough curves contain information on radionuclide-transport times through the SZ. (2) The SZ one-dimensional (1-D) transport model, which is incorporated in the TSPA-LA model to simulate the transport, decay, and ingrowth of radionuclide decay chains in the SZ. (3) The analysis of uncertainty in groundwater-flow and radionuclide-transport input parameters for the SZ transport abstraction model and the SZ 1-D transport model. (4) The analysis of the background concentration of alpha-emitting species in the groundwater of the SZ.
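The decay and ingrowth bookkeeping of the SZ 1-D transport model rests on standard decay-chain solutions. A sketch of the two-member Bateman solution, with illustrative half-lives rather than values from the report:

```python
import math

# Two-member Bateman solution for a chain N1 -> N2 -> (stable):
# the standard building block for tracking decay and ingrowth.

def bateman_two(n1_0, lam1, lam2, t):
    """Amounts of parent and daughter at time t, starting from n1_0
    parent atoms and no daughter."""
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t)
                                        - math.exp(-lam2 * t))
    return n1, n2

lam1 = math.log(2) / 100.0     # parent half-life: 100 time units
lam2 = math.log(2) / 10.0      # daughter half-life: 10 time units
n1, n2 = bateman_two(1.0, lam1, lam2, t=50.0)
```

In a transport code this solution (or its numerical equivalent) is evaluated along each breakthrough path to apportion mass among chain members.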
Directory of Energy Information Administration Model Abstracts
Not Available
1986-07-16
This directory partially fulfills the requirements of Section 8c of the documentation order, which states in part that: The Office of Statistical Standards will annually publish an EIA document based on the collected abstracts and the appendices. This report contains brief statements about each model's title, acronym, purpose, and status, followed by more detailed information on characteristics, uses, and requirements. Sources for additional information are identified. All models active through March 1985 are included. The main body of this directory is an alphabetical list of all active EIA models. Appendix A identifies major EIA modeling systems and the models within these systems, and Appendix B identifies active EIA models by type (basic, auxiliary, and developing). EIA also leases models developed by proprietary software vendors. Documentation for these proprietary models is the responsibility of the companies from which they are leased. EIA has recently leased models from Chase Econometrics, Inc., Data Resources, Inc. (DRI), the Oak Ridge National Laboratory (ORNL), and Wharton Econometric Forecasting Associates (WEFA). Leased models are not abstracted here. The directory is intended for the use of energy and energy-policy analysts in the public and private sectors.
Directory of Energy Information Administration model abstracts
Not Available
1987-08-11
This report contains brief statements from the model managers about each model's title, acronym, purpose, and status, followed by more detailed information on characteristics, uses, and requirements. Sources for additional information are identified. All models "active" through March 1987 are included. The main body of this directory is an alphabetical list of all active EIA models. Appendix A identifies major EIA modeling systems and the models within these systems, and Appendix B identifies active EIA models by type (basic, auxiliary, and developing). A basic model is one designated by the EIA Administrator as being sufficiently important to require sustained support and public scrutiny. An auxiliary model is one designated by the EIA Administrator as being used only occasionally in analyses, and therefore requires minimal levels of documentation. A developing model is one designated by the EIA Administrator as being under development and yet of sufficient interest to require a basic level of documentation at a future date. EIA also leases models developed by proprietary software vendors. Documentation for these "proprietary" models is the responsibility of the companies from which they are leased. EIA has recently leased models from Chase Econometrics, Inc., Data Resources, Inc. (DRI), the Oak Ridge National Laboratory (ORNL), and Wharton Econometric Forecasting Associates (WEFA). Leased models are not abstracted here. The directory is intended for the use of energy and energy-policy analysts in the public and private sectors.
Model Checking Abstract PLEXIL Programs with SMART
NASA Technical Reports Server (NTRS)
Siminiceanu, Radu I.
2007-01-01
We describe a method to automatically generate discrete-state models of abstract Plan Execution Interchange Language (PLEXIL) programs that can be analyzed using model checking tools. Starting from a high-level description of a PLEXIL program or a family of programs with common characteristics, the generator lays the framework that models the principles of program execution. The concrete parts of the program are not automatically generated, but require the modeler to introduce them by hand. As a case study, we generate models to verify properties of the PLEXIL macro constructs that are introduced as shorthand notation. After an exhaustive analysis, we conclude that the macro definitions obey the intended semantics and behave as expected, but contingent on a few specific requirements on the timing semantics of micro-steps in the concrete executive implementation.
Evolutionary model with Turing machines
NASA Astrophysics Data System (ADS)
Feverati, Giovanni; Musso, Fabio
2008-06-01
The development of a large noncoding fraction in eukaryotic DNA and the phenomenon of the code bloat in the field of evolutionary computations show a striking similarity. This seems to suggest that (in the presence of mechanisms of code growth) the evolution of a complex code cannot be attained without maintaining a large inactive fraction. To test this hypothesis we performed computer simulations of an evolutionary toy model for Turing machines, studying the relations among fitness and coding versus noncoding ratio while varying mutation and code growth rates. The results suggest that, in our model, having a large reservoir of noncoding states constitutes a great (long term) evolutionary advantage.
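A drastically simplified caricature of such an experiment can be written in a few lines: genomes are bit lists, only a fixed-length prefix is "coding", and mutation can grow the genome. All parameters and the fitness function below are invented for illustration; the paper's actual model evolves Turing machine state tables.

```python
import random

# Toy evolution with code growth: fitness counts matches of the coding
# prefix against a fixed target; mutation can flip bits and append
# "junk" (noncoding) symbols. Illustrative parameters only.

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def fitness(genome):
    coding = genome[:len(TARGET)]          # only the prefix is expressed
    return sum(1 for a, b in zip(coding, TARGET) if a == b)

def mutate(genome, p_flip=0.05, p_grow=0.2):
    g = [(1 - x) if random.random() < p_flip else x for x in genome]
    if random.random() < p_grow:           # code growth: append junk
        g.append(random.randint(0, 1))
    return g

pop = [[random.randint(0, 1)] for _ in range(30)]
for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    pop = [mutate(g) for g in pop[:15] for _ in (0, 1)]  # truncation selection

best = max(pop, key=fitness)
noncoding_fraction = max(0, len(best) - len(TARGET)) / len(best)
```

Even in this caricature the population accumulates a noncoding tail while fitness climbs, which is the qualitative pattern the paper studies quantitatively.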
Kaplan, Jonas T.; Man, Kingson; Greening, Steven G.
2015-01-01
Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application. PMID:25859202
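The core MVCC procedure, train in one context and test in another, can be sketched with synthetic data and a nearest-centroid classifier (both are stand-ins; real analyses use fMRI activity patterns and typically linear classifiers).

```python
import random

# Sketch of Multivariate Cross-Classification (MVCC): a classifier
# trained on patterns from one "cognitive context" is tested on
# patterns from another. Data are synthetic Gaussian patterns.

random.seed(1)

def make_patterns(context_shift, n=50, dim=20):
    """Two classes differ by a mean offset; the context adds a common
    shift to every feature, mimicking context-specific signal."""
    data = []
    for label in (0, 1):
        for _ in range(n):
            x = [random.gauss(label * 1.0 + context_shift, 1.0)
                 for _ in range(dim)]
            data.append((x, label))
    return data

def train_centroids(data):
    cents = {}
    for label in (0, 1):
        xs = [x for x, y in data if y == label]
        cents[label] = [sum(col) / len(xs) for col in zip(*xs)]
    return cents

def accuracy(cents, data):
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    hits = sum(1 for x, y in data
               if min(cents, key=lambda c: dist2(x, cents[c])) == y)
    return hits / len(data)

train = make_patterns(context_shift=0.0)   # e.g. patterns from perception
test = make_patterns(context_shift=0.3)    # e.g. patterns from imagery
cross_acc = accuracy(train_centroids(train), test)
```

Above-chance `cross_acc` is the evidence MVCC looks for: the class structure learned in one context transfers to the other, suggesting a representation that abstracts across contexts.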
Machine learning in sedimentation modelling.
Bhattacharya, B; Solomatine, D P
2006-03-01
The paper presents machine learning (ML) models that predict sedimentation in the harbour basin of the Port of Rotterdam. The important factors affecting the sedimentation process, such as waves, wind, tides, surge and river discharge, are studied, the corresponding time series data are analysed, missing values are estimated, and the most important variables behind the process are chosen as the inputs. Two ML methods are used: MLP ANN and the M5 model tree. The latter is a collection of piece-wise linear regression models, each being an expert for a particular region of the input space. The models are trained on data collected during 1992-1998 and tested on data from 1999-2000. The predictive accuracy of the models is found to be adequate for potential use in operational decision making. PMID:16530383
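The M5 model tree's key idea, piece-wise linear experts over regions of the input space, can be illustrated with a one-split toy version (not Quinlan's full M5 algorithm):

```python
# Toy one-split model tree: choose the split point that lets two local
# least-squares lines fit best, then route each query to its expert.

def linfit(pts):
    """Least-squares line y = a*x + b through pts; returns (a, b, sse)."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom if denom else 0.0
    b = (sy - a * sx) / n
    sse = sum((y - (a * x + b)) ** 2 for x, y in pts)
    return a, b, sse

def m5_one_split(pts):
    pts = sorted(pts)
    best = None
    for i in range(2, len(pts) - 2):           # need >= 2 points per side
        t = pts[i][0]
        la, lb, lsse = linfit(pts[:i])
        ra, rb, rsse = linfit(pts[i:])
        if best is None or lsse + rsse < best[0]:
            best = (lsse + rsse, t, (la, lb), (ra, rb))
    _, t, left, right = best

    def predict(x):
        a, b = left if x < t else right        # route to the local expert
        return a * x + b
    return predict

# Data with a kink at x = 5: y = x below, y = 10 - x above.
data = [(x / 2, x / 2 if x / 2 < 5 else 10 - x / 2) for x in range(21)]
model = m5_one_split(data)
```

The full M5 algorithm grows such splits recursively and prunes, but the expert-per-region structure is the same.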
Memristor models for machine learning.
Carbajal, Juan Pablo; Dambre, Joni; Hermans, Michiel; Schrauwen, Benjamin
2015-03-01
In the quest for alternatives to traditional complementary metal-oxide-semiconductor (CMOS) technology, it is being suggested that digital computing efficiency and power can be improved by matching the precision to the application. Many applications do not need the high precision that is being used today. In particular, large gains in area and power efficiency could be achieved by dedicated analog realizations of approximate computing engines. In this work we explore the use of memristor networks for analog approximate computation, based on a machine learning framework called reservoir computing. Most experimental investigations on the dynamics of memristors focus on their nonvolatile behavior. Hence, the volatility that is present in the developed technologies is usually unwanted and is not included in simulation models. In contrast, in reservoir computing, volatility is not only desirable but necessary. Therefore, in this work, we propose two different ways to incorporate it into memristor simulation models. The first is an extension of Strukov's model, and the second is an equivalent Wiener model approximation. We analyze and compare the dynamical properties of these models and discuss their implications for the memory and the nonlinear processing capacity of memristor networks. Our results indicate that device variability, increasingly causing problems in traditional computer design, is an asset in the context of reservoir computing. We conclude that although both models could lead to useful memristor-based reservoir computing systems, their computational performance will differ. Therefore, experimental modeling research is required for the development of accurate volatile memristor models. PMID:25602769
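The proposed volatility extension can be sketched by adding a relaxation term to a Strukov-style state equation, so the internal state leaks away when the device is undriven. The window function, rates and drive below are arbitrary illustrations, not the paper's fitted models:

```python
# Volatile memristor sketch: Euler integration of
#   dw/dt = k * i(t) * w * (1 - w)  -  w / tau
# where w*(1-w) is a simple window function keeping w in [0, 1] and
# -w/tau is the added volatile (relaxation) term.

def simulate(current, dt=1e-3, k=1.0, tau=0.05):
    w = 0.5
    trace = []
    for i in current:
        dw = k * i * w * (1.0 - w) - w / tau   # drift + volatile decay
        w = min(1.0, max(0.0, w + dt * dw))
        trace.append(w)
    return trace

# Drive the device for 100 steps, then leave it undriven for 400 steps.
drive = [80.0] * 100 + [0.0] * 400
trace = simulate(drive)
peak, final = max(trace), trace[-1]
```

With the relaxation term the state rises while driven and then decays back toward zero, the short-term "fading memory" that reservoir computing exploits.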
Musical Instruments, Models, and Machines.
NASA Astrophysics Data System (ADS)
Gershenfeld, Neil
1996-11-01
A traditional musical instrument is an analog computer that integrates equations of motion based on applied boundary conditions. We are approaching a remarkable time when advances in transducers, real-time computing, and mathematical modeling will enable new technology to emulate and generalize the physics of great musical instruments from first principles, helping virtuosic musicians to do more and non-musicians to engage in creative expression. I will discuss the underlying problems, including non-contact sensing and state reconstruction for nonlinear systems, describe exploratory performance collaborations with artists ranging from Yo-Yo Ma to Penn & Teller, and then consider the broader implications of these devices for the interaction between people and machines.
Rough set models of Physarum machines
NASA Astrophysics Data System (ADS)
Pancerz, Krzysztof; Schumann, Andrew
2015-04-01
In this paper, we consider transition system models of the behaviour of Physarum machines in terms of rough set theory. A Physarum machine, a biological computing device implemented in the plasmodium of Physarum polycephalum (true slime mould), is a natural transition system. In the behaviour of Physarum machines, one can notice some ambiguity in Physarum motions that hinders exact anticipation of the states of the machines in time. To model this ambiguity, we propose to use rough set models created over transition systems. Rough sets are an appropriate tool for dealing with rough (ambiguous, imprecise) concepts in the universe of discourse.
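The rough-set core that the authors build on is compact: given an indiscernibility partition over the states of a transition system, compute lower and upper approximations of a target set of states. A minimal sketch (the Physarum-specific machinery is not modelled here; states and sets are invented):

```python
# Rough set approximations: a block of indiscernible states belongs to
# the lower approximation if it lies entirely inside the target concept,
# and to the upper approximation if it merely overlaps it.

def approximations(partition, target):
    """partition: iterable of frozensets of states; target: set of states."""
    lower, upper = set(), set()
    for block in partition:
        if block <= target:          # certainly inside the concept
            lower |= block
        if block & target:           # possibly inside the concept
            upper |= block
    return lower, upper

# Indiscernible state classes of a toy machine and an anticipated set.
partition = [frozenset({'s0'}), frozenset({'s1', 's2'}), frozenset({'s3'})]
target = {'s1', 's3'}                 # states we predict the machine visits
lower, upper = approximations(partition, target)
boundary = upper - lower              # the ambiguous (rough) region
```

The boundary region is exactly where the machine's next state cannot be anticipated with certainty, which is the ambiguity the abstract describes.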
Abstract models for the synthesis of optimization algorithms.
NASA Technical Reports Server (NTRS)
Meyer, G. G. L.; Polak, E.
1971-01-01
Systematic approach to the problem of synthesizing optimization algorithms. Abstract models for algorithms are developed which guide the inventive process toward 'conceptual' algorithms that may consist of operations inadmissible in a practical method. Once the abstract models are established, a set of methods is presented for converting 'conceptual' algorithms falling into the class defined by the abstract models into 'implementable' iterative procedures.
Application of model abstraction techniques to simulate transport in soils
Technology Transfer Automated Retrieval System (TEKTRAN)
Successful understanding and modeling of contaminant transport in soils is the precondition of risk-informed predictions of the subsurface contaminant transport. Exceedingly complex models of subsurface contaminant transport are often inefficient. Model abstraction is the methodology for reducing th...
Abstracting event-based control models for high autonomy systems
NASA Technical Reports Server (NTRS)
Luh, Cheng-Jye; Zeigler, Bernard P.
1993-01-01
A high autonomy system needs many models on which to base control, management, design, and other interventions. These models differ in level of abstraction and in formalism. Concepts and tools are needed to organize the models into a coherent whole. The paper deals with the abstraction processes for systematic derivation of related models for use in event-based control. The multifaceted modeling methodology is briefly reviewed. The morphism concepts needed for application to model abstraction are described. A theory for supporting the construction of DEVS models needed for event-based control is then presented. An implemented morphism on the basis of this theory is also described.
Modelling abstraction licensing strategies ahead of the UK's water abstraction licensing reform
NASA Astrophysics Data System (ADS)
Klaar, M. J.
2012-12-01
Within England and Wales, river water abstractions are licensed and regulated by the Environment Agency (EA), which uses compliance with the Environmental Flow Indicator (EFI) to ascertain where abstraction may cause undesirable effects on river habitats and species. The EFI is a percentage deviation from natural flow represented using a flow duration curve. The allowable percentage deviation changes with different flows, and also changes depending on an assessment of the sensitivity of the river to changes in flow (Table 1). Within UK abstraction licensing, resource availability is expressed as a surplus or deficit of water resources in relation to the EFI, and utilises the concept of 'hands-off flows' (HOFs) at the specified flow statistics detailed in Table 1. Use of a HOF system enables abstraction to cease at set flows, but also enables abstraction to occur at periods when more water is available. Compliance at low flows (Q95) is used by the EA to determine the hydrological classification and compliance with the Water Framework Directive (WFD), identifying waterbodies where flow may be causing or contributing to a failure to achieve good ecological status (GES; Table 2). This compliance assessment shows where the scenario flows are below the EFI and by how much, to help target measures for further investigation and assessment. Currently, the EA is reviewing the EFI methodology in order to assess whether it can be used within the reformed water abstraction licensing system being planned by the Department for Environment, Food and Rural Affairs (DEFRA) to ensure the licensing system is resilient to the challenges of climate change and population growth, while allowing abstractors to meet their water needs efficiently and better protecting the environment. In order to assess the robustness of the EFI, a simple model has been created which allows a number of abstraction, flow and licensing scenarios to be run to determine WFD compliance using the…
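The flow-duration-curve arithmetic underlying such assessments can be sketched as follows; the Q95 definition is standard, but the deviation percentage, licence rate and flow series below are invented examples, not the EA's actual EFI values:

```python
# Sketch of Q95 from a flow duration curve, plus a hands-off-flow (HOF)
# abstraction rule: take water only when flow exceeds the HOF, and never
# draw the river below it.

def exceedance_flow(flows, percent):
    """Flow exceeded `percent` of the time (e.g. 95 for Q95)."""
    ranked = sorted(flows, reverse=True)
    idx = min(len(ranked) - 1, int(round(percent / 100 * len(ranked))) - 1)
    return ranked[max(0, idx)]

def abstracted_series(natural, licence_rate, hof):
    """Daily abstraction under a HOF rule: up to licence_rate, but never
    drawing the river below the hands-off flow."""
    return [max(0.0, min(licence_rate, q - hof)) for q in natural]

# A toy sawtooth "natural" flow record (360 days, flows between 5 and 15).
natural = [5 + 10 * abs((d % 30) - 15) / 15 for d in range(360)]
q95 = exceedance_flow(natural, 95)
hof = 0.9 * q95                       # illustrative 10% allowed deviation
takes = abstracted_series(natural, licence_rate=2.0, hof=hof)
residual = [q - a for q, a in zip(natural, takes)]
```

The residual series never falls below the HOF on any day with abstraction, which is the protection the HOF mechanism provides while still permitting abstraction at higher flows.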
Vibration absorber modeling for handheld machine tool
NASA Astrophysics Data System (ADS)
Abdullah, Mohd Azman; Mustafa, Mohd Muhyiddin; Jamil, Jazli Firdaus; Salim, Mohd Azli; Ramli, Faiz Redza
2015-05-01
Handheld machine tools transmit continuous vibration to their users during operation. This vibration causes harmful effects to the health of users over repeated operations in a long period of time. In this paper, a dynamic vibration absorber (DVA) is designed and modeled to reduce the vibration generated by the handheld machine tool. Several designs and models of vibration absorbers with various stiffness properties are simulated, tested and optimized in order to diminish the vibration. Ordinary differential equations are used to derive and formulate the vibration phenomena in the machine tool with and without the DVA. The final transfer function of the DVA is then analyzed using commercially available mathematical software. The DVA with optimum properties of mass and stiffness is developed and applied on the actual handheld machine tool. The performance of the DVA is experimentally tested and validated by the final result of vibration reduction.
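The classical undamped-absorber result behind such designs: choosing the absorber stiffness so that sqrt(k2/m2) equals the forcing frequency nulls the primary-mass response at that frequency. A sketch with illustrative values (not the paper's handheld-tool parameters):

```python
# Undamped 2-DOF dynamic vibration absorber: steady-state amplitude of
# the primary mass m1 under force f0*cos(w t), from the equations
#   [k1 + k2 - m1 w^2,      -k2     ] [X1]   [f0]
#   [     -k2,          k2 - m2 w^2 ] [X2] = [ 0]

def primary_amplitude(m1, k1, m2, k2, w, f0=1.0):
    a11 = k1 + k2 - m1 * w ** 2
    a22 = k2 - m2 * w ** 2
    det = a11 * a22 - k2 ** 2
    return abs(a22 * f0 / det)            # Cramer's rule for X1

m1, k1 = 1.0, 100.0                 # primary system, resonant at 10 rad/s
w = 10.0                            # operating (forcing) frequency
m2 = 0.1                            # small absorber mass
k2 = m2 * w ** 2                    # tune the absorber: sqrt(k2/m2) = w
with_dva = primary_amplitude(m1, k1, m2, k2, w)
without = abs(1.0 / (k1 - m1 * w ** 2 + 1e-9))   # bare system at resonance
```

At the tuned frequency `a22` vanishes, so the primary amplitude is exactly zero while the absorber mass takes up the motion; real absorbers add damping, which trades this perfect null for robustness over a frequency band.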
Coupling Radar Rainfall to Hydrological Models for Water Abstraction Management
NASA Astrophysics Data System (ADS)
Asfaw, Alemayehu; Shucksmith, James; Smith, Andrea; MacDonald, Ken
2015-04-01
The impacts of climate change and growing water use are likely to put considerable pressure on water resources and the environment. In the UK, a reform to surface water abstraction policy has recently been proposed which aims to increase the efficiency of using available water resources whilst minimising impacts on the aquatic environment. Key aspects of this reform include the consideration of dynamic rather than static abstraction licensing as well as the introduction of water trading concepts. Dynamic licensing will permit varying levels of abstraction dependent on environmental conditions (i.e. river flow and quality). The practical implementation of an effective dynamic abstraction strategy requires suitable flow forecasting techniques to inform abstraction asset management. Potentially, the predicted availability of water resources within a catchment can be coupled to predicted demand and current storage to inform a cost-effective water resource management strategy which minimises environmental impacts. The aim of this work is to use a historical analysis of a UK case study catchment to compare potential water resource availability under a modelled dynamic abstraction scenario informed by a flow forecasting model against observed abstraction under a conventional abstraction regime. The work also demonstrates the impacts of modelling uncertainties on the accuracy of predicted water availability over a range of forecast lead times. The study utilised the conceptual rainfall-runoff model PDM (Probability-Distributed Model, developed by the Centre for Ecology & Hydrology) set up in the Dove River catchment (UK), using 1 km2 resolution radar rainfall as input and 15-minute resolution gauged flow data for calibration and validation. Data assimilation procedures are implemented to improve flow predictions using observed flow data. Uncertainties in the radar rainfall data used in the model are quantified using an artificial statistical error model described by a Gaussian distribution and…
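The structure of a conceptual rainfall-runoff model can be sketched with a single storage bucket. The real PDM uses a probability distribution of store capacities; this one-store stand-in, with invented parameters, only shows the general shape of such models:

```python
# One-bucket conceptual rainfall-runoff sketch: the store fills with
# rain, loses a fixed evapotranspiration, spills when saturated
# (fast runoff) and drains linearly (slow baseflow).

def bucket_model(rain, capacity=50.0, k_out=0.1, et=0.5):
    s, flows = 10.0, []                     # initial storage and outputs
    for r in rain:
        s = max(0.0, s + r - et)            # wet the store, subtract ET
        spill = max(0.0, s - capacity)      # saturation-excess runoff
        s -= spill
        drain = k_out * s                   # linear-reservoir baseflow
        s -= drain
        flows.append(spill + drain)
    return flows

# A 5-step storm after a dry spell; in the paper's setting the rain
# series would come from the 1 km^2 radar grid.
storm = [0.0] * 10 + [20.0] * 5 + [0.0] * 35
q = bucket_model(storm)
```

The simulated hydrograph shows the expected pattern: a baseflow recession during the dry spell, a sharp storm peak once the store spills, then a slow recession, and it is this kind of forecast flow that a dynamic licence would be checked against.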
How Pupils Use a Model for Abstract Concepts in Genetics
ERIC Educational Resources Information Center
Venville, Grady; Donovan, Jenny
2008-01-01
The purpose of this research was to explore the way pupils of different age groups use a model to understand abstract concepts in genetics. Pupils from early childhood to late adolescence were taught about genes and DNA using an analogical model (the wool model) during their regular biology classes. Changing conceptual understandings of the…
Dissipation and irreversibility for models of mechanochemical machines
NASA Astrophysics Data System (ADS)
Brown, Aidan; Sivak, David
For biological systems to maintain order and achieve directed progress, they must overcome fluctuations so that reactions and processes proceed forwards more than they go in reverse. It is well known that some free energy dissipation is required to achieve irreversible forward progress, but the quantitative relationship between irreversibility and free energy dissipation is not well understood. Previous studies focused on either abstract calculations or detailed simulations that are difficult to generalize. We present results for mechanochemical models of molecular machines, exploring a range of model characteristics and behaviours. Our results describe how irreversibility and dissipation trade off in various situations, and how this trade-off can depend on details of the model. The irreversibility-dissipation trade-off points towards general principles of microscopic machine operation or process design. Our analysis identifies system parameters which can be controlled to bring performance to the Pareto frontier.
Evaluating the performance versus accuracy tradeoff for abstract models
NASA Astrophysics Data System (ADS)
McGraw, Robert M.; Clark, Joseph E.
2001-09-01
While the military and commercial communities increasingly rely on simulation to reduce cost, developing simulations for their complex systems can itself be costly. In order to reduce simulation costs, simulation developers have turned toward collaborative simulation, reuse of existing simulation models, and model abstraction techniques that reduce simulation development time as well as simulation execution time. This paper focuses on model abstraction techniques that can be applied to reduce simulation execution and development time, and on the effects those techniques have on simulation accuracy.
Concrete Model Checking with Abstract Matching and Refinement
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Pelanek, Radek; Visser, Willem
2005-01-01
We propose an abstraction-based model checking method which relies on refinement of an under-approximation of the feasible behaviors of the system under analysis. The method preserves errors to safety properties, since all analyzed behaviors are feasible by definition. The method does not require an abstract transition relation to be generated, but instead executes the concrete transitions while storing abstract versions of the concrete states, as specified by a set of abstraction predicates. For each explored transition, the method checks, with the help of a theorem prover, whether there is any loss of precision introduced by abstraction. The results of these checks are used to decide termination or to refine the abstraction by generating new abstraction predicates. If the (possibly infinite) concrete system under analysis has a finite bisimulation quotient, then the method is guaranteed to eventually explore an equivalent finite bisimilar structure. We illustrate the application of the approach for checking concurrent programs. We also show how a lightweight variant can be used for efficient software testing.
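The core idea — execute concrete transitions but match visited states only through their abstraction under a set of predicates — can be sketched as follows. The toy counter system and the two predicates are invented for illustration; refinement via a theorem prover is omitted.

```python
# Sketch of concrete state-space exploration with abstract matching:
# concrete successors are executed exactly, but visited states are
# remembered only through their abstraction (a bit-vector of predicate
# truth values), so exploration is pruned to an under-approximation.

def explore(init, successors, predicates):
    """Return concrete states visited; prune when an abstraction repeats."""
    def alpha(s):                       # abstraction function
        return tuple(p(s) for p in predicates)
    seen, frontier, visited = {alpha(init)}, [init], [init]
    while frontier:
        s = frontier.pop()
        for t in successors(s):         # concrete (feasible) transitions only
            a = alpha(t)
            if a not in seen:           # abstract matching
                seen.add(a)
                frontier.append(t)
                visited.append(t)
    return visited

# Toy system: state is an integer, the successor adds 1 modulo 100.
succ = lambda s: [(s + 1) % 100]
preds = [lambda s: s % 2 == 0, lambda s: s < 10]   # at most 4 abstract states
states = explore(0, succ, preds)
```

Because every stored behavior was actually executed, any property violation found this way is a real one; coarse predicates only risk missing behaviors, which is what refinement addresses.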
An abstract specification language for Markov reliability models
NASA Technical Reports Server (NTRS)
Butler, R. W.
1985-01-01
Markov models can be used to compute the reliability of virtually any fault tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high level language is described in a nonformal manner and illustrated by example.
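The tedious step such a language automates — expanding a short description like "3 processors, failure rate λ, system fails below 2 working" into explicit states and transitions — can be sketched mechanically. The rates and failure condition here are illustrative, not from the report.

```python
# Expand an abstract reliability description into Markov states and
# transitions, then integrate the chain to get failure probability.

def build_chain(n_units, lam, min_ok):
    """States = number of working units; returns {state: [(rate, next)]}."""
    chain = {}
    for k in range(n_units, -1, -1):
        if k < min_ok:
            chain[k] = []                      # absorbing failure state
        else:
            chain[k] = [(k * lam, k - 1)]      # any of k units may fail
    return chain

def unreliability(chain, start, min_ok, t, steps=10000):
    """Probability of system failure by time t (explicit Euler on the CTMC)."""
    p = {s: 0.0 for s in chain}
    p[start] = 1.0
    dt = t / steps
    for _ in range(steps):
        q = dict(p)
        for s, outs in chain.items():
            for rate, nxt in outs:
                flow = rate * p[s] * dt
                q[s] -= flow
                q[nxt] += flow
        p = q
    return sum(p[s] for s in p if s < min_ok)

chain = build_chain(3, lam=1e-4, min_ok=2)
u = unreliability(chain, start=3, min_ok=2, t=10.0)
```

For a triplex system the chain is tiny, but for realistic architectures with repair, coverage, and multiple component types the hand-built state space explodes, which is the motivation for the abstract language.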
Particle Tracking Model and Abstraction of Transport Processes
B. Robinson
2004-10-21
The purpose of this report is to document the abstraction model being used in total system performance assessment (TSPA) model calculations for radionuclide transport in the unsaturated zone (UZ). The UZ transport abstraction model uses the particle-tracking method that is incorporated into the finite element heat and mass model (FEHM) computer code (Zyvoloski et al. 1997 [DIRS 100615]) to simulate radionuclide transport in the UZ. This report outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the UZ at Yucca Mountain. In addition, methods for determining and inputting transport parameters are outlined for use in the TSPA for license application (LA) analyses. Process-level transport model calculations are documented in another report for the UZ (BSC 2004 [DIRS 164500]). Three-dimensional, dual-permeability flow fields generated to characterize UZ flow (documented by BSC 2004 [DIRS 169861]; DTN: LB03023DSSCP9I.001 [DIRS 163044]) are converted to make them compatible with the FEHM code for use in this abstraction model. This report establishes the numerical method and demonstrates the use of the model that is intended to represent UZ transport in the TSPA-LA. Capability of the UZ barrier for retarding the transport is demonstrated in this report, and by the underlying process model (BSC 2004 [DIRS 164500]). The technical scope, content, and management of this report are described in the planning document ''Technical Work Plan for: Unsaturated Zone Transport Model Report Integration'' (BSC 2004 [DIRS 171282]). Deviations from the technical work plan (TWP) are noted within the text of this report, as appropriate. The latest version of this document is being prepared principally to correct parameter values found to be in error due to transcription errors, changes in source data that were not captured in the report, calculation errors, and errors in interpretation of source data.
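The particle-tracking method named above belongs to the random-walk family; a generic textbook version for 1D advection-dispersion is sketched below. This is not the FEHM algorithm itself, and the velocity/dispersion values are illustrative.

```python
# Illustrative random-walk particle tracker: each particle takes a
# deterministic advective step plus a Gaussian dispersive step, and the
# particle cloud approximates the solute concentration.
import random

def track(n_particles, velocity, dispersion, dt, steps, seed=0):
    """Return final particle positions after advective + dispersive steps."""
    rng = random.Random(seed)
    positions = [0.0] * n_particles
    sigma = (2.0 * dispersion * dt) ** 0.5      # random-walk step scale
    for _ in range(steps):
        positions = [x + velocity * dt + rng.gauss(0.0, sigma)
                     for x in positions]
    return positions

pos = track(n_particles=500, velocity=1.0, dispersion=0.1, dt=0.1, steps=100)
mean = sum(pos) / len(pos)    # plume centre near velocity * total time
```

Production codes add fracture-matrix exchange, sorption, and decay to each step; the abstraction model's job is to supply those transport parameters consistently with the process-level model.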
Of Models and Machines: Implementing Bounded Rationality.
Dick, Stephanie
2015-09-01
This essay explores the early history of Herbert Simon's principle of bounded rationality in the context of his Artificial Intelligence research in the mid 1950s. It focuses in particular on how Simon and his colleagues at the RAND Corporation translated a model of human reasoning into a computer program, the Logic Theory Machine. They were motivated by a belief that computers and minds were the same kind of thing--namely, information-processing systems. The Logic Theory Machine program was a model of how people solved problems in elementary mathematical logic. However, in making this model actually run on their 1950s computer, the JOHNNIAC, Simon and his colleagues had to navigate many obstacles and material constraints quite foreign to the human experience of logic. They crafted new tools and engaged in new practices that accommodated the affordances of their machine, rather than reflecting the character of human cognition and its bounds. The essay argues that tracking this implementation effort shows that "internal" cognitive practices and "external" tools and materials are not so easily separated as they are in Simon's principle of bounded rationality--the latter often shaping the dynamics of the former. PMID:26685521
The abstract model of dynamic evolution based on services
NASA Astrophysics Data System (ADS)
Qian, Ye; Li, Tong; Li, Yunfei; Gu, Hongxing
2012-01-01
Service-oriented software systems face a challenge in adapting themselves promptly to the evolving Internet environment and changing user requirements. In this paper, a new way to describe the dynamic evolution of services according to the 3C mode (Will 1990) is proposed, and an extended workflow net is used to describe the abstract model of dynamic evolution of services, from the specific functional domain defined in this paper up to the whole system.
Situation models, mental simulations, and abstract concepts in discourse comprehension.
Zwaan, Rolf A
2016-08-01
This article sets out to examine the role of symbolic and sensorimotor representations in discourse comprehension. It starts out with a review of the literature on situation models, showing how mental representations are constrained by linguistic and situational factors. These ideas are then extended to more explicitly include sensorimotor representations. Following Zwaan and Madden (2005), the author argues that sensorimotor and symbolic representations mutually constrain each other in discourse comprehension. These ideas are then developed further to propose two roles for abstract concepts in discourse comprehension. It is argued that they serve as pointers in memory, used (1) cataphorically to integrate upcoming information into a sensorimotor simulation, or (2) anaphorically to integrate previously presented information into a sensorimotor simulation. In either case, the sensorimotor representation is a specific instantiation of the abstract concept. PMID:26088667
Directory of Energy Information Administration model abstracts 1988
Not Available
1988-01-01
This directory contains descriptions about each basic and auxiliary model, including the title, acronym, purpose, and type, followed by more detailed information on characteristics, uses, and requirements. For developing models, limited information is provided. Sources for additional information are identified. Included in this directory are 44 EIA models active as of February 1, 1988; 16 of which operate on personal computers. Models that run on personal computers are identified by ''PC'' as part of the acronyms. The main body of this directory is an alphabetical listing of all basic and auxiliary EIA models. Appendix A identifies major EIA modeling systems and the models within these systems, and Appendix B identifies EIA models by type (basic or auxiliary). Appendix C lists developing models and contact persons for those models. A basic model is one designated by the EIA Administrator as being sufficiently important to require sustained support and public scrutiny. An auxiliary model is one designated by the EIA Administrator as being used only occasionally in analyses, and therefore requires minimal levels of documentation. A developing model is one designated by the EIA Administrator as being under development and yet of sufficient interest to require a basic level of documentation at a future date. EIA also leases models developed by proprietary software vendors. Documentation for these ''proprietary'' models is the responsibility of the companies from which they are leased. EIA has recently leased models from Chase Econometrics, Inc., Data Resources, Inc. (DRI), the Oak Ridge National Laboratory (ORNL), and Wharton Econometric Forecasting Associates (WEFA). Leased models are not abstracted here.
Modeling quantum physics with machine learning
NASA Astrophysics Data System (ADS)
Lopez-Bezanilla, Alejandro; Arsenault, Louis-Francois; Millis, Andrew; Littlewood, Peter; von Lilienfeld, Anatole
2014-03-01
Machine Learning (ML) is a systematic way of inferring new results from sparse information. It directly allows for the resolution of computationally expensive sets of equations by making sense of accumulated knowledge, and it is therefore an attractive method for providing computationally inexpensive 'solvers' for some of the important systems of condensed matter physics. In this talk a non-linear regression statistical model is introduced to demonstrate the utility of ML methods in solving quantum-physics-related problems, and is applied to the calculation of electronic transport in 1D channels. DOE contract number DE-AC02-06CH11357.
Entity-Centric Abstraction and Modeling Framework for Transportation Architectures
NASA Technical Reports Server (NTRS)
Lewe, Jung-Ho; DeLaurentis, Daniel A.; Mavris, Dimitri N.; Schrage, Daniel P.
2007-01-01
A comprehensive framework for representing transportation architectures is presented. After discussing a series of preceding perspectives and formulations, the intellectual underpinning of the novel framework using an entity-centric abstraction of transportation is described. The entities include endogenous and exogenous factors, and functional expressions are offered that relate these and their evolution. The end result is a Transportation Architecture Field which permits analysis of future concepts from a holistic perspective. A simulation model which stems from the framework is presented and exercised, producing results which quantify improvements in air transportation due to advanced aircraft technologies. Finally, a modeling hypothesis and its accompanying criteria are proposed to test further use of the framework for evaluating new transportation solutions.
Modeling and analysis of pulse electrochemical machining
NASA Astrophysics Data System (ADS)
Wei, Bin
Pulse Electrochemical Machining (PECM) is a potentially cost effective technology meeting the increasing needs of precision manufacturing of superalloys, like titanium alloys, into complex shapes such as turbine airfoils. This dissertation reports: (1) an assessment of the worldwide state-of-the-art PECM research and industrial practice; (2) PECM process model development; (3) PECM of a superalloy (Ti-6Al-4V); and (4) key issues in future PECM research. The assessment focuses on identifying dimensional control problems with continuous ECM and how PECM can offer a solution. Previous research on PECM system design, process mechanisms, and dimensional control is analysed, leading to a clearer understanding of key issues in PECM development such as process characterization and modeling. New interelectrode gap dynamic models describing the gap evolution with time are developed for different PECM processes with an emphasis on the frontal gaps and a typical two-dimensional case. A 'PECM cosine principle' and several tool design formulae are also derived. PECM processes are characterized using concepts such as quasi-equilibrium gap and dissolution localization. Process simulation is performed to evaluate the effects of process inputs on dimensional accuracy control. Analysis is made on three types (single-phase, homogeneous, and inhomogeneous) of models concerning the physical processes (such as the electrolyte flow, Joule heating, and bubble generation) in the interelectrode gap. A physical model is introduced for the PECM with short pulses, which addresses the effect of electrolyte conductivity change on anodic dissolution. PECM of the titanium alloy is studied from a new perspective on the influence of pulsating currents on surface quality and dimension control. An experimental methodology is developed to acquire instantaneous currents and to accurately measure the coefficient of machinability. The influence of pulse parameters on the surface passivation is explained based
Technology Transfer Automated Retrieval System (TEKTRAN)
This report describes the data and preliminary modeling to develop a case study of model abstraction application at the watershed scale. Model abstraction is defined as the methodology for reducing the complexity of a simulation model while maintaining the validity of the simulation results with res...
Abi-Haidar, Alaa; Kaur, Jasleen; Maguitman, Ana; Radivojac, Predrag; Rechtsteiner, Andreas; Verspoor, Karin; Wang, Zhiping; Rocha, Luis M
2008-01-01
Background: We participated in three of the protein-protein interaction subtasks of the Second BioCreative Challenge: classification of abstracts relevant for protein-protein interaction (interaction article subtask [IAS]), discovery of protein pairs (interaction pair subtask [IPS]), and identification of text passages characterizing protein interaction (interaction sentences subtask [ISS]) in full-text documents. We approached the abstract classification task with a novel, lightweight linear model inspired by spam detection techniques, as well as an uncertainty-based integration scheme. We also used a support vector machine and singular value decomposition on the same features for comparison purposes. Our approach to the full-text subtasks (protein pair and passage identification) includes a feature expansion method based on word proximity networks. Results: Our approach to the abstract classification task (IAS) was among the top submissions for this task in terms of measures of performance used in the challenge evaluation (accuracy, F-score, and area under the receiver operating characteristic curve). We also report on a web tool that we produced using our approach: the Protein Interaction Abstract Relevance Evaluator (PIARE). Our approach to the full-text tasks resulted in one of the highest recall rates as well as mean reciprocal rank of correct passages. Conclusion: Our approach to abstract classification shows that a simple linear model, using relatively few features, can generalize and uncover the conceptual nature of protein-protein interactions from the bibliome. Because the novel approach is based on a rather lightweight linear model, it can easily be ported and applied to similar problems. In full-text problems, the expansion of word features with word proximity networks is shown to be useful, although the need for some improvements is discussed. PMID:18834489
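The "lightweight linear model" idea — per-word weights learned from labelled abstracts, summed to score a new abstract — can be sketched as below. The log-odds weighting used here is a common spam-filter scheme chosen for illustration, not the authors' exact formulation; the toy corpus is invented.

```python
# Minimal spam-filter-style linear classifier for abstract relevance:
# each word gets a smoothed log-odds weight from labelled documents,
# and an abstract is scored by summing the weights of its words.
import math
from collections import Counter

def train(docs, labels):
    """docs: list of token lists; labels: 1 (relevant) / 0 (irrelevant)."""
    pos, neg = Counter(), Counter()
    for toks, y in zip(docs, labels):
        (pos if y == 1 else neg).update(set(toks))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    vocab = set(pos) | set(neg)
    # add-one smoothed log-odds weight per word
    return {w: math.log((pos[w] + 1) / (n_pos + 2))
              - math.log((neg[w] + 1) / (n_neg + 2)) for w in vocab}

def score(weights, toks):
    """Positive score = classified as relevant."""
    return sum(weights.get(t, 0.0) for t in set(toks))

docs = [["protein", "interaction", "binds"], ["protein", "interaction"],
        ["gene", "expression"], ["cell", "expression"]]
weights = train(docs, [1, 1, 0, 0])
```

Because the model is just a weight dictionary, it ports easily to similar text-classification problems, which is the portability point the conclusion makes.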
Information Model for Machine-Tool-Performance Tests
Lee, Y. Tina; Soons, Johannes A.; Donmez, M. Alkan
2001-01-01
This report specifies an information model of machine-tool-performance tests in the EXPRESS [1] language. The information model provides a mechanism for describing the properties and results of machine-tool-performance tests. The objective of the information model is a standardized, computer-interpretable representation that allows for efficient archiving and exchange of performance test data throughout the life cycle of the machine. The report also demonstrates the implementation of the information model using three different implementation methods.
Prototype-based models in machine learning.
Biehl, Michael; Hammer, Barbara; Villmann, Thomas
2016-01-01
An overview is given of prototype-based models in machine learning. In this framework, observations, i.e., data, are stored in terms of typical representatives. Together with a suitable measure of similarity, the systems can be employed in the context of unsupervised and supervised analysis of potentially high-dimensional, complex datasets. We discuss basic schemes of competitive vector quantization as well as the so-called neural gas approach and Kohonen's topology-preserving self-organizing map. Supervised learning in prototype systems is exemplified in terms of learning vector quantization. Most frequently, the familiar Euclidean distance serves as a dissimilarity measure. We present extensions of the framework to nonstandard measures and give an introduction to the use of adaptive distances in relevance learning. PMID:26800334
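Learning vector quantization, the supervised scheme the overview exemplifies, reduces to a very small update rule: attract the winning prototype toward same-class samples and repel it from different-class ones, using the familiar Euclidean distance. The sketch below is LVQ1 on invented 2D data.

```python
# Minimal LVQ1 sketch: prototypes are updated toward (same class) or
# away from (different class) each training sample's nearest prototype,
# with squared Euclidean distance as the dissimilarity measure.

def lvq1(samples, labels, prototypes, proto_labels, lr=0.1, epochs=20):
    """Update prototypes in place; each is a list of coordinates."""
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # find the nearest prototype (winner)
            d = [sum((a - b) ** 2 for a, b in zip(x, p)) for p in prototypes]
            w = d.index(min(d))
            sign = 1.0 if proto_labels[w] == y else -1.0   # attract / repel
            prototypes[w] = [p + sign * lr * (a - p)
                             for a, p in zip(x, prototypes[w])]
    return prototypes

samples = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
labels = [0, 0, 1, 1]
protos = lvq1(samples, labels, [[0.4, 0.4], [0.6, 0.6]], [0, 1])
```

Relevance learning, mentioned at the end of the abstract, replaces the fixed Euclidean metric with an adaptive weighted distance learned alongside the prototypes.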
9. VIEW, LOOKING SOUTH, OF INTERLOCKING MACHINE, WITH ORIGINAL MODEL ...
9. VIEW, LOOKING SOUTH, OF INTERLOCKING MACHINE, WITH ORIGINAL MODEL BOARD IN CENTER, NEW MODEL BOARD AT LEFT AND MODEL SEMAPHORES AT TOP OF PHOTOGRAPH, THIRD FLOOR - South Station Tower No. 1 & Interlocking System, Dewey Square, Boston, Suffolk County, MA
Modeling of cumulative tool wear in machining metal matrix composites
Hung, N.P.; Tan, V.K.; Oon, B.E.
1995-12-31
Metal matrix composites (MMCs) are notoriously known for their low machinability because of the abrasive and brittle reinforcement. Although a near-net-shape product could be produced, finish machining is still required for the final shape and dimension. The classical Taylor's tool life equation that relates tool life and cutting conditions has been traditionally used to study machinability. The turning operation is commonly used to investigate the machinability of a material; tedious and costly milling experiments have to be performed separately; while a facing test is not applicable for the Taylor's model since the facing speed varies as the tool moves radially. Collecting intensive machining data for MMCs is often difficult because of the constraints on size, cost of the material, and the availability of sophisticated machine tools. A more flexible model and machinability testing technique are, therefore, sought. This study presents and verifies new models for turning, facing, and milling operations. Different cutting conditions were utilized to assess the machinability of MMCs reinforced with silicon carbide or alumina particles. Experimental data show that tool wear does not depend on the order of different cutting speeds since abrasion is the main wear mechanism. Correlation between data for turning, milling, and facing is presented. It is more economical to rank machinability using data for facing and then to convert the data for turning and milling, if required. Subsurface damages such as work-hardened and cracked matrix alloy, and fractured and delaminated particles are discussed.
8. VIEW, LOOKING NORTH, OF INTERLOCKING MACHINE WITH ORIGINAL MODEL ...
8. VIEW, LOOKING NORTH, OF INTERLOCKING MACHINE WITH ORIGINAL MODEL BOARD IN CENTER AND MODEL SEMAPHORE SIGNALS (AT TOP OF PHOTOGRAPH), THIRD FLOOR - South Station Tower No. 1 & Interlocking System, Dewey Square, Boston, Suffolk County, MA
Burtis, M.D.; Razuvaev, V.N.; Sivachok, S.G.
1996-10-01
This report presents English-translated abstracts of important Russian-language literature concerning general circulation models as they relate to climate change. In addition to the bibliographic citations and abstracts translated into English, this report presents the original citations and abstracts in Russian. Author and title indexes are included to assist the reader in locating abstracts of particular interest.
A probabilistic approach to aggregate induction machine modeling
Stankovic, A.M.; Lesieutre, B.C.
1996-11-01
In this paper the authors pursue probabilistic aggregate dynamical models for n identical induction machines connected to a bus, capturing the effect of different mechanical inputs to the individual machines. The authors explore model averaging and review in detail four procedures for linear models. They describe linear systems depending upon stochastic parameters, and develop a theoretical justification for a very simple and reasonably accurate averaging method. They then extend this to the nonlinear model. Finally, they use a recently introduced notion of the stochastic norm to describe a cluster of induction machines undergoing multiple simultaneous parametric variations, and obtain useful and very mildly conservative bounds on eigenstructure perturbations under multiple simultaneous parametric variations.
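The justification for simple averaging can be seen in miniature with a linear toy system: for machines that differ only in their input, the aggregate of the individual responses equals the response of one averaged machine driven by the mean input. The first-order model and values below are invented for illustration; the exactness shown holds only in the linear case.

```python
# Toy aggregate-averaging illustration: n identical first-order
# "machines" dx/dt = -a*x + u_i differ only in their mechanical input
# u_i. For a linear model, the mean of the individual steady states
# equals the steady state of a single machine driven by the mean input.

def steady_state(a, u):
    return u / a                      # solves 0 = -a*x + u

def aggregate_exact(a, inputs):
    """Average the individual machines' steady states."""
    return sum(steady_state(a, u) for u in inputs) / len(inputs)

def aggregate_averaged(a, inputs):
    """One averaged machine driven by the mean input."""
    u_mean = sum(inputs) / len(inputs)
    return steady_state(a, u_mean)

inputs = [0.8, 1.0, 1.2, 1.4]        # distinct mechanical inputs
exact = aggregate_exact(2.0, inputs)
approx = aggregate_averaged(2.0, inputs)
```

For a nonlinear steady-state map the two quantities differ, which is why the extension to the nonlinear induction machine model requires the more careful analysis the paper develops.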
Limit model of electrochemical dimensional machining of metals
NASA Astrophysics Data System (ADS)
Zhitnikov, V. P.; Oshmarina, E. M.; Porechny, S. S.; Fedorova, G. I.
2014-07-01
The method of precision electrochemical machining is studied by using a model in which the current output has the form of a step function of current density. The problems of maximum stationary and quasistationary machining are formulated and solved, which made it possible to study the nonstationary process with sufficient accuracy.
Model Machine Shop for Drafting Instruction.
ERIC Educational Resources Information Center
Jackson, Carl R.
The development and implementation of a two-year interdisciplinary course integrating a machine shop and drafting curriculum are described in the report. The purpose of the course is to provide a learning process in industrial drafting featuring identifiable orientation in skills that will enable the student to develop competencies that are…
Context in Models of Human-Machine Systems
NASA Technical Reports Server (NTRS)
Callantine, Todd J.; Null, Cynthia H. (Technical Monitor)
1998-01-01
All human-machine systems models represent context. This paper proposes a theory of context through which models may be usefully related and integrated for design. The paper presents examples of context representation in various models, describes an application to developing models for the Crew Activity Tracking System (CATS), and advances context as a foundation for integrated design of complex dynamic systems.
Developing a PLC-friendly state machine model: lessons learned
NASA Astrophysics Data System (ADS)
Pessemier, Wim; Deconinck, Geert; Raskin, Gert; Saey, Philippe; Van Winckel, Hans
2014-07-01
Modern Programmable Logic Controllers (PLCs) have become an attractive platform for controlling real-time aspects of astronomical telescopes and instruments due to their increased versatility, performance and standardization. Likewise, vendor-neutral middleware technologies such as OPC Unified Architecture (OPC UA) have recently demonstrated that they can greatly facilitate the integration of these industrial platforms into the overall control system. Many practical questions arise, however, when building multi-tiered control systems that consist of PLCs for low level control, and conventional software and platforms for higher level control. How should the PLC software be structured, so that it can rely on well-known programming paradigms on the one hand, and be mapped to a well-organized OPC UA interface on the other hand? Which programming languages of the IEC 61131-3 standard closely match the problem domains of the abstraction levels within this structure? How can the recent additions to the standard (such as the support for namespaces and object-oriented extensions) facilitate a model based development approach? To what degree can our applications already take advantage of the more advanced parts of the OPC UA standard, such as the high expressiveness of the semantic modeling language that it defines, or the support for events, aggregation of data, automatic discovery, ... ? What are the timing and concurrency problems to be expected for the higher level tiers of the control system due to the cyclic execution of control and communication tasks by the PLCs? We try to answer these questions by demonstrating a semantic state machine model that can readily be implemented using IEC 61131 and OPC UA. One that does not aim to capture all possible states of a system, but rather one that attempts to organize the course-grained structure and behaviour of a system. In this paper we focus on the intricacies of this seemingly simple task, and on the lessons that we
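The coarse-grained state machine the paper advocates can be sketched with explicit states and guarded transitions evaluated once per cyclic "scan", the PLC execution pattern. Python stands in here for IEC 61131-3 Structured Text, and the state and event names are invented.

```python
# Coarse-grained state machine with a PLC-style cyclic scan: each scan
# applies at most one transition from a (state, event) table, which maps
# naturally onto both IEC 61131-3 code and an OPC UA state variable.

class StateMachine:
    def __init__(self, initial, transitions):
        # transitions: {(state, event): next_state}
        self.state = initial
        self.transitions = transitions

    def scan(self, event):
        """One cycle: apply a matching transition (or stay), return state."""
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

sm = StateMachine("IDLE", {
    ("IDLE", "start"): "MOVING",
    ("MOVING", "stop"): "IDLE",
    ("MOVING", "fault"): "ERROR",
    ("ERROR", "reset"): "IDLE",
})
```

Note that unmatched events are simply ignored, which mirrors the robustness requirement of cyclic PLC tasks: a scan must always complete, whatever arrives.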
Predicting Market Impact Costs Using Nonparametric Machine Learning Models.
Park, Saerom; Lee, Jaewook; Son, Youngdoo
2016-01-01
Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed the state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide the predictive model that is versatile in the number of variables. We collected a large amount of real single transaction data of US stock market from Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a-state-of-the-art benchmark parametric model such as I-star model in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives in reducing transaction costs by considerably improving in prediction performance. PMID:26926235
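The nonparametric idea — predict impact cost from trade features without assuming a parametric form like the I-star model — can be illustrated with k-nearest-neighbour regression. kNN is used here only as a simple stand-in for the GP/SVR/neural models the study actually compares, and the data are synthetic.

```python
# k-nearest-neighbour regression as a minimal nonparametric predictor:
# the impact cost of a new trade is the average cost of the k most
# similar historical trades (Euclidean distance in feature space).

def knn_predict(train_x, train_y, x, k=3):
    """Average the targets of the k nearest training points."""
    order = sorted(range(len(train_x)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(train_x[i], x)))
    nearest = order[:k]
    return sum(train_y[i] for i in nearest) / k

# Synthetic data: features are (relative order size, volatility) and
# impact grows with both -- purely illustrative numbers.
train_x = [[s, v] for s in (0.01, 0.05, 0.1, 0.2) for v in (0.1, 0.3)]
train_y = [10 * s + 2 * v for s, v in train_x]
pred = knn_predict(train_x, train_y, [0.05, 0.2])
```

The appeal of such models is exactly what the abstract notes: they stay versatile as variables are added, at the cost of not separating permanent and temporary impact components explicitly.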
(abstract) Simple Spreadsheet Thermal Models for Cryogenic Applications
NASA Technical Reports Server (NTRS)
Nash, A. E.
1994-01-01
Self consistent circuit analog thermal models, that can be run in commercial spreadsheet programs on personal computers, have been created to calculate the cooldown and steady state performance of cryogen cooled Dewars. The models include temperature dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the Cryogenic Telescope Test Facility (CTTF). The facility will be on line in early 1995 for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm as well as a comparison of the model predictions and actual performance of this facility will be presented.
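The circuit-analog approach maps thermal nodes to capacitors and conduction paths to conductances, stepped explicitly in time — exactly the update a spreadsheet row can compute. The two-node chain and values below are illustrative, not the CTTF model, and temperature-dependent conduction and radiation are omitted for brevity.

```python
# Explicit-Euler step for a chain of thermal RC nodes: heat flows
# between neighbours through conductances G, and the last node is tied
# to a cold sink, as in a cryogen-cooled Dewar cooldown calculation.

def step(T, C, G, T_sink, G_sink, dt):
    """Advance node temperatures one time step.

    T : node temperatures (K);  C : heat capacities (J/K)
    G : conductances between neighbours (W/K); last node ties to T_sink.
    """
    Tn = list(T)
    for i in range(len(T)):
        q = 0.0
        if i > 0:
            q += G[i - 1] * (T[i - 1] - T[i])   # heat from previous node
        if i < len(T) - 1:
            q += G[i] * (T[i + 1] - T[i])       # heat from next node
        else:
            q += G_sink * (T_sink - T[i])       # last node sees the sink
        Tn[i] = T[i] + dt * q / C[i]
    return Tn

T = [300.0, 300.0]                    # start at room temperature
for _ in range(20000):                # cool toward a 77 K sink
    T = step(T, C=[50.0, 50.0], G=[1.0], T_sink=77.0, G_sink=1.0, dt=1.0)
```

Making `G` a function of the local temperatures at each step is how temperature-dependent conduction and radiation enter the same scheme.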
Symbolic LTL Compilation for Model Checking: Extended Abstract
NASA Technical Reports Server (NTRS)
Rozier, Kristin Y.; Vardi, Moshe Y.
2007-01-01
In Linear Temporal Logic (LTL) model checking, we check LTL formulas representing desired behaviors against a formal model of the system designed to exhibit these behaviors. To accomplish this task, the LTL formulas must be translated into automata [21]. We focus on LTL compilation by investigating LTL satisfiability checking via a reduction to model checking. Having shown that symbolic LTL compilation algorithms are superior to explicit automata construction algorithms for this task [16], we concentrate here on seeking a better symbolic algorithm. We present experimental data comparing algorithmic variations such as normal forms, encoding methods, and variable ordering and examine their effects on performance metrics including processing time and scalability. Safety critical systems, such as air traffic control, life support systems, hazardous environment controls, and automotive control systems, pervade our daily lives, yet testing and simulation alone cannot adequately verify their reliability [3]. Model checking is a promising approach to formal verification for safety critical systems which involves creating a formal mathematical model of the system and translating desired safety properties into a formal specification for this model. The complement of the specification is then checked against the system model. When the model does not satisfy the specification, model-checking tools accompany this negative answer with a counterexample, which points to an inconsistency between the system and the desired behaviors and aids debugging efforts.
Modeling situated abstraction : action coalescence via multidimensional coherence.
Sallach, D. L.; Decision and Information Sciences; Univ. of Chicago
2007-01-01
Situated social agents weigh dozens of priorities, each with its own complexities. Domains of interest are intertwined, and progress in one area either complements or conflicts with other priorities. Interpretive agents address these complexities through: (1) integrating cognitive complexities through the use of radial concepts, (2) recognizing the role of emotion in prioritizing alternatives and urgencies, (3) using Miller-range constraints to avoid oversimplified notions of omniscience, and (4) constraining actions to 'moves' in multiple prototype games. Situated agent orientations are dynamically grounded in pragmatic considerations as well as intertwined with internal and external priorities. HokiPoki is a situated abstraction designed to shape and focus strategic agent orientations. The design integrates four pragmatic pairs: (1) problem and solution, (2) dependence and power, (3) constraint and affordance, and (4) (agent) intent and effect. In this way, agents are empowered to address multiple facets of a situation in an exploratory, or even arbitrary, order. HokiPoki is open to the internal orientation of the agent as it evolves, but also to the communications and actions of other agents.
Particle Tracking Model and Abstraction of Transport Processes
B. Robinson
2000-04-07
The purpose of the transport methodology and component analysis is to provide the numerical methods for simulating radionuclide transport and model setup for transport in the unsaturated zone (UZ) site-scale model. The particle-tracking method of simulating radionuclide transport is incorporated into the FEHM computer code and the resulting changes in the FEHM code are to be submitted to the software configuration management system. This Analysis and Model Report (AMR) outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the unsaturated zone at Yucca Mountain. In addition, methods for determining colloid-facilitated transport parameters are outlined for use in the Total System Performance Assessment (TSPA) analyses. Concurrently, process-level flow model calculations are being carried out in a PMR for the unsaturated zone. The computer code TOUGH2 is being used to generate three-dimensional, dual-permeability flow fields that are supplied to the Performance Assessment group for subsequent transport simulations. These flow fields are converted to input files compatible with the FEHM code, which for this application simulates radionuclide transport using the particle-tracking algorithm outlined in this AMR. Therefore, this AMR establishes the numerical method and demonstrates the use of the model, but the specific breakthrough curves presented do not necessarily represent the behavior of the Yucca Mountain unsaturated zone.
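The basic particle-tracking idea can be sketched as a one-dimensional advection-dispersion random walk with a breakthrough count at an observation plane; velocity and dispersion values are illustrative, and the FEHM algorithm itself is far richer (dual permeability, matrix diffusion, sorption).

```python
import random

def track(n_particles, v, D, dt, steps, rng=random):
    """1-D random-walk particle tracking: x += v*dt + N(0, sqrt(2*D*dt))."""
    sigma = (2.0 * D * dt) ** 0.5
    xs = [0.0] * n_particles
    for _ in range(steps):
        xs = [x + v * dt + rng.gauss(0.0, sigma) for x in xs]
    return xs

def breakthrough_fraction(xs, x_obs):
    # fraction of particles that have passed the observation plane
    return sum(1 for x in xs if x >= x_obs) / len(xs)
```

With velocity 1 and total time 10, the particle cloud should be centred near x = 10 and roughly half of it should have crossed that plane.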
A model for the synchronous machine using frequency response measurements
Bacalao, N.J.; Arizon, P. de; Sanchez L., R.O.
1995-02-01
This paper presents new techniques to improve the accuracy and speed of modeling synchronous machines in stability and transient studies. The proposed model uses frequency responses as input data, obtained either directly from measurements or calculated from the available data. The new model is flexible, as it allows changes in the detail in which the machine is represented, and it is possible to partly compensate for the numerical errors incurred when using large integration time steps. The model can be used in transient stability and electromagnetic transient studies such as secondary arc evaluation, load rejection and sub-synchronous resonance.
Modelling machine ensembles with discrete event dynamical system theory
NASA Technical Reports Server (NTRS)
Hunter, Dan
1990-01-01
Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks for a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states, an event alphabet that portrays actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or implementing a feedback DEDS controller (closed-loop control).
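The construction of a global model from local models can be sketched as the standard synchronous product: shared events must be enabled in both machines, private events interleave. Timing is omitted and the example machines are invented for illustration.

```python
def compose(m1, m2):
    """Synchronous product of two local models, each given as
    (event alphabet, partial transition map {(state, event): next}, initial state)."""
    (E1, d1, x01), (E2, d2, x02) = m1, m2
    states, delta = set(), {}
    frontier = [(x01, x02)]
    while frontier:
        s = frontier.pop()
        if s in states:
            continue
        states.add(s)
        for e in E1 | E2:
            # a machine blocks a shared event it cannot currently take (None);
            # an event outside a machine's alphabet leaves that machine unchanged
            n1 = d1.get((s[0], e)) if e in E1 else s[0]
            n2 = d2.get((s[1], e)) if e in E2 else s[1]
            if n1 is not None and n2 is not None:
                delta[(s, e)] = (n1, n2)
                frontier.append((n1, n2))
    return states, E1 | E2, delta, (x01, x02)
```

Composing a loader that only knows 'load' with a cutter that alternates 'load'/'cut' yields a three-state global model in which 'cut' can fire without the loader moving.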
Component based modelling of piezoelectric ultrasonic actuators for machining applications
NASA Astrophysics Data System (ADS)
Saleem, A.; Salah, M.; Ahmed, N.; Silberschmidt, V. V.
2013-07-01
Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finish in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are used to vibrate the cutting tip at a predetermined amplitude and frequency while machining. However, modelling and simulation of these transducers is a tedious and difficult task, due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole system model is accomplished by aggregating the models of the basic components. System parameters are identified using the finite element technique, and the resulting model is then used to simulate the system in Matlab/SIMULINK. Various operating conditions are tested to demonstrate the system performance.
Liao, C; Quinlan, D; Panas, T
2009-10-06
General purpose languages, such as C++, permit the construction of various high level abstractions to hide redundant, low level details and accelerate programming productivity. Example abstractions include functions, data structures, classes, templates and so on. However, the use of abstractions significantly impedes static code analyses and optimizations, including parallelization, applied to the abstractions' complex implementations. As a result, there is a common perception that performance is inversely proportional to the level of abstraction. On the other hand, programming large scale, possibly heterogeneous high-performance computing systems is notoriously difficult, and programmers are less likely to abandon the help from high level abstractions when solving real-world, complex problems. Therefore, the need for programming models balancing both programming productivity and execution performance has reached a new level of criticality. We are exploring a novel abstraction-friendly programming model in order to support high productivity and high performance computing. We believe that standard or domain-specific semantics associated with high level abstractions can be exploited to aid compiler analyses and optimizations, thus helping to achieve high performance without losing high productivity. We encode representative abstractions and their useful semantics into an abstraction specification file. In the meantime, an accessible, source-to-source compiler infrastructure (the ROSE compiler) is used to facilitate recognizing high level abstractions and utilizing their semantics for more optimization opportunities. Our initial work has shown that recognizing abstractions and knowing their semantics within a compiler can dramatically extend the applicability of existing optimizations, including automatic parallelization. Moreover, a new set of optimizations has become possible within an abstraction-friendly and semantics-aware programming model. In the future, we will
Technology Transfer Automated Retrieval System (TEKTRAN)
Improving strategies for monitoring subsurface contaminant transport includes performance comparison of competing models, developed independently or obtained via model abstraction. Model comparison and parameter discrimination involve specific performance indicators selected to better understand s...
Abstracting the principles of development using imaging and modeling
Xiong, Fengzhu; Megason, Sean G.
2015-01-01
Summary: Here we look at modern developmental biology with a focus on the relationship between different approaches of investigation. We argue that direct imaging is a powerful approach not only for obtaining descriptive information but also for model generation and testing that lead to mechanistic insights. Modeling, on the other hand, conceptualizes imaging data and provides guidance to perturbations. The inquiry progresses most efficiently when a trinity of approaches—quantitative imaging (measurement), modeling (theory) and perturbation (test)—are pursued in concert, but not when one approach is dominant. Using recent studies of the zebrafish system, we show how this combination has effectively advanced classic topics in developmental biology compared to a perturbation-centric approach. Finally, we show that interdisciplinary expertise and perhaps specialization are necessary for carrying out a systematic approach, and discuss the technical hurdles. PMID:25946995
Committee of machine learning predictors of hydrological models uncertainty
NASA Astrophysics Data System (ADS)
Kayastha, Nagendra; Solomatine, Dimitri
2014-05-01
In prediction of uncertainty based on machine learning methods, the results of various sampling schemes, namely Monte Carlo sampling (MCS), generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), the shuffled complex evolution metropolis algorithm (SCEMUA), differential evolution adaptive metropolis (DREAM), particle swarm optimization (PSO) and adaptive cluster covering (ACCO) [1], are used to build predictive models. These models predict the uncertainty (quantiles of the pdf) of a deterministic output from a hydrological model [2]. Inputs to these models are the specially identified representative variables (past event precipitation and flows). The trained machine learning models are then employed to predict the model output uncertainty which is specific for the new input data. For each sampling scheme, three machine learning methods, namely artificial neural networks, model trees, and locally weighted regression, are applied to predict output uncertainties. The problem here is that different sampling algorithms result in different data sets used to train different machine learning models, which leads to several models (21 predictive uncertainty models). There is no clear evidence which model is the best, since there is no basis for comparison. A solution could be to form a committee of all models and to use a dynamic averaging scheme to generate the final output [3]. This approach is applied to estimate the uncertainty of streamflow simulations from a conceptual hydrological model HBV in the Nzoia catchment in Kenya. [1] N. Kayastha, D. L. Shrestha and D. P. Solomatine. Experiments with several methods of parameter uncertainty estimation in hydrological modeling. Proc. 9th Intern. Conf. on Hydroinformatics, Tianjin, China, September 2010. [2] D. L. Shrestha, N. Kayastha, D. P. Solomatine, and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press
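A minimal committee with dynamic averaging can be sketched as below; the inverse-error weighting rule is an assumption for illustration, not necessarily the scheme of [3].

```python
def committee_predict(predictions, recent_errors, eps=1e-9):
    """Weighted average of committee member predictions; members with lower
    recent error receive proportionally higher weight."""
    weights = [1.0 / (e + eps) for e in recent_errors]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, predictions)) / total
```

Two members predicting 10 and 20 with recent errors 1 and 3 get weights 1 and 1/3, giving a committee output of 12.5.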
Simulation model for Vuilleumier cycle machines and analysis of characteristics
NASA Astrophysics Data System (ADS)
Sekiya, Hiroshi; Terada, Fusao
1992-11-01
Numerical analysis using the computer is useful in predicting and evaluating the performance of the Vuilleumier (VM) cycle machine in research and development. The 3rd-order method must be employed particularly in the case of detailed analysis of performance and design optimization. This paper describes our simulation model for the VM machine, which is based on that method. The working space is divided into thirty-eight control volumes for the VM heat pump test machine, and the fundamental equations are derived rigorously by applying the conservative equations of mass, momentum, and energy to each control volume, using staggered mesh. These equations are solved simultaneously by the Adams-Moulton method. Then, the test machine is investigated in terms of the pressure and temperature fluctuations of the working gas, the energy flow, and the performance at each speed of revolution. The calculated results are examined in comparison with the experimental ones.
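The implicit Adams-Moulton time stepping can be illustrated on a scalar test equation using the one-step member of the family (the trapezoidal rule), with the implicit equation solved by fixed-point iteration; the actual model couples thirty-eight control volumes, but the stepping idea is the same.

```python
import math

def adams_moulton_step(f, t, y, h, iters=50):
    """One-step Adams-Moulton (trapezoidal rule): implicit in y_new,
    solved here by simple fixed-point iteration."""
    y_new = y  # predictor: start the iteration from the current value
    for _ in range(iters):
        y_new = y + 0.5 * h * (f(t, y) + f(t + h, y_new))
    return y_new

def integrate(f, y0, t0, t1, n):
    # march from t0 to t1 in n implicit steps
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = adams_moulton_step(f, t, y, h)
        t += h
    return y
```

For dy/dt = -y with y(0) = 1, the result at t = 1 should match exp(-1) to second-order accuracy in the step size.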
Phase Transitions in a Model of Y-Molecules
NASA Astrophysics Data System (ADS)
Holz, Danielle; Ruth, Donovan; Toral, Raul; Gunton, James
Immunoglobulin is a Y-shaped molecule that functions as an antibody to neutralize pathogens. In special cases where there is a high concentration of immunoglobulin molecules, self-aggregation can occur and the molecules undergo phase transitions. This prevents the molecules from completing their function. We used a simplified model of two-dimensional Y-molecules with three identical arms on a triangular lattice, studied in the two-dimensional grand canonical ensemble. The molecules were permitted to be placed, removed, rotated or moved on the lattice. Once phase coexistence was found, we used histogram reweighting and multicanonical sampling to calculate our phase diagram.
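The grand canonical insertion/removal moves can be sketched for a much simpler system: point particles (not three-armed Y-molecules, and without rotation moves) with a nearest-neighbour attraction on a periodic triangular lattice; the temperature, coupling, and chemical potential values are illustrative.

```python
import math, random

def gcmc(L, beta, mu, eps, steps, rng=random):
    """Grand canonical Monte Carlo for a hard-core lattice gas with
    nearest-neighbour attraction eps per bond on an L x L triangular lattice."""
    occupied = set()
    sites = [(i, j) for i in range(L) for j in range(L)]
    nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]  # 6 neighbours
    for _ in range(steps):
        i, j = rng.choice(sites)
        k = sum(((i + di) % L, (j + dj) % L) in occupied for di, dj in nbrs)
        d = beta * (eps * k + mu)      # -(dE - mu*dN) for inserting at (i, j)
        if (i, j) in occupied:         # removal move: accept with min(1, exp(-d))
            if -d >= 0 or rng.random() < math.exp(-d):
                occupied.discard((i, j))
        else:                          # insertion move: accept with min(1, exp(d))
            if d >= 0 or rng.random() < math.exp(d):
                occupied.add((i, j))
    return len(occupied) / (L * L)     # density at the end of the run
```

At a strongly positive chemical potential, insertions dominate and the lattice fills up.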
Parallel phase model : a programming model for high-end parallel machines with manycores.
Wu, Junfeng; Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian
2009-04-01
This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.
A Model Expert System For Machine Failure Diagnosis (MED)
NASA Astrophysics Data System (ADS)
Liqun, Yin
1987-05-01
MED is a model expert system for machine failure diagnosis. MED can help the repairer quickly determine milling machine electrical failures. The key points in MED are: a simple method to deal with the "subsequent visit" problem in machine failure diagnosis; a weighted list to intervene in the control of AGENDA, to imitate an expert's continuous thinking process and to avoid the erratic questioning and problem drift caused by probabilistic reasoning; the structuralized AGENDA; and the characteristics of machine failure diagnosis and people's thinking patterns in failure diagnosis. The structuralized AGENDA gives an idea of how to supply a more powerful as well as flexible control strategy in best-first search by using AGENDA. The "subsequent visit" problem is a very complicated task to solve; it is convenient to deal with it using a simple method, to keep from consuming too much time in urgent situations. The weighted list also gives a method to improve control in the inference of an expert system. The characteristics of machine failure diagnosis and people's thinking patterns are both important for building a machine failure diagnosis expert system. When told the failure phenomena, MED can determine failure causes through dialogue. MED is written in LISP and runs on UNIVAC 1100/10 and IBM PC/XT computers. The average diagnosis time per failure is 11 seconds of CPU time and 2 minutes of terminal operation, compared with 11 minutes for a skilful repairer.
Hydro- abrasive jet machining modeling for computer control and optimization
NASA Astrophysics Data System (ADS)
Groppetti, R.; Jovane, F.
1993-06-01
Use of hydro-abrasive jet machining (HAJM) for machining a wide variety of materials—metals, polymers, ceramics, fiber-reinforced composites, metal-matrix composites, and bonded or hybridized materials—primarily for two- and three-dimensional cutting and also for drilling, turning, milling, and deburring, has been reported. However, the potential of this innovative process has not been explored fully. This article discusses process control, integration, and optimization of HAJM to establish a platform for the implementation of real-time adaptive control constraint (ACC), adaptive control optimization (ACO), and CAD/CAM integration. It presents the approach followed and the main results obtained during the development, implementation, automation, and integration of a HAJM cell and its computerized controller. After a critical analysis of the process variables and models reported in the literature to identify process variables and to define a process model suitable for HAJM real-time control and optimization, to correlate process variables and parameters with machining results, and to avoid expensive and time-consuming experiments for determination of the optimal machining conditions, a process prediction and optimization model was identified and implemented. Then, the configuration of the HAJM cell, architecture, and multiprogramming operation of the controller in terms of monitoring, control, process result prediction, and process condition optimization were analyzed. This prediction and optimization model for selection of optimal machining conditions using multi-objective programming was analyzed. Based on the definition of an economy function and a productivity function, with suitable constraints relevant to required machining quality, required kerfing depth, and available resources, the model was applied to test cases based on experimental results.
Closure modeling using field inversion and machine learning
NASA Astrophysics Data System (ADS)
Duraisamy, Karthik
2015-11-01
The recent acceleration in computational power and measurement resolution has made possible the availability of extreme scale simulations and data sets. In this work, a modeling paradigm that seeks to comprehensively harness large scale data is introduced, with the aim of improving closure models. Full-field inversion (in contrast to parameter estimation) is used to obtain corrective, spatially distributed functional terms, offering a route to directly address model-form errors. Once the inference has been performed over a number of problems that are representative of the deficient physics in the closure model, machine learning techniques are used to reconstruct the model corrections in terms of variables that appear in the closure model. These machine-learned functional forms are then used to augment the closure model in predictive computations. The approach is demonstrated to be able to successfully reconstruct functional corrections and yield predictions with quantified uncertainties in a range of turbulent flows.
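The two-stage idea above (invert for a spatially distributed correction, then learn it as a function of local features) can be sketched in one dimension with a made-up base model and truth; real field inversion is a regularized inverse problem over a PDE model, not the pointwise division used here.

```python
import numpy as np

# Hypothetical setting: high-fidelity data = beta(x) * base_model(x), so the
# model-form error is a multiplicative field beta(x) to be inferred and learned.
x = np.linspace(0.1, 3.0, 50)
base = np.sin(x)                       # deficient closure-model output
truth = (1.0 + 0.1 * x) * np.sin(x)    # high-fidelity reference data

beta = truth / base                    # step 1: "field inversion" (pointwise here)
coeffs = np.polyfit(x, beta, 2)        # step 2: learn beta as a function of features

def augmented_model(xq):
    # step 3: augment the closure model with the learned correction
    return np.polyval(coeffs, xq) * np.sin(xq)
```

Because the true correction is the linear field 1 + 0.1x, the quadratic fit recovers it essentially exactly, and the augmented model reproduces the truth.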
Control of discrete event systems modeled as hierarchical state machines
NASA Technical Reports Server (NTRS)
Brave, Y.; Heymann, M.
1991-01-01
The authors examine a class of discrete event systems (DESs) modeled as asynchronous hierarchical state machines (AHSMs). For this class of DESs, they provide an efficient method for testing reachability, which is an essential step in many control synthesis procedures. This method utilizes the asynchronous nature and hierarchical structure of AHSMs, thereby illustrating the advantage of the AHSM representation as compared with its equivalent (flat) state machine representation. An application of the method is presented where an online minimally restrictive solution is proposed for the problem of maintaining a controlled AHSM within prescribed legal bounds.
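Reachability testing reduces to graph search; a flat breadth-first search over an AHSM whose hierarchical states have been expanded to (superstate, substate) pairs might look like the sketch below. The efficiency claimed for the AHSM representation comes precisely from avoiding this flat expansion, so this is the baseline, not the paper's method; the example states are invented.

```python
from collections import deque

def reachable(delta, x0):
    """Breadth-first search over a transition relation.
    delta: dict mapping a state to an iterable of successor states."""
    seen, frontier = {x0}, deque([x0])
    while frontier:
        s = frontier.popleft()
        for t in delta.get(s, ()):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen
```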
Analytical model for force prediction when machining metal matrix composites
NASA Astrophysics Data System (ADS)
Sikder, Snahungshu
Metal Matrix Composites (MMC) offer several thermo-mechanical advantages over standard materials and alloys which make them better candidates in different applications. Their light weight, high stiffness, and strength have attracted several industries such as automotive, aerospace, and defence for their wide range of products. However, the widespread application of Metal Matrix Composites is still a challenge for industry. The hard and abrasive nature of the reinforcement particles is responsible for rapid tool wear and high machining costs. Fracture and debonding of the abrasive reinforcement particles are the considerable damage modes that directly influence the tool performance. It is therefore important to find a highly effective way to machine MMCs, and predicting the forces generated when machining Metal Matrix Composites helps in choosing the right tools and ultimately saves both money and time. This research presents an analytical force model for predicting the forces generated during machining of Metal Matrix Composites. In estimating the generated forces, several aspects of cutting mechanics were considered, including shearing force, ploughing force, and particle fracture force. The chip formation force was obtained from classical orthogonal metal cutting mechanics and the Johnson-Cook equation. The ploughing force was formulated, while the fracture force was calculated from slip line field theory and the Griffith theory of failure. The predicted results were compared with previously measured data. The results showed very good agreement between the theoretically predicted and experimentally measured cutting forces.
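The decomposition into chip-formation, ploughing, and fracture contributions can be sketched as follows. The Merchant shear-plane relation gives the chip-formation term; the ploughing and fracture terms are passed in as placeholder values standing in for the slip-line-field and Griffith expressions, and all numerical inputs in the test are hypothetical.

```python
import math

def chip_formation_force(tau_s, w, t, phi, beta, alpha):
    """Merchant orthogonal-cutting model: shear-plane force resolved into the
    cutting direction. tau_s: shear strength (Pa), w: width of cut (m),
    t: uncut chip thickness (m); phi, beta, alpha: shear, friction, rake angles (rad)."""
    Fs = tau_s * w * t / math.sin(phi)                     # force on the shear plane
    return Fs * math.cos(beta - alpha) / math.cos(phi + beta - alpha)

def total_cutting_force(tau_s, w, t, phi, beta, alpha, F_plough, F_fracture):
    # total force = shearing + ploughing + particle-fracture contributions
    return chip_formation_force(tau_s, w, t, phi, beta, alpha) + F_plough + F_fracture
```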
Applying Machine Trust Models to Forensic Investigations
NASA Astrophysics Data System (ADS)
Wojcik, Marika; Venter, Hein; Eloff, Jan; Olivier, Martin
Digital forensics involves the identification, preservation, analysis and presentation of electronic evidence for use in legal proceedings. In the presence of contradictory evidence, forensic investigators need a means to determine which evidence can be trusted. This is particularly true in a trust model environment where computerised agents may make trust-based decisions that influence interactions within the system. This paper focuses on the analysis of evidence in trust-based environments and the determination of the degree to which evidence can be trusted. The trust model proposed in this work may be implemented in a tool for conducting trust-based forensic investigations. The model takes into account the trust environment and parameters that influence interactions in a computer network being investigated. Also, it allows for crimes to be reenacted to create more substantial evidentiary proof.
Three dimensional CAD model of the Ignitor machine
NASA Astrophysics Data System (ADS)
Orlandi, S.; Zanaboni, P.; Macco, A.; Sioli, V.; Risso, E.
1998-11-01
The final, global product of all the structural and thermomechanical design activities is a complete three dimensional CAD (AutoCAD and Intergraph Design Review) model of the IGNITOR machine. With this powerful tool, any interface, modification, or upgrading of the machine design is managed as an integrated part of the general effort aimed at the construction of the Ignitor facility. The activities that are underway to complete the design of the core of the experiment, and that will be described, concern the following: the cryogenic cooling system; the radial press, the center post, and the mechanical supports (legs) of the entire machine; and the inner mechanical supports of major components such as the plasma chamber and the outer poloidal field coils.
Global ocean modeling on the Connection Machine
Smith, R.D.; Dukowicz, J.K.; Malone, R.C.
1993-10-01
The authors have developed a version of the Bryan-Cox-Semtner ocean model (Bryan, 1969; Semtner, 1976; Cox, 1984) for massively parallel computers. Such models are three-dimensional, Eulerian models that use latitude and longitude as the horizontal spherical coordinates and fixed depth levels as the vertical coordinate. The incompressible Navier-Stokes equations, with a turbulent eddy viscosity, and mass continuity equation are solved, subject to the hydrostatic and Boussinesq approximations. The traditional model formulation uses a rigid-lid approximation (vertical velocity = 0 at the ocean surface) to eliminate fast surface waves. These waves would otherwise require that a very short time step be used in numerical simulations, which would greatly increase the computational cost. To solve the equations with the rigid-lid assumption, the equations of motion are split into two parts: a set of two-dimensional "barotropic" equations describing the vertically-averaged flow, and a set of three-dimensional "baroclinic" equations describing temperature, salinity and deviations of the horizontal velocities from the vertically-averaged flow.
Thermal-mechanical modeling of laser ablation hybrid machining
NASA Astrophysics Data System (ADS)
Matin, Mohammad Kaiser
2001-08-01
Hard, brittle and wear-resistant materials like ceramics pose a problem when machined using conventional machining processes. Machining ceramics even with a diamond cutting tool is very difficult and costly. Near net-shape processes, like laser evaporation, produce micro-cracks that require extra finishing. Thus it is anticipated that ceramic machining will have to continue to be explored with new techniques before ceramic materials become commonplace. This numerical investigation presents simulations of the thermal and mechanical modeling of simultaneous material removal from hard-to-machine materials using both laser ablation and conventional tool cutting, utilizing the finite element method. The model is formulated using a two dimensional, planar, computational domain. The process simulation, acronymed LAHM (Laser Ablation Hybrid Machining), uses laser energy for two purposes. The first purpose is to remove the material by ablation. The second purpose is to heat the unremoved material that lies below the ablated material in order to "soften" it. The softened material is then simultaneously removed by conventional machining processes. The complete solution determines the temperature distribution and stress contours within the material and tracks the moving boundary that occurs due to material ablation. The temperature distribution is used to determine the distance below the phase change surface where sufficient "softening" has occurred, so that a cutting tool may be used to remove additional material. The model incorporated for tracking the ablative surface does not assume an isothermal melt phase (e.g. the Stefan problem) for laser ablation. Both surface absorption and volume absorption of laser energy as a function of depth have been considered in the models. LAHM, from the thermal and mechanical point of view, is a complex machining process involving large deformations at high strain rates, thermal effects of the laser, removal of
An Expectation-Maximization Method for Calibrating Synchronous Machine Models
Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang
2013-07-21
The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method’s performance is evaluated using a single-machine infinite bus system and compared with a method where both state and parameters are estimated using an EKF method. Sensitivity studies of the parameter calibration using EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
Problems in modeling man machine control behavior in biodynamic environments
NASA Technical Reports Server (NTRS)
Jex, H. R.
1972-01-01
Reviewed are some current problems in modeling man-machine control behavior in a biodynamic environment. It is given in two parts: (1) a review of the models which are appropriate for manual control behavior and the added elements necessary to deal with biodynamic interfaces; and (2) a review of some biodynamic interface pilot/vehicle problems which have occurred, been solved, or need to be solved.
Abstract Model of the SATS Concept of Operations: Initial Results and Recommendations
NASA Technical Reports Server (NTRS)
Dowek, Gilles; Munoz, Cesar; Carreno, Victor A.
2004-01-01
An abstract mathematical model of the concept of operations for the Small Aircraft Transportation System (SATS) is presented. The Concept of Operations consists of several procedures that describe nominal operations for SATS. Several safety properties of the system are proven using formal techniques. The final goal of the verification effort is to show that under nominal operations, aircraft are safely separated. The abstract model was written and formally verified in the Prototype Verification System (PVS).
Bilingual Cluster Based Models for Statistical Machine Translation
NASA Astrophysics Data System (ADS)
Yamamoto, Hirofumi; Sumita, Eiichiro
We propose a domain specific model for statistical machine translation. It is well-known that domain specific language models perform well in automatic speech recognition. We show that domain specific language and translation models also benefit statistical machine translation. However, there are two problems with using domain specific models. The first is the data sparseness problem. We employ an adaptation technique to overcome this problem. The second issue is domain prediction. In order to perform adaptation, the domain must be provided; however, in many cases, the domain is not known or changes dynamically. For these cases, not only the translation target sentence but also the domain must be predicted. This paper focuses on the domain prediction problem for statistical machine translation. In the proposed method, a bilingual training corpus is automatically clustered into sub-corpora. Each sub-corpus is deemed to be a domain. The domain of a source sentence is predicted by using its similarity to the sub-corpora. The predicted domain (sub-corpus) specific language and translation models are then used for the translation decoding. This approach gave an improvement of 2.7 in BLEU score on the IWSLT05 Japanese to English evaluation corpus (improving the score from 52.4 to 55.1). This is a substantial gain and indicates the validity of the proposed bilingual cluster based models.
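Predicting the domain of a source sentence by similarity to sub-corpora can be sketched with bag-of-words cosine similarity against sub-corpus centroids; the clustering step and the actual similarity features of the paper are omitted, and the example corpora are invented.

```python
import math
from collections import Counter

def centroid(sentences):
    # bag-of-words "centroid" of a sub-corpus: summed word counts
    c = Counter()
    for s in sentences:
        c.update(s.lower().split())
    return c

def cosine(a, b):
    num = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return num / (na * nb) if na and nb else 0.0

def predict_domain(sentence, centroids):
    """Return the sub-corpus (domain) most similar to the input sentence."""
    q = Counter(sentence.lower().split())
    return max(centroids, key=lambda d: cosine(q, centroids[d]))
```

The predicted domain then selects which domain-specific language and translation models the decoder uses.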
Knowledge in formation: The machine-modeled frame of mind
Shore, B.
1996-12-31
Artificial Intelligence researchers have used the digital computer as a model for the human mind in two different ways. Most obviously, the computer has been used as a tool on which simulations of thinking-as-programs are developed and tested. Less obvious, but of great significance, is the use of the computer as a conceptual model for the human mind. This essay traces the sources of this machine-modeled conception of cognition in a great variety of social institutions and everyday experience, treating them as "cultural models" which have contributed to the naturalness of the mind-as-machine paradigm for many Americans. The roots of these models antedate the actual development of modern computers, and take the form of a "modularity schema" that has shaped the cultural and cognitive landscape of modernity. The essay concludes with a consideration of some of the cognitive consequences of this extension of machine logic into modern life, and proposes an important distinction between information-processing and meaning-making models of thought in how human cognition is conceptualized.
Multiple measurement models of articulated arm coordinate measuring machines
NASA Astrophysics Data System (ADS)
Zheng, Dateng; Xiao, Zhongyue; Xia, Xiang
2015-09-01
Existing articulated arm coordinate measuring machines (AACMMs) that rely on a single measurement model tend to have low measurement accuracy, because calibrating over the whole sampling space, which is very large, results in unstable calibration parameters. To compensate for this deficiency, multiple measurement models are built using the Denavit-Hartenberg notation, homemade standard rod components are used as a calibration tool, and the Levenberg-Marquardt calibration algorithm is applied to solve for the structural parameters in the measurement models. During the tests of the multiple measurement models, the sample areas were selected in two situations. It is found that the sigma value of the measurement errors obtained with a single measurement model (0.0834 mm) is nearly two times larger than that of the multiple measurement models (0.0431 mm) in the same sample area, while in a different sample area, the sigma value obtained with the multiple measurement models (0.0540 mm) is about 40% of that of the single measurement model (0.1373 mm). The preliminary results suggest that the measurement accuracy of an AACMM using multiple measurement models is superior to that of the existing machine with a single measurement model. This paper proposes multiple measurement models to improve the measurement accuracy of AACMMs without increasing any hardware cost.
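The Levenberg-Marquardt identification of structural parameters can be illustrated on a toy problem. A 2-link planar arm stands in for the full Denavit-Hartenberg model of an AACMM, and exact end-point "measurements" play the role of the standard rod data; the link lengths, joint angles, and damping constant below are all invented for the sketch:

```python
import math

def fk(l1, l2, t1, t2):
    # forward kinematics of a 2-link planar arm, a toy stand-in for the
    # full Denavit-Hartenberg model of an articulated arm CMM
    return (l1 * math.cos(t1) + l2 * math.cos(t1 + t2),
            l1 * math.sin(t1) + l2 * math.sin(t1 + t2))

def residuals(p, joints, measured):
    res = []
    for (t1, t2), (mx, my) in zip(joints, measured):
        x, y = fk(p[0], p[1], t1, t2)
        res.extend([x - mx, y - my])
    return res

def levenberg_marquardt(p, joints, measured, lam=1e-3, iters=50, eps=1e-7):
    p = list(p)
    for _ in range(iters):
        r = residuals(p, joints, measured)
        # numeric Jacobian, one column per parameter
        cols = []
        for j in range(2):
            q = list(p)
            q[j] += eps
            rq = residuals(q, joints, measured)
            cols.append([(rq[i] - r[i]) / eps for i in range(len(r))])
        # damped normal equations: (J^T J + lam*I) dp = -J^T r
        A = [[sum(cols[a][i] * cols[b][i] for i in range(len(r))) + (lam if a == b else 0.0)
              for b in range(2)] for a in range(2)]
        g = [sum(cols[a][i] * r[i] for i in range(len(r))) for a in range(2)]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        p = [p[0] - (A[1][1] * g[0] - A[0][1] * g[1]) / det,
             p[1] - (A[0][0] * g[1] - A[1][0] * g[0]) / det]
    return p

# exact "standard rod" measurements generated from the true link lengths
true_l1, true_l2 = 0.45, 0.35
joints = [(0.1, 0.5), (0.8, -0.4), (1.2, 1.0), (0.3, 1.5), (-0.5, 0.9)]
measured = [fk(true_l1, true_l2, t1, t2) for t1, t2 in joints]
cal = levenberg_marquardt([0.5, 0.3], joints, measured)  # start from nominal lengths
```

Starting from the nominal lengths (0.5, 0.3), the iteration recovers the true values (0.45, 0.35). The multiple-model idea in the abstract amounts to repeating this calibration per sample area rather than once over the whole workspace.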
Tracer transport in soils and shallow groundwater: model abstraction with modern tools
Technology Transfer Automated Retrieval System (TEKTRAN)
The vadose zone controls contaminant transport from the surface to groundwater, and modeling transport in the vadose zone has become a burgeoning field. Exceedingly complex models of subsurface contaminant transport are often inefficient. Model abstraction is the methodology for reducing the complexity of a...
Stochastic Local Interaction (SLI) model: Bridging machine learning and geostatistics
NASA Astrophysics Data System (ADS)
Hristopulos, Dionissios T.
2015-12-01
Machine learning and geostatistics are powerful mathematical frameworks for modeling spatial data. Both approaches, however, suffer from poor scaling of the required computational resources for large data applications. We present the Stochastic Local Interaction (SLI) model, which employs a local representation to improve computational efficiency. SLI combines geostatistics and machine learning with ideas from statistical physics and computational geometry. It is based on a joint probability density function defined by an energy functional which involves local interactions implemented by means of kernel functions with adaptive local kernel bandwidths. SLI is expressed in terms of an explicit, typically sparse, precision (inverse covariance) matrix. This representation leads to a semi-analytical expression for interpolation (prediction), which is valid in any number of dimensions and avoids the computationally costly covariance matrix inversion.
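The prediction step can be sketched with the Gaussian Markov random field identity that underlies precision-matrix interpolation: the conditional mean of a site given the others involves only one row of the precision matrix. This is a schematic of the idea, not the SLI energy functional itself; the kernel, the fixed bandwidth (SLI uses adaptive local bandwidths), and the sample values are invented:

```python
import math

def kernel(d, h):
    # local interaction weight with bandwidth h (fixed here; SLI adapts it locally)
    return math.exp(-(d / h) ** 2)

def precision_matrix(xs, h):
    # precision (inverse covariance) built directly from local interactions:
    # off-diagonal Q_ij = -k(d_ij), diagonal Q_ii = sum of the row's weights
    # (an intrinsic model; a nugget term would make Q strictly positive definite)
    n = len(xs)
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                w = kernel(abs(xs[i] - xs[j]), h)
                Q[i][j] = -w
                Q[i][i] += w
    return Q

def conditional_mean(Q, values, i):
    # GMRF identity: E[x_i | rest] = -(1/Q_ii) * sum_{j != i} Q_ij x_j
    return -sum(Q[i][j] * v for j, v in enumerate(values) if j != i) / Q[i][i]

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
vals = [0.0, 1.0, 0.0, 3.0, 4.0]   # value at x=2 unknown (placeholder, never read)
Q = precision_matrix(xs, 1.5)
est = conditional_mean(Q, vals, 2)  # ≈ 2.0 by symmetry of the neighbours
```

Because prediction touches only one row of a (typically sparse) precision matrix, no covariance matrix is ever inverted, which is the computational point the abstract makes.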
97. View of International Business Machine (IBM) digital computer model ...
97. View of International Business Machine (IBM) digital computer model 7090 magnetic core installation, International Telephone and Telegraph (ITT) Arctic Services Inc., official photograph BMEWS Site II, Clear, AK, by unknown photographer, 17 September 1965, BMEWS, Clear AS negative no. A-6604. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
Modeling the Swift BAT Trigger Algorithm with Machine Learning
NASA Astrophysics Data System (ADS)
Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori
2016-02-01
To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≳97% (≲3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n0 ~ 0.48 (+0.41/-0.23) Gpc^-3 yr^-1 with power-law indices of n1 ~ 1.7 (+0.6/-0.5) and n2 ~ -5.9 (+5.7/-0.1) for GRBs above and below a break point of z1 ~ 6.8 (+2.8/-3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting.
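The supervised-learning setup can be sketched in miniature: train a classifier on simulated bursts and compare it with a single cut in flux. The study used random forests and boosted trees on the Lien et al. simulations; here a tiny k-nearest-neighbours classifier on an invented two-feature toy trigger merely illustrates why a learned model can beat a one-dimensional cut:

```python
import random
random.seed(7)

def detected(flux, bg):
    # toy trigger rule: detection when flux exceeds the background noise level
    return 1 if flux > bg ** 0.5 else 0

def make_set(n):
    data = []
    for _ in range(n):
        flux, bg = random.uniform(0.0, 2.0), random.uniform(0.25, 4.0)
        data.append(((flux / 2.0, bg / 4.0), detected(flux, bg)))  # scaled features
    return data

def knn_predict(train, x, k=5):
    nearest = sorted(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))[:k]
    votes = [lab for _, lab in nearest]
    return max(set(votes), key=votes.count)

train, test = make_set(400), make_set(200)
knn_acc = sum(knn_predict(train, x) == lab for x, lab in test) / len(test)

# baseline: a single cut on flux (feature 0), threshold picked on the training set
best_cut = max((sum((x[0] > c / 20.0) == lab for x, lab in train), c / 20.0)
               for c in range(21))[1]
cut_acc = sum((x[0] > best_cut) == lab for x, lab in test) / len(test)
```

Because the toy detection boundary depends on both flux and background, the learned classifier typically outperforms the best single flux cut, mirroring the accuracy gap reported in the abstract.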
Editorial: Mathematical Methods and Modeling in Machine Fault Diagnosis
Yan, Ruqiang; Chen, Xuefeng; Li, Weihua; Sheng, Shuangwen
2014-12-18
Modern mathematics has commonly been utilized as an effective tool to model mechanical equipment so that their dynamic characteristics can be studied analytically. This will help identify potential failures of mechanical equipment by observing change in the equipment’s dynamic parameters. On the other hand, dynamic signals are also important and provide reliable information about the equipment’s working status. Modern mathematics has also provided us with a systematic way to design and implement various signal processing methods, which are used to analyze these dynamic signals, and to enhance intrinsic signal components that are directly related to machine failures. This special issue is aimed at stimulating not only new insights on mathematical methods for modeling but also recently developed signal processing methods, such as sparse decomposition with potential applications in machine fault diagnosis. Finally, the papers included in this special issue provide a glimpse into some of the research and applications in the field of machine fault diagnosis through applications of the modern mathematical methods.
Fingelkurts, Andrew A; Fingelkurts, Alexander A; Neves, Carlos F H
2012-01-01
Instead of using low-level neurophysiology mimicking and exploratory programming methods commonly used in the machine consciousness field, the hierarchical operational architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual-theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made "machine" consciousness and "artificial" thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists to abstract and formalize principles of hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought. PMID:21130079
Model-Driven Engineering of Machine Executable Code
NASA Astrophysics Data System (ADS)
Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira
Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs performing static analyses. Further, we report important lessons learned about the benefits and drawbacks of the following technologies: using the Scala programming language as the target of code generation, using XML Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint-like tool. Finally, we report on the use of Prolog for writing model transformations.
Modeling of Passive Forces of Machine Tool Covers
NASA Astrophysics Data System (ADS)
Kolar, Petr; Hudec, Jan; Sulitka, Matej
The passive forces acting against the drive force are phenomena that influence the dynamic properties and precision of linear axes equipped with feed drives. Covers are one of the important sources of passive forces in machine tools. The paper describes virtual evaluation of cover passive forces using a complex model of the cover. The model is able to compute the interaction between flexible cover segments and the sealing wiper. The result is the deformation of cover segments and wipers, which is used together with the measured friction coefficient to compute the cover's total passive force. This resulting passive force depends on the cover position. A comparison of computational results and measurements on the real cover is presented in the paper.
Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents
Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard; Parker, Lynne Edwards
2014-01-01
Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often comprise a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research effort, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
K. Mon
2001-08-29
This analyses and models report (AMR) was conducted in response to written work direction (CRWMS M and O 1999a). ICN 01 of this AMR was developed following guidelines provided in TWP-MGR-MD-000004 REV 01, ''Technical Work Plan for: Integrated Management of Technical Product Input Department'' (BSC 2001, Addendum B). The purpose and scope of this AMR is to review and analyze upstream process-level models (CRWMS M and O 2000a and CRWMS M and O 2000b) and information relevant to pitting and crevice corrosion degradation of waste package outer barrier (Alloy 22) and drip shield (Titanium Grade 7) materials, and to develop abstractions of the important processes in a form that is suitable for input to the WAPDEG analysis for long-term degradation of waste package outer barrier and drip shield in the repository. The abstraction is developed in a manner that ensures consistency with the process-level models and information and captures the essential behavior of the processes represented. Also considered in the model abstraction are the probably range of exposure conditions in emplacement drifts and local exposure conditions on drip shield and waste package surfaces. The approach, method, and assumptions that are employed in the model abstraction are documented and justified.
J.D. Schreiber
2006-12-08
This technical work plan (TWP) describes work activities to be performed by the Near-Field Environment Team. The objective of the work scope covered by this TWP is to generate Revision 03 of EBS Radionuclide Transport Abstraction, referred to herein as the radionuclide transport abstraction (RTA) report. The RTA report is being revised primarily to address condition reports (CRs), to address issues identified by the Independent Validation Review Team (IVRT), to address the potential impact of transport, aging, and disposal (TAD) canister design on transport models, and to ensure integration with other models that are closely associated with the RTA report and being developed or revised in other analysis/model reports in response to IVRT comments. The RTA report will be developed in accordance with the most current version of LP-SIII.10Q-BSC and will reflect current administrative procedures (LP-3.15Q-BSC, ''Managing Technical Product Inputs''; LP-SIII.2Q-BSC, ''Qualification of Unqualified Data''; etc.), and will develop related Document Input Reference System (DIRS) reports and data qualifications as applicable in accordance with prevailing procedures. The RTA report consists of three models: the engineered barrier system (EBS) flow model, the EBS transport model, and the EBS-unsaturated zone (UZ) interface model. The flux-splitting submodel in the EBS flow model will change, so the EBS flow model will be validated again. The EBS transport model and validation of the model will be substantially revised in Revision 03 of the RTA report, which is the main subject of this TWP. The EBS-UZ interface model may be changed in Revision 03 of the RTA report due to changes in the conceptualization of the UZ transport abstraction model (a particle tracker transport model based on the discrete fracture transfer function will be used instead of the dual-continuum transport model previously used). Validation of the EBS-UZ interface model will be revised to be consistent with
A dynamic model for material removal in ultrasonic machining
Wang, Z.Y.; Rojurkar, K.P.
1995-12-31
This paper proposes a dynamic model of the material removal mechanism and provides a relationship between material removal rate and operating parameters in ultrasonic machining (USM). The model incorporates the effects of high values of vibration amplitude, frequency, and grit size. The effect of non-uniformity of abrasive grits is also considered by using a probability distribution for the diameter of the abrasive particles. The model is able to predict accurately the increasing rate of material removal for increasing values of amplitude and frequency. It can also be used to determine the decreasing rate of material removal, after a certain maximum level is attained, for further increments of vibration amplitude and frequency. Equations representing the dynamic normal stress and elastic displacement of the work-piece caused by the impact of an arbitrary grit are used in developing a model considering the dynamic impact phenomena of grits on the work-piece. The analysis shows that there is an effective speed zone for the tool. Within this range, grits in the cutting zone can obtain the maximum momentum and energy from the tool. During the machining process, only those grits whose sizes are in the range of the effective speed zone can abrade the work-piece most effectively.
Geochemistry Model Abstraction and Sensitivity Studies for the 21 PWR CSNF Waste Package
P. Bernot; S. LeStrange; E. Thomas; K. Zarrabi; S. Arthur
2002-10-29
The CSNF geochemistry model abstraction, as directed by the TWP (BSC 2002b), was developed to provide regression analysis of EQ6 cases to obtain abstracted values of pH (and in some cases HCO3- concentration) for use in the Configuration Generator Model. The pH of the system is the controlling factor over U mineralization, the CSNF degradation rate, and the HCO3- concentration in solution. The abstraction encompasses a large variety of combinations for the degradation rates of materials. The ''base case'' used EQ6 simulations examining differing steel/alloy corrosion rates, drip rates, and percent fuel exposure. Other values, such as the pH/HCO3--dependent fuel corrosion rate and the corrosion rate of A516, were kept constant. Relationships were developed for pH as a function of these differing rates to be used in the calculation of total C and, subsequently, the fuel rate. An additional refinement to the abstraction was the addition of abstracted pH values for cases where there was limited O2 for waste package corrosion and a flushing fluid other than J-13, which had been used in all EQ6 calculations up to this point. These abstractions also used EQ6 simulations with varying combinations of material corrosion rates to abstract the pH (and HCO3- in the limited-O2 cases) as a function of WP material corrosion rates. The goodness of fit for most of the abstracted values was above an R2 of 0.9. Those below this value occurred at the very beginning of WP corrosion, when large variations in the system pH are observed. However, the significance of the F-statistic for all the abstractions showed that the variable relationships are significant. For the abstraction, an analysis of the minerals that may form the ''sludge'' in the waste package was also presented. This analysis indicates that a number of different iron and aluminum minerals may form in the waste package other than those
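The regression-with-R2 abstraction described above can be sketched with an ordinary least-squares fit of pH against a log corrosion rate. The data pairs below are invented for illustration and are not EQ6 outputs; only the form of the fit and the R2 > 0.9 acceptance criterion mirror the abstract:

```python
def linear_fit(xs, ys):
    # least-squares line y = a + b*x, plus the goodness of fit R^2
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# hypothetical abstraction data: pH vs log10(steel corrosion rate); values invented
log_rate = [-3.0, -2.5, -2.0, -1.5, -1.0, -0.5]
pH = [7.9, 7.5, 7.2, 6.7, 6.4, 6.0]
a, b, r2 = linear_fit(log_rate, pH)
```

In the actual abstraction the fitted relation (not the raw EQ6 cases) is what feeds the Configuration Generator Model, so the R2 check guards the quality of everything downstream.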
Support Vector Machines for Petrophysical Modelling and Lithoclassification
NASA Astrophysics Data System (ADS)
Al-Anazi, Ammal Fannoush Khalifah
2011-12-01
Given the increasing challenges of oil and gas production from partially depleted conventional or unconventional reservoirs, reservoir characterization is a key element of the reservoir development workflow. Reservoir characterization impacts well placement, injection and production strategies, and field management. Reservoir characterization projects point and line data to a large three-dimensional volume. The relationship between variables, e.g. porosity and permeability, is often established by regression, yet the complexities between measured variables often lead to poor correlation coefficients between the regressed variables. Recent advances in machine learning methods have provided attractive alternatives for constructing interpretation models of rock properties in heterogeneous reservoirs. Here, Support Vector Machines (SVMs), a class of learning machines formulated to output regression models and classifiers of competitive generalization capability, have been explored to determine their capabilities for determining the relationship, both in regression and in classification, between reservoir rock properties. This thesis documents research on the capability of SVMs to model petrophysical and elastic properties in heterogeneous sandstone and carbonate reservoirs. Specifically, the capabilities of SVM regression and classification have been examined and compared to neural network-based methods, namely multilayered neural networks, radial basis function neural networks, general regression neural networks, probabilistic neural networks, and linear discriminant analysis. The petrophysical properties that have been evaluated include porosity, permeability, Poisson's ratio and Young's modulus. Statistical error analysis reveals that the SVM method yields comparable or superior predictions of petrophysical and elastic rock properties and classification of the lithology compared to neural networks. The SVM method also shows uniform prediction capability under the
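The kernel-regression idea behind SVR can be sketched with kernel ridge regression, a close relative (same kernel trick, squared loss in place of the epsilon-insensitive loss), implemented from scratch. The porosity/log-permeability pairs and the kernel width are invented, not data from the thesis:

```python
import math

def rbf(x, y, gamma=200.0):
    # Gaussian (RBF) kernel on feature vectors
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def solve(A, rhs):
    # Gaussian elimination with partial pivoting for a small dense system
    n = len(A)
    M = [row[:] + [v] for row, v in zip(A, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(X, y, lam=1e-8):
    # solve (K + lam*I) alpha = y
    K = [[rbf(xi, xj) + (lam if i == j else 0.0) for j, xj in enumerate(X)]
         for i, xi in enumerate(X)]
    return solve(K, y)

def krr_predict(X, alpha, x):
    return sum(a * rbf(xi, x) for a, xi in zip(alpha, X))

# invented core-plug data: porosity (fraction) vs log10 permeability (mD)
X = [[0.05], [0.10], [0.15], [0.20], [0.25]]
y = [-1.0, 0.2, 1.1, 1.9, 2.5]
alpha = krr_fit(X, y)
```

A true epsilon-SVR would additionally solve a constrained optimization that yields sparse support vectors; the kernel machinery, which is what lets these methods beat linear regression on nonlinear porosity-permeability relations, is the same.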
Transferable Atomic Multipole Machine Learning Models for Small Organic Molecules.
Bereau, Tristan; Andrienko, Denis; von Lilienfeld, O Anatole
2015-07-14
Accurate representation of the molecular electrostatic potential, which is often expanded in distributed multipole moments, is crucial for an efficient evaluation of intermolecular interactions. Here we introduce a machine learning model for multipole coefficients of atom types H, C, O, N, S, F, and Cl in any molecular conformation. The model is trained on quantum-chemical results for atoms in varying chemical environments drawn from thousands of organic molecules. Multipoles in systems with neutral, cationic, and anionic molecular charge states are treated with individual models. The models' predictive accuracy and applicability are illustrated by evaluating intermolecular interaction energies of nearly 1,000 dimers and the cohesive energy of the benzene crystal. PMID:26575759
Modelling fate and transport of pesticides in river catchments with drinking water abstractions
NASA Astrophysics Data System (ADS)
Desmet, Nele; Seuntjens, Piet; Touchant, Kaatje
2010-05-01
When drinking water is abstracted from surface water, the presence of pesticides may have a large impact on the purification costs. In order to respect imposed thresholds at points of drinking water abstraction in a river catchment, sustainable pesticide management strategies might be required in certain areas. To improve management strategies, a sound understanding of the emission routes, the transport, the environmental fate, and the sources of pesticides is needed. However, pesticide monitoring data, on which measures are founded, are generally scarce. Data scarcity hampers interpretation and decision making. In such a case, a modelling approach can be very useful as a tool to obtain complementary information. Modelling makes it possible to take into account temporal and spatial variability in both discharges and concentrations. In the Netherlands, the Meuse river is used for drinking water abstraction, and the government imposes the European drinking water standard for individual pesticides (0.1 µg L-1) on surface waters at points of drinking water abstraction. The reported glyphosate concentrations in the Meuse river frequently exceed the standard, and this strengthens the case for targeted measures. In this study, a model for the Meuse river was developed to estimate the contribution of influxes at the Dutch-Belgian border to the concentration levels detected at the drinking water intake 250 km downstream and to assess the contribution of the tributaries to the glyphosate loads. The effects of glyphosate decay on environmental fate were considered as well. Our results show that the application of a river model makes it possible to assess fate and transport of pesticides in a catchment in spite of monitoring data scarcity. Furthermore, the model provides insight into the contribution of different sub-basins to the pollution level. The modelling results indicate that the effect of local measures to reduce pesticides concentrations in the river at points of drinking water
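The load-routing reasoning in such a study can be sketched as a mass balance with first-order decay: the border load decays over its travel time, each tributary load decays over its remaining travel time, and the intake concentration is total mass flux over total discharge. All numbers below are hypothetical, not Meuse monitoring data, and the real model resolves the river in space and time rather than in this lumped form:

```python
import math

def downstream_concentration(c0, q0, tribs, travel_days, half_life_days):
    # first-order decay of the border load over the full travel time, plus
    # tributary loads each decayed over their own remaining travel time;
    # concentration = total mass flux / total discharge (illustrative only)
    k = math.log(2.0) / half_life_days
    load = c0 * q0 * math.exp(-k * travel_days)   # border load, fully routed
    q = q0
    for c, qt, days_left in tribs:
        load += c * qt * math.exp(-k * days_left)
        q += qt
    return load / q

# hypothetical inputs: border at 0.15 ug/L and 200 m3/s; two tributaries
# joining 3 days and 1 day upstream of the intake; 30-day half-life
c_intake = downstream_concentration(
    0.15, 200.0, [(0.08, 50.0, 3.0), (0.20, 30.0, 1.0)], 5.0, 30.0)
```

Even this toy balance shows the study's two levers: how much of the intake concentration is inherited from the border influx versus contributed by the tributaries.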
Modeling of Unsteady Three-dimensional Flows in Multistage Machines
NASA Technical Reports Server (NTRS)
Hall, Kenneth C.; Pratt, Edmund T., Jr.; Kurkov, Anatole (Technical Monitor)
2003-01-01
Despite many years of development, the accurate and reliable prediction of unsteady aerodynamic forces acting on turbomachinery blades remains less than satisfactory, especially when viewed next to the great success investigators have had in predicting steady flows. Hall and Silkowski (1997) have proposed that one of the main reasons for the discrepancy between theory and experiment and/or industrial experience is that many of the current unsteady aerodynamic theories model a single blade row in an infinitely long duct, ignoring potentially important multistage effects. However, unsteady flows are made up of acoustic, vortical, and entropic waves. These waves provide a mechanism for the rotors and stators of multistage machines to communicate with one another. In other words, wave behavior makes unsteady flows fundamentally a multistage (and three-dimensional) phenomenon. In this research program, we have as goals (1) the development of computationally efficient computer models of the unsteady aerodynamic response of blade rows embedded in a multistage machine (these models will ultimately be capable of analyzing three-dimensional viscous transonic flows), and (2) the use of these computer codes to study a number of important multistage phenomena.
Modeling the Swift BAT Trigger Algorithm with Machine Learning
NASA Technical Reports Server (NTRS)
Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori
2015-01-01
To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. (2014) is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of greater than approximately 97% (less than approximately 3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n0 ~ 0.48 (+0.41/-0.23) Gpc^-3 yr^-1 with power-law indices of n1 ~ 1.7 (+0.6/-0.5) and n2 ~ -5.9 (+5.7/-0.1) for GRBs above and below a break point of z1 ~ 6.8 (+2.8/-3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting. The code used in this analysis is publicly available online.
Applications and modelling of bulk HTSs in brushless ac machines
NASA Astrophysics Data System (ADS)
Barnes, G. J.; McCulloch, M. D.; Dew-Hughes, D.
2000-06-01
The use of high temperature superconducting material in its bulk form for engineering applications is attractive due to the large power densities that can be achieved. In brushless electrical machines, there are essentially four properties that can be exploited; their hysteretic nature, their flux shielding properties, their ability to trap large flux densities and their ability to produce levitation. These properties translate to hysteresis machines, reluctance machines, trapped-field synchronous machines and linear motors respectively. Each one of these machines is addressed separately and computer simulations that reveal the current and field distributions within the machines are used to explain their operation.
Machine vision algorithm generation using human visual models
NASA Astrophysics Data System (ADS)
Daley, Wayne D.; Doll, Theodore J.; McWhorter, Shane W.; Wasilewski, Anthony A.
1999-01-01
The design of robust machine vision algorithms is one of the most difficult parts of developing and integrating automated systems. Historically, most of the techniques have been developed using ad hoc methodologies. This problem is more severe in the area of natural/biological products, where it has been difficult to capture and model the natural variability to be expected in the products. This presents difficulties in performing quality and process control in the meat, fruit, and vegetable industries. While some systems have been introduced, they do not adequately address the wide range of needs. This paper will propose an algorithm development technique that utilizes models of the human visual system. It will address the subset of problems that humans perform well on, but which have proven difficult to automate with standard machine vision techniques. The basis of the technique evaluation will be the Georgia Tech Vision model. This approach demonstrates a high level of accuracy in its ability to solve difficult problems. This paper will present the approach, the results, and possibilities for implementation.
Global atmospheric and ocean modeling on the connection machine
Atlas, S.R.
1993-12-01
This paper describes the high-level architecture of two parallel global climate models: an atmospheric model based on the Geophysical Fluid Dynamics Laboratory (GFDL) SKYHI model, and an ocean model descended from the Bryan-Cox-Semtner ocean general circulation model. These parallel models are being developed as part of a long-term research collaboration between Los Alamos National Laboratory (LANL) and the GFDL. The goal of this collaboration is to develop parallel global climate models which are modular in structure, portable across a wide variety of machine architectures and programming paradigms, and provide an appropriate starting point for a fully coupled model. Several design considerations have emerged as central to achieving these goals. These include the expression of the models in terms of mathematical primitives such as stencil operators, to facilitate performance optimization on different computational platforms; the isolation of communication from computation to allow flexible implementation of a single code under message-passing or data parallel programming paradigms; and judicious memory management to achieve modularity without memory explosion costs.
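The "stencil operator as mathematical primitive, with communication isolated from computation" design can be sketched in one dimension: split the domain across two notional processors, exchange one-cell halos, then apply the same local stencil on each piece. This is a minimal illustration under invented data; the actual models are three-dimensional and use message-passing or data-parallel backends for the exchange:

```python
def laplacian(u):
    # serial 3-point stencil with periodic boundaries
    n = len(u)
    return [u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n] for i in range(n)]

def laplacian_2proc(u):
    # the same stencil on a domain split across two "processors":
    # communication (the one-cell halo exchange) is isolated from computation
    n = len(u)
    h = n // 2
    left, right = u[:h], u[h:]
    lext = [right[-1]] + left + [right[0]]    # halo exchange (periodic)
    rext = [left[-1]] + right + [left[0]]
    apply_stencil = lambda v: [v[i - 1] - 2 * v[i] + v[i + 1]
                               for i in range(1, len(v) - 1)]
    return apply_stencil(lext) + apply_stencil(rext)

u = [float(i * i % 7) for i in range(8)]
```

Because the stencil itself never touches remote memory, the same computational kernel runs unchanged under message-passing or data-parallel paradigms, which is the portability point the abstract makes.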
Modelling the structure and function of enzymes by machine learning.
Sternberg, M J; Lewis, R A; King, R D; Muggleton, S
1992-01-01
A machine learning program, GOLEM, has been applied to two problems: (1) the prediction of protein secondary structure from sequence and (2) modelling a quantitative structure-activity relationship in drug design. GOLEM takes as input observations and combines them with background knowledge of chemistry to yield rules expressed as stereochemical principles for prediction. The secondary structure prediction was explored on the alpha/alpha class of proteins; on an unrelated test set it yielded 81% accuracy. The rules from GOLEM defined patterns of residues forming alpha-helices. The system studied for drug design was the activities of trimethoprim analogues binding to E. coli dihydrofolate reductase. The GOLEM rules were a better model than standard regression approaches. More importantly, these rules described the chemical properties of the enzyme-binding site that were in broad agreement with the crystallographic structure. PMID:1290938
Modeling the meaning of words: neural correlates of abstract and concrete noun processing.
Mårtensson, Frida; Roll, Mikael; Apt, Pia; Horne, Merle
2011-01-01
We present a model relating analysis of abstract and concrete word meaning in terms of semantic features and contextual frames within a general framework of neurocognitive information processing. The approach taken here assumes concrete noun meanings to be intimately related to sensory feature constellations. These features are processed by posterior sensory regions of the brain, e.g. the occipital lobe, which handles visual information. The interpretation of abstract nouns, however, is likely to be more dependent on semantic frames and linguistic context. A greater involvement of more anteriorly located, perisylvian brain areas has previously been found for the processing of abstract words. In the present study, a word association test was carried out in order to compare semantic processing in healthy subjects (n=12) with subjects with aphasia due to perisylvian lesions (n=3) and occipital lesions (n=1). The word associations were coded into different categories depending on their semantic content. A double dissociation was found, where, compared to the controls, the perisylvian aphasic subjects had problems associating to abstract nouns and produced fewer semantic frame-based associations, whereas the occipital aphasic subject showed disturbances in concrete noun processing and made fewer semantic feature-based associations. PMID:22237493
Paraskevas, Paschalis D; Sabbe, Maarten K; Reyniers, Marie-Françoise; Papayannakos, Nikos G; Marin, Guy B
2014-10-01
Hydrogen-abstraction reactions play a significant role in thermal biomass conversion processes, as well as in conventional gasification, pyrolysis, and combustion. In this work, a group additivity model is constructed that allows prediction of reaction rates and Arrhenius parameters of hydrogen abstractions by hydrogen atoms from alcohols, ethers, esters, peroxides, ketones, aldehydes, acids, and diketones in a broad temperature range (300-2000 K). A training set of 60 reactions was developed with rate coefficients and Arrhenius parameters calculated by the CBS-QB3 method in the high-pressure limit with tunneling corrections using Eckart tunneling coefficients. From this set of reactions, 15 group additive values were derived for the forward and the reverse reaction, 4 referring to primary and 11 to secondary contributions. The accuracy of the model is validated against an ab initio and an experimental validation set of 19 and 21 reaction rates, respectively, showing that reaction rates can be predicted with a mean factor of deviation of 2 for the ab initio and 3 for the experimental values. Hence, this work illustrates that the developed group additive model can be reliably applied for the accurate prediction of kinetics of α-hydrogen abstractions by hydrogen atoms from a broad range of oxygenates. PMID:25209711
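In the group-additivity picture, a reaction's modified Arrhenius parameters are assembled from tabulated group contributions, and the rate coefficient then follows from k(T) = A·T^n·exp(−Ea/RT). The sketch below illustrates only that arithmetic; the base parameters, the single correction group, and all numerical values are invented for illustration and are not the paper's group additive values:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius(A, n, Ea, T):
    """Modified Arrhenius rate coefficient k(T) = A * T**n * exp(-Ea / (R*T))."""
    return A * T**n * math.exp(-Ea / (R * T))

# Hypothetical group-additive assembly: base parameters plus additive
# corrections contributed by the groups around the abstracted hydrogen.
base = {"logA": 8.0, "n": 1.5, "Ea": 30e3}            # invented numbers
corrections = [{"logA": 0.3, "n": 0.0, "Ea": -4e3}]   # e.g. an alpha-OH group

logA = base["logA"] + sum(c["logA"] for c in corrections)
n = base["n"] + sum(c["n"] for c in corrections)
Ea = base["Ea"] + sum(c["Ea"] for c in corrections)

for T in (300.0, 1000.0, 2000.0):
    print(f"T = {T:6.0f} K   k = {arrhenius(10**logA, n, Ea, T):.3e}")
```

Because the corrections add in log A and Ea, each group contributes a multiplicative factor to the rate, which is what makes the scheme additive.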
Asquith, William H.; Roussel, Meghan C.
2007-01-01
Estimation of representative hydrographs from design storms, which are known as design hydrographs, provides for cost-effective, risk-mitigated design of drainage structures such as bridges, culverts, roadways, and other infrastructure. During 2001-07, the U.S. Geological Survey (USGS), in cooperation with the Texas Department of Transportation, investigated runoff hydrographs, design storms, unit hydrographs, and watershed-loss models to enhance design hydrograph estimation in Texas. Design hydrographs ideally should mimic the general volume, peak, and shape of observed runoff hydrographs. Design hydrographs commonly are estimated in part by unit hydrographs. A unit hydrograph is defined as the runoff hydrograph that results from a unit pulse of excess rainfall uniformly distributed over the watershed at a constant rate for a specific duration. A time-distributed, watershed-loss model is required for modeling by unit hydrographs. This report develops a specific time-distributed, watershed-loss model known as an initial-abstraction, constant-loss model. For this watershed-loss model, a watershed is conceptualized to have the capacity to store or abstract an absolute depth of rainfall at and near the beginning of a storm. Depths of total rainfall less than this initial abstraction do not produce runoff. The watershed also is conceptualized to have the capacity to remove rainfall at a constant rate (loss) after the initial abstraction is satisfied. Additional rainfall inputs after the initial abstraction is satisfied contribute to runoff if the rainfall rate (intensity) is larger than the constant loss. The initial-abstraction, constant-loss model thus is a two-parameter model. The initial-abstraction, constant-loss model is investigated through detailed computational and statistical analysis of observed rainfall and runoff data for 92 USGS streamflow-gaging stations (watersheds) in Texas with contributing drainage areas from 0.26 to 166 square miles. The analysis is
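The two-parameter initial-abstraction, constant-loss scheme described above is simple enough to sketch directly: rainfall is accumulated until the initial abstraction is satisfied, after which only intensity above the constant loss rate becomes excess (runoff-producing) rainfall. The function name and the example hyetograph below are invented for illustration, not taken from the report:

```python
def excess_rainfall(rain, initial_abstraction, constant_loss):
    """Initial-abstraction, constant-loss watershed-loss model (sketch).

    rain: rainfall depth per time step.
    initial_abstraction: depth stored before any runoff can occur.
    constant_loss: loss per time step once the abstraction is satisfied.
    Returns the excess (runoff-producing) depth for each time step.
    """
    stored = 0.0
    excess = []
    for depth in rain:
        # Fill the initial abstraction first; this rainfall never runs off.
        absorbed = min(depth, initial_abstraction - stored)
        stored += absorbed
        depth -= absorbed
        # Remaining rainfall produces runoff only above the constant loss.
        excess.append(max(depth - constant_loss, 0.0))
    return excess

hyetograph = [0.2, 0.5, 1.0, 0.3]  # invented depths per time step
print([round(e, 3) for e in excess_rainfall(hyetograph, 0.5, 0.2)])
# -> [0.0, 0.0, 0.8, 0.1]
```

The first two steps fill the 0.5 abstraction; afterwards each step sheds whatever exceeds the 0.2 constant loss.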
Modeling of tool path for the CNC sheet cutting machines
NASA Astrophysics Data System (ADS)
Petunin, Aleksandr A.
2015-11-01
In this paper, the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of cutting techniques is offered. We also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that the optimization tasks can be interpreted as a discrete optimization problem (the generalized traveling salesman problem with additional constraints, GTSP). Formalization of some constraints for these tasks is described. To solve the GTSP, we propose using the mathematical model of Prof. Chentsov, based on the concept of a megalopolis, together with dynamic programming.
The abstract geometry modeling language (AgML): experience and road map toward eRHIC
NASA Astrophysics Data System (ADS)
Webb, Jason; Lauret, Jerome; Perevoztchikov, Victor
2014-06-01
The STAR experiment has adopted an Abstract Geometry Modeling Language (AgML) as the primary description of our geometry model. AgML establishes a level of abstraction, decoupling the definition of the detector from the software libraries used to create the concrete geometry model. Thus, AgML allows us to support both our legacy GEANT 3 simulation application and our ROOT/TGeo based reconstruction software from a single source, which is demonstrably self-consistent. While AgML was developed primarily as a tool to migrate away from our legacy FORTRAN-era geometry codes, it also provides a rich syntax geared towards the rapid development of detector models. AgML has been successfully employed by users to quickly develop and integrate the descriptions of several new detectors in the RHIC/STAR experiment including the Forward GEM Tracker (FGT) and Heavy Flavor Tracker (HFT) upgrades installed in STAR for the 2012 and 2013 runs. AgML has furthermore been heavily utilized to study future upgrades to the STAR detector as it prepares for the eRHIC era. With its track record of practical use in a live experiment in mind, we present the status, lessons learned and future of the AgML language as well as our experience in bringing the code into our production and development environments. We will discuss the path toward eRHIC and pushing the current model to accommodate detector misalignment and high-precision physics.
Identifying crop vulnerability to groundwater abstraction: modelling and expert knowledge in a GIS.
Procter, Chris; Comber, Lex; Betson, Mark; Buckley, Dennis; Frost, Andy; Lyons, Hester; Riding, Alison; Voyce, Kevin
2006-11-01
Water use is expected to increase and climate change scenarios indicate the need for more frequent water abstraction. Abstracting groundwater may have a detrimental effect on soil moisture availability for crop growth and yields. This work presents an elegant and robust method for identifying zones of crop vulnerability to abstraction. Archive groundwater level datasets were used to generate a composite groundwater surface that was subtracted from a digital terrain model. The result was the depth from surface to groundwater and identified areas underlain by shallow groundwater. Knowledge from an expert agronomist was used to define classes of risk in terms of their depth below ground level. Combining information on the permeability of geological drift types further refined the assessment of the risk of crop growth vulnerability. The nature of the mapped output is one that is easy to communicate to the intended farming audience because of the general familiarity of mapped information. Such Geographic Information System (GIS)-based products can play a significant role in the characterisation of catchments under the EU Water Framework Directive especially in the process of public liaison that is fundamental to the setting of priorities for management change. The creation of a baseline allows the impact of future increased water abstraction rates to be modelled and the vulnerability maps are in a format that can be readily understood by the various stakeholders. This methodology can readily be extended to encompass additional data layers and for a range of groundwater vulnerability issues including water resources, ecological impacts, nitrate and phosphorus. PMID:16963176
Modeling the Virtual Machine Launching Overhead under Fermicloud
Garzoglio, Gabriele; Wu, Hao; Ren, Shangping; Timm, Steven; Bernabeu, Gerard; Noh, Seo-Young
2014-11-12
FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module enables FermiCloud, when more computational resources are needed, to automatically launch virtual machines on available resources such as public clouds. One of the main challenges in developing the cloud bursting module is deciding when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on FermiCloud's system operational data, the VM launching overhead is not a constant. It varies with physical resource (CPU, memory, I/O device) utilization at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops a VM launch overhead reference model based on operational data obtained on FermiCloud and uses the model to guide the cloud bursting process.
Machine learning and cosmological simulations - I. Semi-analytical models
NASA Astrophysics Data System (ADS)
Kamdar, Harshil M.; Turk, Matthew J.; Brunner, Robert J.
2016-01-01
We present a new exploratory framework to model galaxy formation and evolution in a hierarchical Universe by using machine learning (ML). Our motivations are two-fold: (1) presenting a new, promising technique to study galaxy formation, and (2) quantitatively analysing the extent of the influence of dark matter halo properties on galaxies in the backdrop of semi-analytical models (SAMs). We use the influential Millennium Simulation and the corresponding Munich SAM to train and test various sophisticated ML algorithms (k-Nearest Neighbors, decision trees, random forests, and extremely randomized trees). By using only essential dark matter halo physical properties for haloes of M > 10^12 M⊙ and a partial merger tree, our model predicts the hot gas mass, cold gas mass, bulge mass, total stellar mass, black hole mass and cooling radius at z = 0 for each central galaxy in a dark matter halo for the Millennium run. Our results provide a unique and powerful phenomenological framework to explore the galaxy-halo connection that is built upon SAMs and demonstrably place ML as a promising and a computationally efficient tool to study small-scale structure formation.
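One of the algorithms listed above, k-Nearest Neighbors, is simple enough to write out as a toy version of the regression task the abstract describes: predict a galaxy property from halo properties by averaging over the most similar training haloes. The halo features, stellar masses, and every number below are invented for illustration and are not Millennium data:

```python
import math

def knn_predict(train_X, train_y, query, k=3):
    """Predict a target as the mean over the k nearest training points."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train_X, train_y))
    return sum(y for _, y in dists[:k]) / k

# Invented halo features: (log10 halo mass, spin parameter, formation redshift)
halos = [(12.1, 0.03, 1.2), (12.8, 0.05, 0.8),
         (13.5, 0.02, 0.5), (12.3, 0.04, 1.0)]
stellar_mass = [10.2, 10.9, 11.3, 10.4]  # invented log10 stellar masses

# Predict the stellar mass of a new halo from its two nearest neighbours.
print(round(knn_predict(halos, stellar_mass, (12.2, 0.035, 1.1), k=2), 2))
# -> 10.3
```

In practice the features would be standardized first so that mass, spin, and redshift contribute on comparable scales; that step is omitted here for brevity.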
Predictive Surface Roughness Model for End Milling of Machinable Glass Ceramic
NASA Astrophysics Data System (ADS)
Mohan Reddy, M.; Gorin, Alexander; Abou-El-Hossein, K. A.
2011-02-01
Machinable glass ceramic is an attractive advanced ceramic for producing high-accuracy miniaturized components for many applications in industries such as aerospace, electronics, biomedical, automotive and environmental communications, owing to its wear resistance, high hardness, high compressive strength, good corrosion resistance and excellent high-temperature properties. Many research works have been conducted in the last few years to investigate the performance of different machining operations when processing various advanced ceramics. Micro end-milling is one of the machining methods used to meet the demand for micro parts. Selecting proper machining parameters is important to obtain good surface finish during machining of Machinable glass ceramic. Therefore, this paper describes the development of a predictive model for the surface roughness of Machinable glass ceramic in terms of speed and feed rate for the micro end-milling operation.
Access, Equity, and Opportunity. Women in Machining: A Model Program.
ERIC Educational Resources Information Center
Warner, Heather
The Women in Machining (WIM) program is a Machine Action Project (MAP) initiative that was developed in response to a local skilled metalworking labor shortage, despite a virtual absence of women and people of color from area shops. The project identified post-war stereotypes and other barriers that must be addressed if women are to have an equal…
Parameterizing Phrase Based Statistical Machine Translation Models: An Analytic Study
ERIC Educational Resources Information Center
Cer, Daniel
2011-01-01
The goal of this dissertation is to determine the best way to train a statistical machine translation system. I first develop a state-of-the-art machine translation system called Phrasal and then use it to examine a wide variety of potential learning algorithms and optimization criteria and arrive at two very surprising results. First, despite the…
Modeling Stochastic Kinetics of Molecular Machines at Multiple Levels: From Molecules to Modules
Chowdhury, Debashish
2013-01-01
A molecular machine is either a single macromolecule or a macromolecular complex. In spite of the striking superficial similarities between these natural nanomachines and their man-made macroscopic counterparts, there are crucial differences. Molecular machines in a living cell operate stochastically in an isothermal environment far from thermodynamic equilibrium. In this mini-review we present a catalog of the molecular machines and an inventory of the essential toolbox for theoretically modeling these machines. The tool kit includes (1) nonequilibrium statistical-physics techniques for modeling machines and machine-driven processes, and (2) statistical-inference methods for reverse engineering a functional machine from the empirical data. The cell is often likened to a microfactory in which the machineries are organized in modular fashion; each module consists of strongly coupled multiple machines, but different modules interact weakly with each other. This microfactory has its own automated supply chain and delivery system. Buoyed by the success achieved in modeling individual molecular machines, we advocate integration of these models in the near future to develop models of functional modules. A system-level description of the cell from the perspective of molecular machinery (the mechanome) is likely to emerge from further integrations that we envisage here. PMID:23746505
Alternative Models of Service, Centralized Machine Operations. Phase II Report. Volume II.
ERIC Educational Resources Information Center
Technology Management Corp., Alexandria, VA.
A study was conducted to determine if the centralization of playback machine operations for the national free library program would be feasible, economical, and desirable. An alternative model of playback machine services was constructed and compared with existing network operations considering both cost and service. The alternative model was…
2011-01-01
Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of
NASA Astrophysics Data System (ADS)
Klaar, Megan; Laize, Cedric; Maddock, Ian; Acreman, Mike; Tanner, Kath; Peet, Sarah
2014-05-01
A key challenge for environmental managers is the determination of environmental flows which allow a maximum yield of water resources to be taken from surface and sub-surface sources, whilst ensuring sufficient water remains in the environment to support biota and habitats. It has long been known that sensitivity to changes in water levels resulting from river and groundwater abstractions varies between rivers. Whilst assessment at the catchment scale is ideal for determining broad pressures on water resources and ecosystems, assessment of the sensitivity of reaches to changes in flow has previously been done on a site-by-site basis, often with the application of detailed but time-consuming techniques (e.g. PHABSIM). While this is appropriate for a limited number of sites, it is costly in terms of money and time resources and therefore not appropriate for application at the national level required by responsible licensing authorities. To address this need, the Environment Agency (England) is developing an operational tool to predict relationships between physical habitat and flow which may be applied by field staff to rapidly determine the sensitivity of physical habitat to flow alteration for use in water resource management planning. An initial model of river sensitivity to abstraction (defined as the change in physical habitat related to changes in river discharge) was developed using site characteristics and data from 66 individual PHABSIM surveys throughout the UK (Booker & Acreman, 2008). By applying a multivariate multiple linear regression analysis to the data to define habitat availability-flow curves using resource intensity as predictor variables, the model (known as RAPHSA, Rapid Assessment of Physical Habitat Sensitivity to Abstraction) is able to take a risk-based approach to model certainty. Site-specific information gathered using desk-based, or a variable amount of, field work can be used to predict the shape of the habitat-flow curves, with the
Experimental "evolutional machines": mathematical and experimental modeling of biological evolution
NASA Astrophysics Data System (ADS)
Brilkov, A. V.; Loginov, I. A.; Morozova, E. V.; Shuvaev, A. N.; Pechurkin, N. S.
Experimentalists possess model systems of two major types for the study of evolution: continuous cultivation in the chemostat, and long-term development in closed laboratory microecosystems with several trophic structures. If evolutionary changes, or transfer from one steady state to another as the result of changing qualitative properties of the system, take place in such systems, the main characteristics of these evolution steps can be measured. By now this has not been realized from the point of view of methodology, though a lot of data on the work of both types of evolutionary machines has been collected. In our experiments with long-term continuous cultivation we used bacterial strains containing, in plasmids, the cloned genes of bioluminescence and green fluorescent protein, whose expression level can be easily changed and controlled. In spite of the apparent kinetic diversity of evolutionary transfers in the two types of systems, the general mechanisms characterizing the increase of used energy flow by populations of the primary producer can be revealed by their study. According to the energy approach, at spontaneous transfer from one steady state to another, e.g. in the process of microevolution, competition or selection, heat dissipation characterizing the rate of entropy growth should increase, rather than decrease or remain steady as usually believed. The results of our observations of experimental evolution require further development of the thermodynamic theory of open and closed biological systems and further study of the general mechanisms of biological evolution.
G. Ragan
2001-12-19
The purpose of the inventory abstraction, which has been prepared in accordance with a technical work plan (CRWMS M&O 2000e for ICN 02 of the present analysis, and BSC 2001e for ICN 03 of the present analysis), is to: (1) Interpret the results of a series of relative dose calculations (CRWMS M&O 2000c, 2000f). (2) Recommend, including a basis thereof, a set of radionuclides that should be modeled in the Total System Performance Assessment in Support of the Site Recommendation (TSPA-SR) and the Total System Performance Assessment in Support of the Final Environmental Impact Statement (TSPA-FEIS). (3) Provide initial radionuclide inventories for the TSPA-SR and TSPA-FEIS models. (4) Answer the U.S. Nuclear Regulatory Commission (NRC)'s Issue Resolution Status Report ''Key Technical Issue: Container Life and Source Term'' (CLST IRSR) key technical issue (KTI): ''The rate at which radionuclides in SNF [spent nuclear fuel] are released from the EBS [engineered barrier system] through the oxidation and dissolution of spent fuel'' (NRC 1999, Subissue 3). The scope of the radionuclide screening analysis encompasses the period from 100 years to 10,000 years after the potential repository at Yucca Mountain is sealed for scenarios involving the breach of a waste package and subsequent degradation of the waste form as required for the TSPA-SR calculations. By extending the time period considered to one million years after repository closure, recommendations are made for the TSPA-FEIS. The waste forms included in the inventory abstraction are Commercial Spent Nuclear Fuel (CSNF), DOE Spent Nuclear Fuel (DSNF), High-Level Waste (HLW), naval Spent Nuclear Fuel (SNF), and U.S. Department of Energy (DOE) plutonium waste. The intended use of this analysis is in TSPA-SR and TSPA-FEIS. Based on the recommendations made here, models for release, transport, and possibly exposure will be developed for the isotopes that would be the highest contributors to the dose given a release to the
A real-time model of the synchronous machine based on digital signal processors
Do, Vanque; Barry, A.O.
1993-02-01
A real-time digital model of a complete hydraulic synchronous machine is presented. The model is based on parallel processing using digital-signal processors (DSP) for fast calculation. The paper describes the modeling of the machine using block diagrams to represent the generator, voltage regulator, stabilizer, turbine, penstock and governor. Details of the hardware and software used to implement the real-time model of the machine are given. A first series of tests has been done and results are shown to evaluate the steady-state and transient performance of the model.
Mathematical modeling of synergetic aspects of machine building enterprise management
NASA Astrophysics Data System (ADS)
Kazakov, O. D.; Andriyanov, S. V.
2016-04-01
A multivariate method for determining the optimal values of the leading key performance indicators of the production divisions of machine-building enterprises, considered in the aspect of synergetics, has been developed.
DFT modeling of chemistry on the Z machine
NASA Astrophysics Data System (ADS)
Mattsson, Thomas
2013-06-01
Density Functional Theory (DFT) has proven remarkably accurate in predicting properties of matter under shock compression for a wide range of elements and compounds: from hydrogen to xenon via water. Materials where chemistry plays a role are of particular interest for many applications. For example, the deep interiors of Neptune, Uranus, and hundreds of similar exoplanets are composed of molecular ices of carbon, hydrogen, oxygen, and nitrogen at pressures of several hundred GPa and temperatures of many thousand Kelvin. High-quality thermophysical experimental data and high-fidelity simulations including chemical reactions are necessary to constrain planetary models over a large range of conditions. As examples of where chemical reactions are important, and as a demonstration of the high fidelity possible for these both structurally and chemically complex systems, we will discuss shock and re-shock of liquid carbon dioxide (CO2) in the range 100 to 800 GPa, shock compression of the hydrocarbon polymers polyethylene (PE) and poly(4-methyl-1-pentene) (PMP), and finally simulations of shock compression of glow discharge polymer (GDP) including the effects of doping with germanium. Experimental results from Sandia's Z machine have time and again validated the DFT simulations at extreme conditions, and the combination of experiment and DFT provides reliable data for evaluating existing and constructing future wide-range equation-of-state models for molecular compounds like CO2 and polymers like PE, PMP, and GDP. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Machinability and modeling of cutting mechanism for Titanium Metal Matrix composites
NASA Astrophysics Data System (ADS)
Bejjani, Roland
Titanium Metal Matrix composites (TiMMC) are a new class of material, but they are very difficult to cut, and tool life is therefore limited. In order to optimize the machining of TiMMC, three approaches (stages) were used. First, the TAGUCHI method for the design of experiments was used in order to identify the effects of the machining inputs (speed, feed, depth) on the outputs (cutting forces, surface roughness). To enhance tool life even further, Laser Assisted Machining (LAM) was also investigated. In a second approach, and in order to better understand the cutting mechanism of TiMMC, the chip formation was analyzed and a new model for the adiabatic shear band in the chip segment was developed. In the last approach, and in order to have a better analysis tool to understand the cutting mechanism, a new constitutive model of TiMMC for simulation purposes was developed, with an added damage model. The FEM simulation results led to predictions of temperature, stress, strain, and damage, and can be used as an analysis tool and even for industrial applications. Following experimental work and analysis, I found that cutting TiMMC at higher speeds is more efficient and productive because it increases tool life. It was found that at higher speeds, fewer hard TiC particles are broken, resulting in reduced tool abrasion wear. In order to further optimize the machining of TiMMC, an unconventional machining method was used. In fact, Laser Assisted Machining (LAM) was used and was found to increase the tool life by approximately 180%. To understand the effects of the particles on the tool, micro-scale observations of hard particles with SEM microscopy were performed, and it was found that the tool/particle interaction while cutting can take three forms. The particles can either be cut at the surface, pushed inside the material, or even some of the pieces of the cut particles can be pushed inside the material. No particle de-bonding was observed. Some
Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness
NASA Astrophysics Data System (ADS)
Kusuma, K. K.; Maruf, A.
2016-02-01
Scheduling problems for non-identical machines with low utilization and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is categorized as an integer linear programming model and uses a branch and bound algorithm as the solution method. We use fixed delivery time as the main constraint and different processing times to process a job. The results of this proposed model show that the utilization of production machines can be increased with minimal tardiness, using fixed delivery time as a constraint.
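The objective such a model minimizes is total tardiness, Σ_j max(0, C_j − d_j), where C_j is the completion time and d_j the due date of job j. A minimal sketch (job data invented, and only the objective evaluation is shown, not the branch-and-bound search) that scores a given assignment of jobs to machines and a given sequence on each machine:

```python
def total_tardiness(jobs, sequence):
    """jobs: {name: (processing_time, due_date)}.
    sequence: one ordered job list per machine.
    Returns summed tardiness max(0, completion - due_date) over all jobs."""
    tardiness = 0.0
    for machine_jobs in sequence:
        t = 0.0  # this machine's clock
        for name in machine_jobs:
            p, due = jobs[name]
            t += p  # job completes when the machine clock advances past it
            tardiness += max(0.0, t - due)
    return tardiness

jobs = {"J1": (3, 4), "J2": (2, 5), "J3": (4, 6)}
# Two non-identical machines: J1 then J3 on machine A, J2 alone on machine B.
print(total_tardiness(jobs, [["J1", "J3"], ["J2"]]))
# -> 1.0  (only J3 finishes late, at t=7 against a due date of 6)
```

A branch-and-bound solver would enumerate assignments and sequences, using a function like this to evaluate leaves and bound partial schedules.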
ERIC Educational Resources Information Center
Georgiopoulos, M.; DeMara, R. F.; Gonzalez, A. J.; Wu, A. S.; Mollaghasemi, M.; Gelenbe, E.; Kysilka, M.; Secretan, J.; Sharma, C. A.; Alnsour, A. J.
2009-01-01
This paper presents an integrated research and teaching model that has resulted from an NSF-funded effort to introduce results of current Machine Learning research into the engineering and computer science curriculum at the University of Central Florida (UCF). While in-depth exposure to current topics in Machine Learning has traditionally occurred…
(abstract) Modeling Protein Families and Human Genes: Hidden Markov Models and a Little Beyond
NASA Technical Reports Server (NTRS)
Baldi, Pierre
1994-01-01
We will first give a brief overview of Hidden Markov Models (HMMs) and their use in Computational Molecular Biology. In particular, we will describe a detailed application of HMMs to the G-Protein-Coupled-Receptor Superfamily. We will also describe a number of analytical results on HMMs that can be used in discrimination tests and database mining. We will then discuss the limitations of HMMs and some new directions of research. We will conclude with some recent results on the application of HMMs to human gene modeling and parsing.
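The basic HMM computation underlying such discrimination tests and database mining is the forward recursion, which evaluates P(observations | model). A minimal sketch follows; the two-state model below is a generic placeholder, not the GPCR model from the talk:

```python
def forward(obs, pi, A, B):
    """Return P(obs | model) via the HMM forward recursion."""
    n = len(pi)
    # Initialization: alpha_1(i) = pi_i * b_i(o_1)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    # Induction: alpha_t(i) = sum_j alpha_{t-1}(j) a_ji * b_i(o_t)
    for t in range(1, len(obs)):
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][obs[t]]
                 for i in range(n)]
    return sum(alpha)

# Placeholder two-state, two-symbol model (illustrative numbers).
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]   # rows: states, cols: symbols 0/1
p = forward([0, 1, 0], pi, A, B)
```

In practice the recursion is carried out in log space (or with scaling) to avoid underflow on long biological sequences.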
ERIC Educational Resources Information Center
Henkes, Robert
1978-01-01
Abstract art provokes numerous interpretations, and as many misunderstandings. The adolescent reaction is no exception. The procedure described here can help the student to understand the abstract from at least one direction. (Author/RK)
Pope, Paul A; Ranken, Doug M
2010-01-01
A method for abstracting a 3D model by shrinking a triangular mesh, defined upon a best fitting ellipsoid surrounding the model, onto the model's surface has been previously described. This "shrinkwrap" process enables a semi-regular mesh to be defined upon an object's surface. This creates a useful data structure for conducting remote sensing simulations and image processing. However, using a best fitting ellipsoid having a graticule-based tessellation to seed the shrinkwrap process suffers from a mesh which is too dense at the poles. To achieve a more regular mesh, the use of a best fitting, subdivided icosahedron was tested. By subdividing each of the twenty facets of the icosahedron into regular triangles of a predetermined size, arbitrarily dense, highly-regular starting meshes can be created. Comparisons of the meshes resulting from these two seed surfaces are described. Use of a best fitting icosahedron-based mesh as the seed surface in the shrinkwrap process is preferable to using a best fitting ellipsoid. The impact on remote sensing simulations, specifically the generation of synthetic imagery, is illustrated.
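The mesh-density control described above follows from the standard geodesic-sphere counting formulas: subdividing each icosahedron edge into f segments yields 20f² triangles and 10f² + 2 vertices (a sketch, not taken from the paper):

```python
def geodesic_counts(f):
    """Vertex/edge/face counts of an icosahedron subdivided at frequency f."""
    faces = 20 * f * f
    vertices = 10 * f * f + 2
    edges = 30 * f * f   # consistent with Euler's formula V - E + F = 2
    return vertices, edges, faces

v, e, fc = geodesic_counts(1)   # plain icosahedron: 12 vertices, 20 faces
```

This is why the seed mesh can be made arbitrarily dense while staying highly regular, unlike a graticule, whose cells shrink toward the poles.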
On problems in defining abstract and metaphysical concepts--emergence of a new model.
Nahod, Bruno; Nahod, Perina Vukša
2014-12-01
Basic anthropological terminology is the first project covering terms from the domain of the social sciences under the Croatian Special Field Terminology program (Struna). Problems that have been sporadically noticed or whose existence could have been presumed during the processing of terms mainly from technical fields and sciences have finally emerged in "anthropology". The principles of the General Theory of Terminology (GTT), which are followed in Struna, were put to a truly exacting test, and sometimes stretched beyond their limits when applied to concepts that do not necessarily have references in the physical world; namely, abstract and metaphysical concepts. We are currently developing a new terminographical model based on Idealized Cognitive Models (ICM), which will hopefully ensure a better cross-field implementation of various types of concepts and their relations. The goal of this paper is to introduce the theoretical bases of our model. Additionally, we will present a pilot study of the series of experiments in which we are trying to investigate the nature of conceptual categorization in special languages and its proposed difference from categorization in general language. PMID:25643547
A Framework for the Abstraction of Mesoscale Modeling for Weather Simulation
NASA Astrophysics Data System (ADS)
Limpasuvan, V.; Ujcich, B. E.
2009-12-01
Widely disseminated weather forecast results (e.g. from various national centers and private companies) are useful for typical users in gauging future atmospheric disturbances. However, these canonical forecasts may not adequately meet the needs of end-users in various scientific fields, since a predetermined model, as structured by the model administrator, produces these forecasts. To perform his/her own successful forecasts, a user faces a steep learning curve involving the collection of initial condition data (e.g. radar, satellite, and reanalyses) and the operation of a suitable model (and associated software/computing). In this project, we develop an intermediate (prototypical) software framework and a web-based front-end interface that allow for the abstraction of an advanced weather model upon which the end-user can perform customizable forecasts and analyses. Having such an accessible front-end interface for a weather model can benefit educational programs at the secondary school and undergraduate level, scientific research in fields like fluid dynamics and meteorology, and the general public. In all cases, our project allows the user to generate a localized domain of choice, run the desired forecast on a remote high-performance computer cluster, and visually inspect the results. For instance, an undergraduate science curriculum could incorporate the resulting weather forecasts in laboratory exercises. Scientific researchers and graduate students would be able to readily adjust key prognostic variables in the simulation within the project's framework. The general public within the contiguous United States could also run a simplified version of the project's software with adjustments in forecast clarity (spatial resolution) and region size (domain). Special cases of general interest, in which a detailed forecast may be required, would be over areas of possible strong weather activity.
NASA Astrophysics Data System (ADS)
Laiho, Antti; Holopainen, Timo P.; Klinge, Paul; Arkkio, Antero
2007-05-01
In this work the effects of the electromechanical interaction on rotordynamics and vibration characteristics of cage rotor electrical machines were considered. An eccentric rotor motion distorts the electromagnetic field in the air-gap between the stator and rotor inducing a total force, the unbalanced magnetic pull, exerted on the rotor. In this paper a low-order parametric model for the unbalanced magnetic pull is coupled with a three-dimensional finite element structural model of the electrical machine. The main contribution of the work is to present a computationally efficient electromechanical model for vibration analysis of cage rotor machines. In this model, the interaction between the mechanical and electromagnetic systems is distributed over the air gap of the machine. This enables the inclusion of rotor and stator deflections into the analysis and, thus, yields more realistic prediction for the effects of electromechanical interaction. The model was tested by implementing it for two electrical machines with nominal speeds close to one of the rotor bending critical speeds. Rated machine data was used in order to predict the effects of the electromechanical interaction on vibration characteristics of the example machines.
Modelling of the dynamic behaviour of hard-to-machine alloys
NASA Astrophysics Data System (ADS)
Hokka, M.; Leemet, T.; Shrot, A.; Bäker, M.; Kuokkala, V.-T.
2012-08-01
Machining of titanium alloys and nickel based superalloys can be difficult due to their excellent mechanical properties combining high strength, ductility, and excellent overall high temperature performance. Machining of these alloys can, however, be improved by simulating the processes and by optimizing the machining parameters. The simulations, however, need accurate material models that predict the material behaviour in the range of strains and strain rates that occur in the machining processes. In this work, the behaviour of titanium 15-3-3-3 alloy and nickel based superalloy 625 were characterized in compression, and Johnson-Cook material model parameters were obtained from the results. For the titanium alloy, the adiabatic Johnson-Cook model predicts softening of the material adequately, but the high strain hardening rate of Alloy 625 in the model prevents the localization of strain and no shear bands were formed when using this model. For Alloy 625, the Johnson-Cook model was therefore modified to decrease the strain hardening rate at large strains. The models were used in the simulations of orthogonal cutting of the material. For both materials, the models are able to predict the serrated chip formation, frequently observed in the machining of these alloys. The machining forces also match relatively well, but some differences can be seen in the details of the experimentally obtained and simulated chip shapes.
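The unmodified Johnson-Cook flow stress referred to above has the standard form σ = (A + Bεⁿ)(1 + C ln(ε̇/ε̇₀))(1 − T*ᵐ), with T* the homologous temperature. A sketch follows; the parameter values are illustrative placeholders, not the fitted values obtained in this work:

```python
import math

def johnson_cook(strain, strain_rate, T,
                 A=900.0, B=600.0, n=0.3, C=0.015, m=1.0,
                 ref_rate=1.0, T_room=20.0, T_melt=1650.0):
    """Flow stress (MPa) from the standard Johnson-Cook model.
    All material constants here are illustrative placeholders."""
    hardening = A + B * strain ** n                 # strain hardening term
    rate = 1.0 + C * math.log(strain_rate / ref_rate)  # strain-rate term
    T_hom = (T - T_room) / (T_melt - T_room)        # homologous temperature
    softening = 1.0 - T_hom ** m                    # thermal softening term
    return hardening * rate * softening

sigma = johnson_cook(strain=0.5, strain_rate=1e4, T=600.0)
```

The modification described for Alloy 625 amounts to replacing the hardening term with one whose slope decays at large strains, so that adiabatic softening can localize into shear bands in the simulation.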
Machine Learning Models for Detection of Regions of High Model Form Uncertainty in RANS
NASA Astrophysics Data System (ADS)
Ling, Julia; Templeton, Jeremy
2015-11-01
Reynolds Averaged Navier Stokes (RANS) models are widely used because of their computational efficiency and ease-of-implementation. However, because they rely on inexact turbulence closures, they suffer from significant model form uncertainty in many flows. Many RANS models make use of the Boussinesq hypothesis, which assumes a non-negative, scalar eddy viscosity that provides a linear relation between the Reynolds stresses and the mean strain rate. In many flows of engineering relevance, this eddy viscosity assumption is violated, leading to inaccuracies in the RANS predictions. For example, in near wall regions, the Boussinesq hypothesis fails to capture the correct Reynolds stress anisotropy. In regions of flow curvature, the linear relation between Reynolds stresses and mean strain rate may be inaccurate. This model form uncertainty cannot be quantified by simply varying the model parameters, as it is rooted in the model structure itself. Machine learning models were developed to detect regions of high model form uncertainty. These machine learning models consisted of binary classifiers that predicted, on a point-by-point basis, whether or not key RANS assumptions were violated. These classifiers were trained and evaluated for their sensitivity, specificity, and generalizability on a database of canonical flows.
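A point-wise binary classifier of the kind described can be sketched as follows. The single scalar flow feature and the "assumption violated" label below are synthetic stand-ins for the paper's actual features and DNS-derived labels, and plain logistic regression stands in for whatever classifier was used:

```python
import math, random

def train_logreg(xs, ys, lr=0.5, epochs=300):
    """Stochastic-gradient logistic regression on one scalar feature."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x
            b += lr * (y - p)
    return w, b

random.seed(1)
# Synthetic normalized flow feature and synthetic violation label:
# the Boussinesq assumption is "violated" where the feature exceeds 1.
xs = [random.uniform(0.0, 2.0) for _ in range(200)]
ys = [1 if x > 1.0 else 0 for x in xs]
w, b = train_logreg(xs, ys)
acc = sum((1 if w * x + b > 0 else 0) == y
          for x, y in zip(xs, ys)) / len(ys)
```

Evaluating sensitivity and specificity separately, as the abstract notes, matters because violation regions are typically a small fraction of the flow domain, so raw accuracy alone can be misleading.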
Carrigan, B.
1980-06-01
Lower atmospheric modeling of air pollution from both mobile and stationary sources is covered in this bibliography. Models cover local diffusion, urban heat islands, precipitation washout, worldwide diffusion, climatology, and smog. Stratospheric modeling concerning supersonic aircraft is excluded. (This updated bibliography contains 130 abstracts, 88 of which are new entries to the previous edition.)
Atmospheric modeling of air pollution. 1977-78 (a bibliography with abstracts). Report for 1977-1978
Carrigan, B.
1980-06-01
Lower atmospheric modeling of air pollution from both mobile and stationary sources is covered in this bibliography. Models cover local diffusion, urban heat islands, precipitation washout, worldwide diffusion, climatology, and smog. Stratospheric modeling concerning supersonic aircraft is excluded. (This updated bibliography contains 216 abstracts, none of which are new entries to the previous edition.)
A stochastic model for the cell formation problem considering machine reliability
NASA Astrophysics Data System (ADS)
Esmailnezhad, Bahman; Fattahi, Parviz; Kheirkhah, Amir Saman
2015-03-01
This paper presents a new mathematical model to solve the cell formation problem in cellular manufacturing systems, where inter-arrival times, processing times, and machine breakdown times are probabilistic. The objective function maximizes the number of operations of each part with higher arrival rate within one cell. Because a queue forms behind each machine, queuing theory is used to formulate the model. To solve the model, two metaheuristic algorithms, a modified particle swarm optimization and a genetic algorithm, are proposed. For the generation of initial solutions in these algorithms, a new heuristic method is developed, which always creates feasible solutions. Both metaheuristic algorithms are compared against global solutions obtained from Lingo software's branch and bound (B&B) solver. A statistical method is also used for comparison of the solutions of the two metaheuristic algorithms. The results of numerical examples indicate that considering machine breakdowns has a significant effect on the block structures of the machine-part matrices.
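The abstract does not specify which queuing model is used, but the natural building block for a single machine with probabilistic inter-arrival and processing times is the standard M/M/1 queue; its steady-state utilization and waiting metrics are easy to state (the rates below are illustrative):

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 metrics for arrival rate lam < service rate mu."""
    assert lam < mu, "queue is unstable"
    rho = lam / mu                   # machine utilization
    Lq = rho * rho / (1.0 - rho)     # mean number of parts waiting
    Wq = Lq / lam                    # mean waiting time (Little's law)
    return rho, Lq, Wq

rho, Lq, Wq = mm1_metrics(lam=2.0, mu=5.0)   # illustrative rates
```

Machine breakdowns are often folded into such a model by deflating the effective service rate mu by the machine's availability; whether the paper does exactly this is not stated in the abstract.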
What good are abstract and what-if models? Lessons from the Gaïa hypothesis.
Dutreuil, Sébastien
2014-08-01
This article on the epistemology of computational models stems from an analysis of the Gaïa hypothesis (GH). It begins with James Kirchner's criticisms of the central computational model of GH, Daisyworld. Among other things, the model has been criticized for being too abstract, describing fictional entities (fictive daisies on an imaginary planet) and trying to answer counterfactual (what-if) questions (what would a planet look like if life had no influence on it?). For these reasons the model has been considered not testable and therefore not legitimate in science, and in any case not very interesting since it explores non-actual issues. This criticism implicitly assumes that science should only be involved in the making of models that are "actual" (as opposed to what-if) and "specific" (as opposed to abstract). I challenge both of these criticisms in this article. First, I show that although testability (understood as the comparison of model output with empirical data) is an important procedure for explanatory models, there are plenty of models that are not testable. The fact that these are not testable (in this restricted sense) has nothing to do with their being "abstract" or "what-if" but with their being predictive models. Secondly, I argue that "abstract" and "what-if" models aim at (respectable) epistemic purposes distinct from those pursued by "actual and specific" models. Abstract models are used to propose how-possibly explanations or to pursue theorizing. What-if models are used to attribute causal or explanatory power to a variable of interest. The fact that they aim at different epistemic goals entails that it may not be accurate to consider the choice between different kinds of model as a "strategy". PMID:25515262
Human factors model concerning the man-machine interface of mining crewstations
NASA Technical Reports Server (NTRS)
Rider, James P.; Unger, Richard L.
1989-01-01
The U.S. Bureau of Mines is developing a computer model to analyze the human factors aspect of mining machine operator compartments. The model will be used as a research tool and as a design aid. It will have the capability to perform the following: simulated anthropometric or reach assessment, visibility analysis, illumination analysis, structural analysis of the protective canopy, operator fatigue analysis, and computation of an ingress-egress rating. The model will make extensive use of graphics to simplify data input and output. Two-dimensional orthographic projections of the machine and its operator compartment are digitized and the data rebuilt into a three-dimensional representation of the mining machine. Anthropometric data from either an individual or any size population may be used. The model is intended for use by equipment manufacturers and mining companies during initial design work on new machines. In addition to its use in machine design, the model should prove helpful as an accident investigation tool and for determining the effects of machine modifications made in the field on the critical areas of visibility and control reachability.
Robust current control of AC machines using the internal model control method
Harnefors, L.; Nee, H.P.
1995-12-31
In the present paper, the internal model control (IMC) method is introduced and applied to ac machine current control. A permanent-magnet synchronous machine is used as an example. It is shown that the IMC design is straightforward and the resulting controller is simple to implement. The controller parameters are expressed in terms of the machine parameters and the desired closed-loop rise time. The extra cost of implementation compared to PI control is negligible. It is further shown that IMC is able to outperform PI control, both with and without decoupling, with respect to dq variable interaction in the presence of parameter deviations.
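For a current-dynamics plant of the form G(s) = 1/(sL + R), first-order IMC yields a PI controller whose gains follow directly from the machine parameters and the desired closed-loop bandwidth α, with α set from the specified 10-90% rise time via t_r = ln 9 / α. The sketch below, with illustrative machine parameters (not from the paper), checks that the resulting first-order closed loop exhibits the requested rise time:

```python
import math

L_s, R_s = 0.01, 0.5    # illustrative stator inductance (H) and resistance (ohm)
t_rise = 1e-3           # desired 10-90% closed-loop rise time (s)
alpha = math.log(9.0) / t_rise

kp, ki = alpha * L_s, alpha * R_s   # IMC-derived PI gains

# Ideal closed loop is first order: dy/dt = alpha * (1 - y). Simulate a step
# with forward Euler and measure the 10-90% rise time.
dt, y, t = 1e-6, 0.0, 0.0
t10 = t90 = None
while t90 is None:
    y += dt * alpha * (1.0 - y)
    t += dt
    if t10 is None and y >= 0.1:
        t10 = t
    if y >= 0.9:
        t90 = t
rise = t90 - t10
```

The measured rise time matches t_rise up to discretization error, which is the sense in which the controller parameters are "expressed in the machine parameters and the desired closed-loop rise time".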
Including slot harmonics to mechanical model of two-pole induction machine with a force actuator
NASA Astrophysics Data System (ADS)
Sinervo, Anssi; Arkkio, Antero
2012-10-01
A simple mechanical model is identified for a two-pole induction machine that has a four-pole extra winding as a force actuator. The actuator can be used to suppress rotor vibrations. Forces affecting the rotor of the induction machine are separated into actuator force, purely mechanical force due to mass unbalance, and force caused by unbalanced magnetic pull from higher harmonics and unipolar flux. The force due to higher harmonics is embedded to the mechanical model. Parameters of the modified mechanical model are identified from measurements and the modifications are shown to be necessary. The force produced by the actuator is calculated using the mechanical model, direct flux measurements, and voltage and current of the force actuator. All three methods are shown to give matching results proving that the mechanical model can be used in vibration control. The test machine is shown to have time periodic behavior and discrete Fourier analysis is used to obtain time-invariant model parameters.
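The discrete Fourier analysis step, extracting a single frequency component from a measured time-periodic signal so that time-invariant parameters can be identified, can be sketched as follows (the signal here is synthetic, not measurement data from the test machine):

```python
import math

def dft_bin(signal, k):
    """Magnitude of the k-th DFT bin of a real sampled signal."""
    N = len(signal)
    re = sum(s * math.cos(2 * math.pi * k * n / N) for n, s in enumerate(signal))
    im = -sum(s * math.sin(2 * math.pi * k * n / N) for n, s in enumerate(signal))
    return math.hypot(re, im)

N = 64
# Synthetic periodic rotor signal: exactly 3 cycles per record, unit amplitude.
signal = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]
amp = 2.0 * dft_bin(signal, 3) / N   # rescale bin magnitude to signal amplitude
```

Sampling an integer number of periods, as assumed here, is what makes each harmonic land exactly in one bin; otherwise leakage spreads it across neighboring bins.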
Kuwahara, Hiroyuki; Myers, Chris J.; Samoilov, Michael S.
2010-01-01
Uropathogenic Escherichia coli (UPEC) represent the predominant cause of urinary tract infections (UTIs). A key UPEC molecular virulence mechanism is type 1 fimbriae, whose expression is controlled by the orientation of an invertible chromosomal DNA element—the fim switch. Temperature has been shown to act as a major regulator of fim switching behavior and is overall an important indicator as well as functional feature of many urologic diseases, including UPEC host-pathogen interaction dynamics. Given this panoptic physiological role of temperature during UTI progression and notable empirical challenges to its direct in vivo studies, in silico modeling of corresponding biochemical and biophysical mechanisms essential to UPEC pathogenicity may significantly aid our understanding of the underlying disease processes. However, rigorous computational analysis of biological systems, such as fim switch temperature control circuit, has hereto presented a notoriously demanding problem due to both the substantial complexity of the gene regulatory networks involved as well as their often characteristically discrete and stochastic dynamics. To address these issues, we have developed an approach that enables automated multiscale abstraction of biological system descriptions based on reaction kinetics. Implemented as a computational tool, this method has allowed us to efficiently analyze the modular organization and behavior of the E. coli fimbriation switch circuit at different temperature settings, thus facilitating new insights into this mode of UPEC molecular virulence regulation. In particular, our results suggest that, with respect to its role in shutting down fimbriae expression, the primary function of FimB recombinase may be to effect a controlled down-regulation (rather than increase) of the ON-to-OFF fim switching rate via temperature-dependent suppression of competing dynamics mediated by recombinase FimE. Our computational analysis further implies that this down
Modelling of internal architecture of kinesin nanomotor as a machine language.
Khataee, H R; Ibrahim, M Y
2012-09-01
Kinesin is a protein-based natural nanomotor that transports molecular cargoes within cells by walking along microtubules. Kinesin nanomotor is considered as a bio-nanoagent which is able to sense the cell through its sensors (i.e. its heads and tail), make the decision internally and perform actions on the cell through its actuator (i.e. its motor domain). The study maps the agent-based architectural model of internal decision-making process of kinesin nanomotor to a machine language using an automata algorithm. The applied automata algorithm receives the internal agent-based architectural model of kinesin nanomotor as a deterministic finite automaton (DFA) model and generates a regular machine language. The generated regular machine language was acceptable by the architectural DFA model of the nanomotor and also in good agreement with its natural behaviour. The internal agent-based architectural model of kinesin nanomotor indicates the degree of autonomy and intelligence of the nanomotor interactions with its cell. Thus, our developed regular machine language can model the degree of autonomy and intelligence of kinesin nanomotor interactions with its cell as a language. Modelling of internal architectures of autonomous and intelligent bio-nanosystems as machine languages can lay the foundation towards the concept of bio-nanoswarms and next phases of the bio-nanorobotic systems development. PMID:22894532
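A deterministic finite automaton of the kind the paper maps the nanomotor onto can be simulated in a few lines. The states, alphabet, and accepted language below are generic placeholders, not the actual kinesin DFA:

```python
def dfa_accepts(transitions, start, accepting, word):
    """Run a DFA given as {(state, symbol): next_state} over a word."""
    state = start
    for symbol in word:
        if (state, symbol) not in transitions:
            return False        # undefined transition: reject
        state = transitions[(state, symbol)]
    return state in accepting

# Toy DFA accepting binary strings with an even number of '1's (placeholder).
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}
ok = dfa_accepts(delta, "even", {"even"}, "1100")
```

The "regular machine language" of the abstract is then simply the set of all symbol sequences this acceptance test returns True for.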
NASA Astrophysics Data System (ADS)
Zhou, Kan
With the modern trend of transportation electrification, electric machines are a key component of electric/hybrid electric vehicle (EV/HEV) powertrains. It is therefore important that vehicle powertrain-level and system-level designers and control engineers have access to accurate yet computationally-efficient (CE), physics-based modeling tools of the thermal and electromagnetic (EM) behavior of electric machines. In this dissertation, CE yet sufficiently-accurate thermal and EM models for electric machines, which are suitable for use in vehicle powertrain design, optimization, and control, are developed. This includes not only creating fast and accurate thermal and EM models for specific machine designs, but also the ability to quickly generate and determine the performance of new machine designs through the application of scaling techniques to existing designs. With the developed techniques, the thermal and EM performance can be accurately and efficiently estimated. Furthermore, powertrain or system designers can easily and quickly adjust the characteristics and the performance of the machine in ways that are favorable to the overall vehicle performance.
Interpreting linear support vector machine models with heat map molecule coloring
2011-01-01
Background Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides a strong performance, the interpretability of a machine learning model is a desired property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines have been shown to deliver convincing performance on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity. Results We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach helps to determine the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor. Conclusions In combination with heat map coloring, linear support vector machine models can help to guide the modification of a compound in later stages of drug discovery. Particularly substructures identified as important by our method might be a starting point for optimization of a lead compound. The heat map coloring should be considered as complementary to structure based modeling approaches. As such, it helps to get a better understanding of the binding mode of an inhibitor. PMID:21439031
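The core of such a coloring scheme, attributing the linear model's weights back to the atoms that participate in each fingerprint feature, can be sketched as follows. The weights, fingerprint, and atom-to-feature mapping below are illustrative, not the paper's actual descriptors:

```python
def atom_scores(weights, fingerprint, atoms_per_feature, n_atoms):
    """Color score per atom: each present feature's SVM weight is split
    evenly among the atoms that make up that feature (illustrative rule)."""
    scores = [0.0] * n_atoms
    for f, present in enumerate(fingerprint):
        if present:
            share = weights[f] / len(atoms_per_feature[f])
            for a in atoms_per_feature[f]:
                scores[a] += share
    return scores

w = [1.2, -0.8, 0.3]              # learned linear-SVM feature weights (illustrative)
fp = [1, 1, 0]                    # which features are present in the compound
atom_map = [[0, 1], [1, 2], [0]]  # atoms constituting each feature (illustrative)
scores = atom_scores(w, fp, atom_map, n_atoms=3)
```

A positive score then maps to an "activity-increasing" color and a negative score to an "activity-decreasing" one, which is what makes substructures important for binding visually apparent.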
ERIC Educational Resources Information Center
Yakubova, Gulnoza; Hughes, Elizabeth M.; Shinaberry, Megan
2016-01-01
The purpose of this study was to determine the effectiveness of a video modeling intervention with concrete-representational-abstract instructional sequence in teaching mathematics concepts to students with autism spectrum disorder (ASD). A multiple baseline across skills design of single-case experimental methodology was used to determine the…
Modeling human-machine interactions for operations room layouts
NASA Astrophysics Data System (ADS)
Hendy, Keith C.; Edwards, Jack L.; Beevis, David
2000-11-01
The LOCATE layout analysis tool was used to analyze three preliminary configurations for the Integrated Command Environment (ICE) of a future USN platform. LOCATE develops a cost function reflecting the quality of all human-human and human-machine communications within a workspace. This proof- of-concept study showed little difference between the efficacy of the preliminary designs selected for comparison. This was thought to be due to the limitations of the study, which included the assumption of similar size for each layout and a lack of accurate measurement data for various objects in the designs, due largely to their notional nature. Based on these results, the USN offered an opportunity to conduct a LOCATE analysis using more appropriate assumptions. A standard crew was assumed, and subject matter experts agreed on the communications patterns for the analysis. Eight layouts were evaluated with the concepts of coordination and command factored into the analysis. Clear differences between the layouts emerged. The most promising design was refined further by the USN, and a working mock-up built for human-in-the-loop evaluation. LOCATE was applied to this configuration for comparison with the earlier analyses.
ERIC Educational Resources Information Center
Pietropola, Anne
1998-01-01
Describes a lesson designed to culminate a year of eighth-grade art classes in which students explore elements of design and space by creating 3-D abstract constructions. Outlines the process of using foam board and markers to create various shapes and optical effects. (DSK)
Modelling of Tool Wear and Residual Stress during Machining of AISI H13 Tool Steel
NASA Astrophysics Data System (ADS)
Outeiro, José C.; Umbrello, Domenico; Pina, José C.; Rizzuti, Stefania
2007-05-01
Residual stresses can enhance or impair the ability of a component to withstand loading conditions in service (fatigue, creep, stress corrosion cracking, etc.), depending on their nature: compressive or tensile, respectively. This poses enormous problems in structural assembly as this affects the structural integrity of the whole part. In addition, tool wear issues are of critical importance in manufacturing since these affect component quality, tool life and machining cost. Therefore, prediction and control of both tool wear and the residual stresses in machining are absolutely necessary. In this work, a two-dimensional Finite Element model using an implicit Lagrangian formulation with an automatic remeshing was applied to simulate the orthogonal cutting process of AISI H13 tool steel. To validate such model the predicted and experimentally measured chip geometry, cutting forces, temperatures, tool wear and residual stresses on the machined affected layers were compared. The proposed FE model allowed us to investigate the influence of tool geometry, cutting regime parameters and tool wear on residual stress distribution in the machined surface and subsurface of AISI H13 tool steel. The obtained results permit to conclude that in order to reduce the magnitude of surface residual stresses, the cutting speed should be increased, the uncut chip thickness (or feed) should be reduced and machining with honed tools having large cutting edge radii produce better results than chamfered tools. Moreover, increasing tool wear increases the magnitude of surface residual stresses.
Machine learning for many-body physics: The case of the Anderson impurity model
NASA Astrophysics Data System (ADS)
Arsenault, Louis-François; Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole; Millis, Andrew J.
2014-10-01
Machine learning methods are applied to finding the Green's function of the Anderson impurity model, a basic model system of quantum many-body condensed-matter physics. Different methods of parametrizing the Green's function are investigated; a representation in terms of Legendre polynomials is found to be superior due to its limited number of coefficients and its applicability to state-of-the-art methods of solution. The dependence of the errors on the size of the training set is determined. The results indicate that a machine learning approach to dynamical mean-field theory may be feasible.
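The Legendre representation mentioned above amounts to expanding a function on [−1, 1] in Legendre polynomials and keeping only a small number of coefficients, c_l = (2l+1)/2 ∫ f(x) P_l(x) dx. A self-contained sketch, using a generic smooth test function rather than an actual Green's function:

```python
import math

def legendre(l, x):
    """P_l(x) via the Bonnet three-term recurrence."""
    if l == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(2, l + 1):
        p_prev, p = p, ((2 * k - 1) * x * p - (k - 1) * p_prev) / k
    return p

def fit_coeffs(f, lmax, n=20000):
    """Project f onto P_0..P_lmax by trapezoidal quadrature on [-1, 1]."""
    xs = [-1.0 + 2.0 * i / n for i in range(n + 1)]
    h = 2.0 / n
    coeffs = []
    for l in range(lmax + 1):
        vals = [f(x) * legendre(l, x) for x in xs]
        integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
        coeffs.append((2 * l + 1) / 2.0 * integral)
    return coeffs

def reconstruct(coeffs, x):
    return sum(c * legendre(l, x) for l, c in enumerate(coeffs))

# Generic smooth stand-in for a Green's function on the scaled interval.
coeffs = fit_coeffs(math.exp, lmax=8)
err = max(abs(reconstruct(coeffs, x) - math.exp(x))
          for x in (-0.9, -0.3, 0.0, 0.4, 0.8))
```

Nine coefficients reproduce the smooth function to high accuracy, which illustrates why a compact Legendre parametrization is attractive as the regression target of a machine learning model.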
Numerically Controlled Machining Of Wind-Tunnel Models
NASA Technical Reports Server (NTRS)
Kovtun, John B.
1990-01-01
A new procedure is described for constructing dynamic models and parts for wind-tunnel tests or radio-controlled flight tests. It involves the use of a single-phase numerical control (NC) technique to produce highly accurate, symmetrical models in less time.
Chin, George; Sivaramakrishnan, Chandrika; Critchlow, Terence J.; Schuchardt, Karen L.; Ngu, Anne Hee Hiong
2011-07-04
A drawback of existing scientific workflow systems is the lack of support to domain scientists in designing and executing their own scientific workflows. Many domain scientists avoid developing and using workflows because the basic objects of workflows are too low-level and high-level tools and mechanisms to aid in workflow construction and use are largely unavailable. In our research, we are prototyping higher-level abstractions and tools to better support scientists in their workflow activities. Specifically, we are developing generic actors that provide abstract interfaces to specific functionality, workflow templates that encapsulate workflow and data patterns that can be reused and adapted by scientists, and context-awareness mechanisms to gather contextual information from the workflow environment on behalf of the scientist. To evaluate these scientist-centered abstractions on real problems, we apply them to construct and execute scientific workflows in the specific domain area of groundwater modeling and analysis.
Nonlinear and Digital Man-machine Control Systems Modeling
NASA Technical Reports Server (NTRS)
Mekel, R.
1972-01-01
An adaptive modeling technique is examined by which controllers can be synthesized to provide corrective dynamics to a human operator's mathematical model in closed-loop control systems. The technique utilizes a class of Liapunov functions formulated for this purpose, Liapunov's stability criterion, and a model-reference system configuration. The Liapunov function is formulated to possess variable characteristics in order to take the identification dynamics into consideration. The time derivative of the Liapunov function generates the identification and control laws for the mathematical model system. These laws permit the realization of a controller which updates the human operator's mathematical model parameters so that model and human operator produce the same response when subjected to the same stimulus. A very useful feature is the development of a digital computer program which is easily implemented and modified concurrently with experimentation. The program permits the modeling process to interact with the experimentation process in a mutually beneficial way.
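A minimal sketch of Lyapunov-based model-reference adaptation of the kind the abstract describes, assuming first-order plant and reference dynamics and a sinusoidal stimulus; all parameter values and the specific adaptation law are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Model-reference adaptation sketch (assumed first-order dynamics):
#   plant:     dx_p/dt = -a*x_p + k_p*u,    k_p unknown to the controller
#   reference: dx_m/dt = -a*x_m + k_m*r(t)
#   control:   u = theta*r
#   Lyapunov-derived adaptation: dtheta/dt = -gamma*e*r, with e = x_p - x_m
a, k_p, k_m, gamma = 2.0, 1.5, 2.0, 5.0
dt, steps = 1e-3, 30000
x_p = x_m = theta = 0.0
t = 0.0
for _ in range(steps):
    r = np.sin(t)                      # persistent excitation (stimulus)
    e = x_p - x_m                      # model-following error
    u = theta * r
    x_p += dt * (-a * x_p + k_p * u)   # forward-Euler integration
    x_m += dt * (-a * x_m + k_m * r)
    theta += dt * (-gamma * e * r)     # parameter update driven by e
    t += dt
print(f"theta -> {theta:.3f} (ideal k_m/k_p = {k_m / k_p:.3f})")
```

With matched plant and model poles, the Lyapunov function V = e²/2 + k_p(θ − k_m/k_p)²/(2γ) has V̇ = −a e² ≤ 0, so the adapted parameter drives the model-following error to zero.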
Abdel-Aal, R E; Mangoud, A M
1996-09-01
The use of modern abductive machine learning techniques is described for modeling and predicting outcome parameters in terms of input parameters in medical survey data. The AIM (Abductory Induction Mechanism) abductive network machine-learning tool is used to model the educational score in a health survey of 2,720 Albanian primary school children. Data included the child's age, gender, vision, nourishment, parasite infection, family size, parents' education, and educational score. Models synthesized by training on just 100 cases predict the educational score output for the remaining 2,620 cases with 100% accuracy. Simple models represented as analytical functions highlight global relationships and trends in the survey population. Models generated are quite robust, with no change in the basic model structure for a 10-fold increase in the size of the training set. Compared to other statistical and neural network approaches, AIM provides faster and highly automated model synthesis, requiring little or no user intervention. PMID:8952313
Ekins, Sean; Freundlich, Joel S.; Reynolds, Robert C.
2013-01-01
The search for new tuberculosis treatments continues as we need to find molecules that can act more quickly, be accommodated in multi-drug regimens, and overcome ever-increasing levels of drug resistance. Multiple large-scale phenotypic high-throughput screens against Mycobacterium tuberculosis (Mtb) have generated dose response data, enabling the generation of machine learning models. These models also incorporated cytotoxicity data and were recently validated with a large external dataset. A cheminformatics data-fusion approach followed by Bayesian machine learning, Support Vector Machine or Recursive Partitioning model development (based on publicly available Mtb screening data) was used to compare individual datasets and subsequent combined models. A set of 1924 commercially available molecules with promising antitubercular activity (and lack of relative cytotoxicity to Vero cells) were used to evaluate the predictive nature of the models. We demonstrate that combining three datasets incorporating antitubercular and cytotoxicity data in Vero cells from our previous screens results in an external validation receiver operating characteristic (ROC) of 0.83 (Bayesian or RP Forest). Models that do not have the highest five-fold cross-validation ROC scores can outperform other models in a test-set-dependent manner. We demonstrate with predictions for a recently published set of Mtb leads from GlaxoSmithKline that no single machine learning model may be enough to identify compounds of interest. Dataset fusion represents a further useful strategy for machine learning model construction, as illustrated with Mtb. Coverage of chemistry and Mtb target spaces may also be limiting factors for the whole-cell screening data generated to date. PMID:24144044
NASA Astrophysics Data System (ADS)
Ulutan, Durul
2013-01-01
In the aerospace industry, titanium and nickel-based alloys are frequently used for critical structural components, especially due to their higher strength at both low and high temperatures, and their higher resistance to wear and chemical degradation. However, because of their unfavorable thermal properties, deformation and friction-induced microstructural changes prevent the end products from having good surface integrity properties. In addition to surface roughness, microhardness changes, and microstructural alterations, the machining-induced residual stress profiles of titanium and nickel-based alloys contribute to the surface integrity of these products. Therefore, it is essential to create a comprehensive method that predicts the residual stress outcomes of machining processes, and to understand how machining parameters (cutting speed, uncut chip thickness, depth of cut, etc.) or tool parameters (tool rake angle, cutting edge radius, tool material/coating, etc.) affect the machining-induced residual stresses. Since experiments involve a certain amount of measurement error, physics-based simulation experiments should also involve an uncertainty in the predicted values, and a rich set of simulation experiments is utilized to create expected values and variances for predictions. As the first part of this research, a method to determine the friction coefficients during machining from practical experiments was introduced. Using these friction coefficients, finite element-based simulation experiments were utilized to determine the flow stress characteristics of materials and then to predict the machining-induced forces and residual stresses, and the results were validated using the experimental findings. A sensitivity analysis on the numerical parameters was conducted to understand the effect of changing physical and numerical parameters, increasing the confidence in the selected parameters, and the effect of machining parameters on machining-induced forces and residual
State Machine Modeling of the Space Launch System Solid Rocket Boosters
NASA Technical Reports Server (NTRS)
Harris, Joshua A.; Patterson-Hine, Ann
2013-01-01
The Space Launch System is a Shuttle-derived heavy-lift vehicle currently in development to serve as NASA's premier launch vehicle for space exploration. The Space Launch System is a multistage rocket with two Solid Rocket Boosters and multiple payloads, including the Multi-Purpose Crew Vehicle. Planned Space Launch System destinations include near-Earth asteroids, the Moon, Mars, and Lagrange points. The Space Launch System is a complex system with many subsystems, requiring considerable systems engineering and integration. To this end, state machine analysis offers a method to support engineering and operational efforts, identify and avert undesirable or potentially hazardous system states, and evaluate system requirements. Finite State Machines model a system as a finite number of states, with transitions between states controlled by state-based and event-based logic. State machines are a useful tool for understanding complex system behaviors and evaluating "what-if" scenarios. This work contributes to a state machine model of the Space Launch System developed at NASA Ames Research Center. The Space Launch System Solid Rocket Booster avionics and ignition subsystems are modeled using MATLAB/Stateflow software. This model is integrated into a larger model of Space Launch System avionics used for verification and validation of Space Launch System operating procedures and design requirements. This includes testing both nominal and off-nominal system states and command sequences.
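A finite state machine of the sort described can be sketched as a transition table in which unknown (state, event) pairs are rejected; the states and events below are hypothetical simplifications for illustration, not the actual Stateflow model of the SRB avionics.

```python
# Hypothetical, simplified state set and events -- not the actual SLS model.
TRANSITIONS = {
    ("SAFE",     "arm"):       "ARMED",
    ("ARMED",    "ignite"):    "IGNITION",
    ("ARMED",    "disarm"):    "SAFE",
    ("IGNITION", "thrust_ok"): "BURN",
    ("IGNITION", "abort"):     "SAFE",
    ("BURN",     "burnout"):   "SEPARATION",
}

def step(state, event):
    """Advance the machine; unknown (state, event) pairs are rejected,
    which is how off-nominal command sequences are flagged."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal event {event!r} in state {state!r}")

state = "SAFE"
for ev in ["arm", "ignite", "thrust_ok", "burnout"]:
    state = step(state, ev)
print(state)  # SEPARATION
```

Verification then amounts to exercising nominal and off-nominal event sequences and checking that every reachable state is acceptable: a command such as "ignite" issued in the "SAFE" state is rejected rather than silently accepted.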
Abstract State-Space Models for a Class of Linear Hyperbolic Systems of Balance Laws
NASA Astrophysics Data System (ADS)
Bartecki, Krzysztof
2015-12-01
The paper discusses and compares different abstract state-space representations for a class of linear hyperbolic systems defined on a one-dimensional spatial domain. It starts with their PDE representation in both weakly and strongly coupled forms. Next, the homogeneous state equation including the unbounded formal state operator is presented. Based on the semigroup approach, some results on well-posedness and internal stability are given. The boundary and observation operators are introduced, assuming a typical configuration of boundary inputs as well as pointwise observations of the state variables. Consequently, the homogeneous state equation is extended to the so-called boundary control state/signal form. Next, the classical additive state-space representation involving the (A, B, C)-triple of state, input and output operators is considered. After a short discussion of the appropriate Hilbert spaces, the state-space equation in the so-called factor form is also presented. Finally, the resolvent of the system state operator A is discussed.
SAINT: A combined simulation language for modeling man-machine systems
NASA Technical Reports Server (NTRS)
Seifert, D. J.
1979-01-01
SAINT (Systems Analysis of Integrated Networks of Tasks) is a network modeling and simulation technique for the design and analysis of complex man-machine systems. SAINT provides the conceptual framework for representing systems that consist of discrete task elements, continuous state variables, and interactions between them. It also provides a mechanism for combining human performance models and dynamic system behaviors in a single modeling structure. The SAINT technique is described and applications of SAINT are discussed.
Introduction: The National Research Council recommended quantitative evaluation of uncertainty in effect estimates for risk assessment. This analysis considers uncertainty across model forms and model parameterizations with hexavalent chromium [Cr(VI)] and lung cancer mortality a...
Multiscale Modeling and Analysis of an Ultra-Precision Damage Free Machining Method
NASA Astrophysics Data System (ADS)
Guan, Chaoliang; Peng, Wenqiang
2016-06-01
Under high laser flux, avoiding laser-induced damage of optical elements is key to the success of a laser fusion ignition system. A US government survey showed that machining-induced defects, which lower the laser-induced damage threshold (LIDT), are one of the three major challenges. Cracks and scratches caused by brittle- and plastic-mode removal machining are fatal flaws. A damage-free surface can be obtained on quartz glass using the hydrodynamic effect polishing (HEP) method. The material removal mechanism of this typical ultra-precision machining process was modeled at multiple scales. At the atomic scale, chemical modeling illustrated the weakening and breaking of chemical bonds. At the particle scale, micro-contact modeling gave the boundary of the elastic removal mode of materials. At the slurry scale, hydrodynamic flow modeling showed the dynamic pressure and shear stress distributions, which are related to the machining effect. An experiment was conducted on a numerically controlled system, and one quartz glass optical component was polished in the elastic mode. Results show that damage is removed layer by layer as the removal depth increases, owing to the high damage-free machining ability of HEP, and the LIDT of the sample was greatly improved.
Experience with abstract notation one
NASA Technical Reports Server (NTRS)
Harvey, James D.; Weaver, Alfred C.
1990-01-01
The development of computer science has produced a vast number of machine architectures, programming languages, and compiler technologies. The cross product of these three characteristics defines the spectrum of previous and present data representation methodologies. With regard to computer networks, the uniqueness of these methodologies presents an obstacle when disparate host environments are to be interconnected. Interoperability within a heterogeneous network relies upon the establishment of data representation commonality. The International Standards Organization (ISO) is currently developing the Abstract Syntax Notation One (ASN.1) standard and the Basic Encoding Rules (BER) standard, which collectively address this problem. When used within the presentation layer of the Open Systems Interconnection reference model, these two standards provide the data representation commonality required to facilitate interoperability. The details of a compiler that was built to automate the use of ASN.1 and BER are described. From this experience, insights into both standards are given and potential problems relating to this development effort are discussed.
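As a concrete illustration of what BER specifies, the sketch below encodes an ASN.1 INTEGER as a tag-length-value triple: tag 0x02, definite short-form length, and minimal big-endian two's-complement content octets, following the X.690 rules for small values. A full codec would also need long-form lengths, constructed types, and the other universal tags.

```python
def ber_encode_integer(value: int) -> bytes:
    """BER/DER encode an ASN.1 INTEGER (tag 0x02), definite short-form length."""
    # Two's-complement, big-endian content octets (+8 bits leaves room for the sign bit).
    n = max(1, (value.bit_length() + 8) // 8)
    content = value.to_bytes(n, "big", signed=True)
    # Strip redundant leading octets: 0x00 before a 0 sign bit, 0xFF before a 1 sign bit.
    while len(content) > 1 and (
        (content[0] == 0x00 and content[1] < 0x80)
        or (content[0] == 0xFF and content[1] >= 0x80)
    ):
        content = content[1:]
    return bytes([0x02, len(content)]) + content

print(ber_encode_integer(300).hex())  # -> "0202012c"
```

Note how 300 needs two content octets (0x01 0x2C) while -128 fits in one (0x80); the minimal-octet rule is what makes the encoding canonical.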
The Sausage Machine: A New Two-Stage Parsing Model.
ERIC Educational Resources Information Center
Frazier, Lyn; Fodor, Janet Dean
1978-01-01
The human sentence parsing device assigns phrase structure to sentences in two steps. The first stage parser assigns lexical and phrasal nodes to substrings of words. The second stage parser then adds higher nodes to link these phrasal packages together into a complete phrase marker. This model is compared with others. (Author/RD)
Experiments with encapsulation of Monte Carlo simulation results in machine learning models
NASA Astrophysics Data System (ADS)
Lal Shrestha, Durga; Kayastha, Nagendra; Solomatine, Dimitri
2010-05-01
Uncertainty analysis techniques based on Monte Carlo (MC) simulation have been applied successfully in the hydrological sciences in recent decades. They allow for quantification of the model output uncertainty resulting from uncertain model parameters, input data or model structure. They are very flexible, conceptually simple and straightforward, but become impractical in real-time applications for complex models, when there is little time to perform the uncertainty analysis, because of the large number of model runs required. A number of new methods have been developed to improve the efficiency of Monte Carlo methods, yet these methods still require a considerable number of model runs in both offline and operational mode to produce reliable and meaningful uncertainty estimates. This paper presents experiments with machine learning techniques used to encapsulate the results of MC runs. A version of the MC simulation method, the generalised likelihood uncertainty estimation (GLUE) method, is first used to assess the parameter uncertainty of the conceptual rainfall-runoff model HBV. Then three machine learning methods, namely artificial neural networks, M5 model trees and locally weighted regression, are trained to encapsulate the uncertainty estimated by the GLUE method using the historical input data. The trained machine learning models are then employed to predict the uncertainty of the model output for new input data. This method has been applied to two contrasting catchments: the Brue catchment (United Kingdom) and the Bagmati catchment (Nepal). The experimental results demonstrate that the machine learning methods are reasonably accurate in approximating the uncertainty estimated by GLUE. The great advantage of the proposed method is its efficiency in reproducing the MC-based simulation results; it can thus be an effective tool for assessing the uncertainty of flood forecasting in real time.
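The encapsulation idea can be sketched end-to-end on a toy model: Monte Carlo runs over behavioral parameter samples produce 5-95% bounds per input, and a cheap surrogate (here a polynomial fit standing in for the neural network or M5 tree) learns to map inputs directly to those bounds. The model, parameter distribution, and all numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, a):
    """Toy stand-in for a rainfall-runoff model with uncertain parameter a."""
    return a * x + 0.05 * x ** 2

# GLUE-style behavioral parameter samples
a_samples = rng.normal(2.0, 0.3, size=500)

x_train = np.linspace(0.0, 10.0, 50)
ens = np.array([model(x_train, a) for a in a_samples])   # (500, 50) MC runs
lo = np.percentile(ens, 5, axis=0)                       # lower bound per input
hi = np.percentile(ens, 95, axis=0)                      # upper bound per input

# Surrogate "machine learning" step: cheap polynomial fits to the two bounds
c_lo = np.polyfit(x_train, lo, 3)
c_hi = np.polyfit(x_train, hi, 3)

# New inputs: predict uncertainty bounds without re-running the MC ensemble
x_new = np.array([2.5, 7.5])
print("lower:", np.polyval(c_lo, x_new), "upper:", np.polyval(c_hi, x_new))
```

Once trained, evaluating the surrogate costs two polynomial evaluations instead of 500 model runs, which is the efficiency gain the paper exploits for real-time forecasting.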
NASA Technical Reports Server (NTRS)
Ratnayake, Nalin A.; Waggoner, Erin R.; Taylor, Brian R.
2011-01-01
The problem of parameter estimation on hybrid-wing-body aircraft is complicated by the fact that many design candidates for such aircraft involve a large number of aerodynamic control effectors that act in coplanar motion. This adds to the complexity already present in the parameter estimation problem for any aircraft with a closed-loop control system. Decorrelation of flight and simulation data must be performed in order to ascertain individual surface derivatives with any sort of mathematical confidence. Non-standard control surface configurations, such as clamshell surfaces and drag-rudder modes, further complicate the modeling task. In this paper, time-decorrelation techniques are applied to a model structure selected through stepwise regression for simulated and flight-generated lateral-directional parameter estimation data. A virtual effector model that uses mathematical abstractions to describe the multi-axis effects of clamshell surfaces is developed and applied. Comparisons are made between time history reconstructions and observed data in order to assess the accuracy of the regression model. The Cramér-Rao lower bounds of the estimated parameters are used to assess the uncertainty of the regression model relative to alternative models. Stepwise regression was found to be a useful technique for lateral-directional model design for hybrid-wing-body aircraft, as suggested by available flight data. Based on the results of this study, linear regression parameter estimation methods using abstracted effectors are expected to perform well for hybrid-wing-body aircraft properly equipped for the task.
Dynamic modelling and analysis of multi-machine power systems including wind farms
NASA Astrophysics Data System (ADS)
Tabesh, Ahmadreza
2005-11-01
This thesis introduces a small-signal dynamic model, based on a frequency response approach, for the analysis of a multi-machine power system, with special focus on an induction machine based wind farm. The proposed approach is an alternative to the conventional eigenvalue analysis method which is widely employed for small-signal dynamic analyses of power systems. The proposed modelling approach is successfully applied and evaluated for a power system that includes (i) multiple synchronous generators, and (ii) a wind farm based on either fixed-speed, variable-speed, or doubly-fed induction machine based wind energy conversion units. The salient features of the proposed method, as compared with the conventional eigenvalue analysis method, are: (i) computational efficiency, since the proposed method utilizes the open-loop transfer-function matrix of the system; (ii) performance indices that are obtainable from frequency response data and quantitatively describe the dynamic behavior of the system; and (iii) the capability to formulate various wind energy conversion units within a wind farm in a modular form. The developed small-signal dynamic model is applied to a set of multi-machine study systems and the results are validated by comparison (i) with digital time-domain simulation results obtained from the PSCAD/EMTDC software tool, and (ii) where applicable, with eigenvalue analysis results.
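A frequency-response index of the kind described can be sketched by sweeping the open-loop transfer matrix G(jω) = C(jωI − A)⁻¹B over a frequency grid and recording its peak gain (largest singular value). The 2x2 state-space system below is a made-up stand-in for a multi-machine network, not the thesis's model.

```python
import numpy as np

# Toy open-loop state-space (A, B, C): two weakly coupled "machines" (invented)
A = np.array([[-1.0, 0.5],
              [0.2, -2.0]])
B = np.eye(2)
C = np.eye(2)

omegas = np.logspace(-2, 2, 200)      # frequency grid, rad/s
peak = 0.0
for w in omegas:
    # G(jw) = C (jwI - A)^-1 B, evaluated directly from frequency response
    G = C @ np.linalg.solve(1j * w * np.eye(2) - A, B)
    peak = max(peak, np.linalg.svd(G, compute_uv=False)[0])

print(f"peak open-loop gain over the grid: {peak:.3f}")
```

Scalar indices like this peak gain are examples of the quantitative performance measures obtainable from frequency response data without computing eigenvalues of the full closed-loop system.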
Ghosts in the Machine. Interoceptive Modeling for Chronic Pain Treatment.
Di Lernia, Daniele; Serino, Silvia; Cipresso, Pietro; Riva, Giuseppe
2016-01-01
Pain is a complex and multidimensional perception, embodied in our daily experiences through interoceptive appraisal processes. The article reviews the recent literature about interoception along with predictive coding theories and tries to explain a missing link between the sense of the physiological condition of the entire body and the perception of pain in chronic conditions, which are characterized by interoceptive deficits. Understanding chronic pain from an interoceptive point of view allows us to better comprehend the multidimensional nature of this specific organic information, integrating the input of several sources from Gifford's Mature Organism Model to Melzack's neuromatrix. The article proposes the concept of residual interoceptive images (ghosts), to explain the diffuse multilevel nature of chronic pain perceptions. Lastly, we introduce a treatment concept, forged upon the possibility to modify the interoceptive chronic representation of pain through external input in a process that we call interoceptive modeling, with the ultimate goal of reducing pain in chronic subjects. PMID:27445681
Not Available
1993-05-01
The bibliography contains citations concerning mathematical modeling of existing water quality stresses in estuaries, harbors, bays, and coves. Both physical hydraulic and numerical models for estuarine circulation are discussed. (Contains a minimum of 96 citations and includes a subject term index and title list.)
ShrinkWrap: 3D model abstraction for remote sensing simulation
Pope, Paul A
2009-01-01
Remote sensing simulations often require the use of 3D models of objects of interest. There are a multitude of these models available from various commercial sources. There are image processing, computational, database storage, and data access advantages to having a regularized, encapsulating, triangular mesh representing the surface of a 3D object model. However, this is usually not how these models are stored. They can have too much detail in some areas, and not enough detail in others. They can have a mix of planar geometric primitives (triangles, quadrilaterals, n-sided polygons) representing not only the surface of the model, but also interior features. And the exterior mesh is usually neither regularized nor encapsulating. This paper presents a method called SHRINKWRAP which can be used to process 3D object models to achieve output models having the aforementioned desirable traits. The method works by collapsing an encapsulating sphere, which has a regularized triangular mesh on its surface, onto the surface of the model. A GUI has been developed to make it easy to leverage this capability. The SHRINKWRAP processing chain and use of the GUI are described and illustrated.
Not Available
1993-07-01
The bibliography contains citations concerning the use of mathematical and conceptual models in describing the hydraulic parameters of fluid flow in fractured rock. Topics include the use of tracers, solute and mass transport studies, and slug test analyses. The use of modeling techniques in injection well performance prediction is also discussed. (Contains 250 citations and includes a subject term index and title list.)
Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes
NASA Astrophysics Data System (ADS)
Umbrello, Domenico; Rizzuti, Stefania; Outeiro, José C.; Shivpuri, Rajiv
2007-04-01
In general, the flow stress models used in computer simulation of machining processes are a function of the effective strain, effective strain rate and temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where a range of material hardness between 45 and 60 HRC is used. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for the AISI H13 tool steel, which can be applied over the range of material hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining process for AISI H13 with various hardness values and applying different cutting regime parameters. Predicted results are validated by comparing them with experimental results found in the literature. They are found to predict reasonably well the cutting forces as well as the change in chip morphology from continuous to segmented chip as the material hardness changes.
Torija, Antonio J; Ruiz, Diego P; Ramos-Ridao, Angel F
2014-06-01
To ensure appropriate soundscape management in urban environments, the urban-planning authorities need a range of tools that enable such a task to be performed. An essential step in the management of urban areas from a sound standpoint should be the evaluation of the soundscape in such an area. It has been widely acknowledged that a subjective and acoustical categorization of a soundscape is the first step in evaluating it, providing a basis for designing or adapting it to match people's expectations as well. Accordingly, this work proposes a model for the automatic classification of urban soundscapes based on underlying acoustical and perceptual criteria. This classification model is thus proposed as a tool for comprehensive urban soundscape evaluation. Because of the great complexity associated with the problem, two machine learning techniques, Support Vector Machines (SVM) and Support Vector Machines trained with Sequential Minimal Optimization (SMO), are implemented to develop the classification model. The results indicate that the SMO model outperforms the SVM model in the specific task of soundscape classification. With the implementation of the SMO algorithm, the classification model achieves an outstanding performance (91.3% of instances correctly classified). PMID:24007752
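The paper builds its classifiers with SVM/SMO on acoustical and perceptual features; as a self-contained stand-in, the sketch below trains a linear SVM with the Pegasos stochastic sub-gradient method (not SMO) on invented two-dimensional "soundscape features", and reports training accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented 2-D features for two soundscape classes (e.g. quiet vs. traffic-dominated)
X0 = rng.normal([-2.0, -2.0], 1.0, size=(100, 2))
X1 = rng.normal([+2.0, +2.0], 1.0, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([-1] * 100 + [+1] * 100)

# Pegasos: stochastic sub-gradient descent on the regularized hinge loss
lam, T = 0.01, 5000
w = np.zeros(2)
for t in range(1, T + 1):
    i = rng.integers(len(y))
    eta = 1.0 / (lam * t)                  # decaying step size
    if y[i] * (X[i] @ w) < 1.0:            # margin violated: hinge gradient active
        w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
    else:
        w = (1.0 - eta * lam) * w          # only the regularizer acts

acc = np.mean(np.sign(X @ w) == y)
print(f"training accuracy: {acc:.3f}")
```

Pegasos optimizes the same hinge-loss objective as SMO but by first-order updates; for the well-separated toy clusters above, either solver would find an accurate separating hyperplane.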
NASA Astrophysics Data System (ADS)
Zhu, Limin; He, Gaiyun; Song, Zhanjie
2016-03-01
Product variation reduction is critical to improving process efficiency and product quality, especially for the multistage machining process (MMP). However, due to variation accumulation and propagation, it becomes quite difficult to predict and reduce product variation for MMP. While statistical process control can be used to control product quality, it mainly monitors process change rather than analyzing the cause of product variation. In this paper, based on a differential description of the contact kinematics of locators and part surfaces, and on the geometric constraint equation defined by the locating scheme, an improved analytical variation propagation model for MMP is presented, in which the influence of both locator position error and machining error on part quality is considered, whereas traditional models usually focus only on datum error and fixture error. Coordinate transformation theory is used to capture the generation and transmission of error in the model. The concept of a deviation matrix is applied throughout to establish an explicit mapping between the geometric deviation of the part and the process error sources. In each machining stage, the part deviation is formulated as three separate components corresponding to three different kinds of error sources, which can be further applied to fault identification and design optimization for complicated machining processes. An example part for MMP is given to validate the effectiveness of the methodology. The experimental results show that the model prediction and the actual measurement match well. This paper provides a method to predict part deviation under the influence of fixture error, datum error and machining error, and it enriches the means of quality prediction for MMP.
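The stage-to-stage accumulation described above is essentially a linear propagation x_k = A_k x_{k-1} + B_k u_k + w_k, with x the part deviation, u the fixture/locator error and w the machining-induced error. The sketch below uses invented 2-dimensional deviations and matrices purely to show how the three error sources compound across stages; the paper's actual deviation matrices are derived from the contact kinematics.

```python
import numpy as np

# Hypothetical 3-stage linear variation propagation: x_k = A_k x_{k-1} + B_k u_k + w_k
A = [np.array([[1.0, 0.1],
               [0.0, 1.0]])] * 3           # datum-induced reorientation (assumed)
B = [np.eye(2)] * 3                        # fixture error gain (assumed)
u = [np.array([0.02, -0.01])] * 3          # locator position errors per stage
w = [np.array([0.005, 0.005])] * 3         # machining-induced error per stage

x = np.zeros(2)                            # incoming part is nominal
for k in range(3):
    x = A[k] @ x + B[k] @ u[k] + w[k]
    print(f"after stage {k + 1}: deviation = {x}")
```

Because the map is linear, the final deviation decomposes exactly into the separate contributions of datum, fixture and machining errors, which is what makes fault identification by error source possible.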
Analytical modeling of a new disc permanent magnet linear synchronous machine for electric vehicles
Liu, C.T.; Chen, J.W.; Su, K.S.
1999-09-01
This paper develops an analytical approach based on a qd0 reference frame model to analyze dynamic and steady state characteristics of disc permanent magnet linear synchronous machines (DPMLSMs). The established compact mathematical model can be more easily employed to analyze the system behavior and to design the controller. Superiority in operational electromagnetic characteristics of the proposed DPMLSM for electric vehicle (EV) applications is verified by both numerical simulations and experimental investigations.
Product Model for Integrated Machining and Inspection Process Planning
NASA Astrophysics Data System (ADS)
Gutiérrez Rubert, S.; Bruscas Bellido, G. M.; Rosado Castellano, P.; Romero Subirón, F.
2009-11-01
In the product-process development closed loop, an integrated product and process plan model is essential for structuring and interchanging data and information. Many of the currently existing standards (STEP) provide an appropriate solution for the different stages of the closed loop using a clear feature-based approach. However, inspection planning is not undertaken in the same manner, and detailed inspection (measurement) planning is performed directly. In order to carry out inspection planning that is both integrated and at the same level as process planning, the Inspection Feature (InspF) is proposed here, which is directly related to product and process functionality. The proposal includes an InspF library that makes part interpretation possible from an inspection point of view, while also providing alternatives and not being restricted to the use of just one single type of measurement equipment.
Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications
Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard
2015-09-02
This paper presents a nonlinear analytical model of a novel double sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets (PM), stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective designs of TFM as compared to finite element solvers, which are numerically intensive and require more computation time. A single phase, 1 kW, 400 rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF and torque are verified with Finite Element Analysis (FEA). The results are found to be in agreement with less than 5% error, while reducing the computation time by 25 times.
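A magnetic equivalent circuit of the kind described reduces to series/parallel combinations of flux-tube reluctances driven by the magnet MMF. The one-loop network below, with invented geometry and MMF values, shows the mechanics; the real TFM model uses a much larger series-parallel network with saturation-dependent iron reluctances.

```python
import numpy as np

mu0 = 4e-7 * np.pi

def reluctance(length, area, mu_r=1.0):
    """Reluctance of a uniform flux tube: R = l / (mu0 * mu_r * A)."""
    return length / (mu0 * mu_r * area)

def series(*Rs):
    return sum(Rs)

def parallel(*Rs):
    return 1.0 / sum(1.0 / R for R in Rs)

# Hypothetical one-loop MEC: PM drives flux through the iron core, two air
# gaps in series, and a leakage tube in parallel with the gap branch.
F_pm = 800.0                                   # magnet MMF, A-turns (assumed)
R_iron = reluctance(0.10, 4e-4, mu_r=4000.0)   # stator + rotor core path (assumed)
R_gap = series(reluctance(0.001, 4e-4),        # two 1 mm air gaps (assumed)
               reluctance(0.001, 4e-4))
R_leak = reluctance(0.02, 1e-4)                # leakage tube (assumed geometry)

R_total = series(R_iron, parallel(R_gap, R_leak))
phi = F_pm / R_total                           # total flux from the magnet
phi_gap = phi * R_leak / (R_gap + R_leak)      # flux-divider rule for the gap branch
print(f"air-gap flux: {phi_gap * 1e3:.3f} mWb")
```

Solving such a network is a handful of arithmetic operations, which is why an MEC sweep over candidate geometries is so much cheaper than a finite element run per design.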
Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard
2015-08-24
This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry that makes it a good alternative for evaluating prospective designs of TFM compared to finite element solvers that are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.
NASA Astrophysics Data System (ADS)
Zabbah, Iman
2011-12-01
Electro-discharge machining (EDM) is the most common non-traditional production method for forming metals and non-oxide ceramics. Increasing surface smoothness, increasing the material removal rate, and reducing relative tool erosion play important roles in this machining, and all depend directly on the choice of input parameters. The complicated, non-linear nature of EDM makes the process impossible to model with the usual classical methods. So far, several intelligence-based methods have been used to optimize this process; foremost among them are artificial neural networks, which model the process as a black box. This kind of machining becomes problematic when the workpiece is a composite of carbon-based materials such as silicon carbide. In this article, besides using a new mono-pulse EDM technique, we design and model a fuzzy neural network; a genetic algorithm is then used to find the optimal machine inputs. In our research, the workpiece is a non-oxide ceramic, silicon carbide, which makes the control process more difficult. Finally, the results are compared with those of previous methods.
A. Hassan; H. Bekhit; Y. Zhang; J. Chapman
2008-09-15
Uncertainty built into conceptual groundwater flow and transport models and associated parametric uncertainty should be appropriately included when such models are used to develop detection monitoring networks for contaminated sites. We compare alternative approaches for propagating such uncertainty from the flow and transport model into the network design. The focus is on detection monitoring networks where the primary objective is to intercept the contaminant before it reaches a boundary of interest (e.g., compliance boundary). Different uncertainty propagation approaches identify different well locations and different well combinations (networks) as having the highest detection efficiency. It is thus recommended that multiple uncertainty propagation approaches be considered. If several approaches yield consistent results in terms of identifying the best-performing candidate wells and the best-performing well network for detecting a contaminant plume, this would provide confidence in the suitability of the selected well locations.
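The core comparison described — ranking candidate wells and well networks by detection efficiency under flow-model uncertainty — can be sketched with a Monte Carlo toy problem. All geometry, well spacing, and the detection radius below are invented for illustration; they stand in for plume realizations from an uncertain flow and transport model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Plume centerlines as biased random walks standing in for transport-model
# realizations; candidate wells on a grid upstream of a compliance boundary.
n_real, steps = 500, 50
paths = np.cumsum(rng.normal([1.0, 0.0], [0.2, 0.5], (n_real, steps, 2)), axis=1)

wells = np.array([[x, y] for x in range(5, 45, 10) for y in (-5, 0, 5)], float)
radius = 1.5  # assumed detection radius of a well

# A well "detects" a realization if the plume path ever comes within `radius`.
d = np.linalg.norm(paths[:, :, None, :] - wells[None, None, :, :], axis=-1)
detected = d.min(axis=1) <= radius              # shape (n_real, n_wells)

efficiency = detected.mean(axis=0)              # per-well detection efficiency
best_well = int(efficiency.argmax())

# Greedy 3-well network: repeatedly add the well that covers the most
# plume realizations missed by the wells chosen so far.
chosen, covered = [], np.zeros(n_real, bool)
for _ in range(3):
    gains = (detected & ~covered[:, None]).sum(axis=0)
    gains[chosen] = -1                          # don't pick a well twice
    w = int(gains.argmax())
    chosen.append(w)
    covered |= detected[:, w]

network_efficiency = covered.mean()
```

Running the same ranking under several uncertainty propagation schemes (e.g., different parameter sampling strategies) and checking whether `best_well` and `chosen` agree is the kind of cross-validation the abstract recommends.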
Using Machine Learning to Create Turbine Performance Models (Presentation)
Clifton, A.
2013-04-01
Wind turbine power output is known to be a strong function of wind speed, but it is also affected by turbulence and shear. In this work, new aerostructural simulations of a generic 1.5 MW turbine are used to explore atmospheric influences on power output. Most significant is the hub-height wind speed, followed by hub-height turbulence intensity and then wind speed shear across the rotor disk. These simulation data are used to train regression trees that predict the turbine response for any combination of wind speed, turbulence intensity, and wind shear that might be expected at a turbine site. For a randomly selected atmospheric condition, the accuracy of the regression tree power predictions is three times higher than that of the traditional power curve methodology. The regression tree method can also be applied to turbine test data and used to predict turbine performance at a new site. No data are required beyond those usually collected for a wind resource assessment. Implementing the method requires turbine manufacturers to create a turbine regression tree model from test site data. Such an approach could significantly reduce the bias in power predictions that arises because of differences in turbulence and shear between the new site and the test site.
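The core idea — modeling power over (wind speed, turbulence intensity, shear) jointly, rather than a univariate power curve — can be sketched with a binned lookup table, which behaves like a regression tree with fixed axis-aligned splits. The turbine response surface below is synthetic, not from aeroelastic simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "turbine response" data: power depends mainly on wind speed,
# then turbulence intensity (ti) and shear exponent.
n = 5000
ws = rng.uniform(3, 15, n)        # hub-height wind speed [m/s]
ti = rng.uniform(0.05, 0.25, n)   # turbulence intensity [-]
shear = rng.uniform(0.0, 0.4, n)  # shear exponent [-]
power = np.clip(0.6 * ws**3 * (1 - 0.8 * ti) * (1 - 0.3 * shear), 0, 1500)
power += rng.normal(0, 10, n)     # simulation "noise" [kW]

bins_ws = np.linspace(3, 15, 13)
bins_ti = np.linspace(0.05, 0.25, 5)
bins_sh = np.linspace(0.0, 0.4, 5)

def cell(w, t, s):
    """Map an atmospheric condition to its bin indices."""
    i = np.clip(np.digitize(w, bins_ws) - 1, 0, len(bins_ws) - 2)
    j = np.clip(np.digitize(t, bins_ti) - 1, 0, len(bins_ti) - 2)
    k = np.clip(np.digitize(s, bins_sh) - 1, 0, len(bins_sh) - 2)
    return i, j, k

# Per-cell mean power: a piecewise-constant response model.
shape = (len(bins_ws) - 1, len(bins_ti) - 1, len(bins_sh) - 1)
table, count = np.zeros(shape), np.zeros(shape)
i, j, k = cell(ws, ti, shear)
np.add.at(table, (i, j, k), power)
np.add.at(count, (i, j, k), 1)
table /= np.maximum(count, 1)

# Predict power for a new atmospheric condition at a prospective site.
pred = table[cell(10.0, 0.12, 0.2)]
```

A trained regression tree would choose its split points from the data rather than using fixed bins, but the prediction mechanism — look up the mean response of matching conditions — is the same.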
A Simple Computational Model of a jellyfish-like flying machine
NASA Astrophysics Data System (ADS)
Fang, Fang; Ristroph, Leif; Shelley, Michael
2013-11-01
We explore theoretically the aerodynamics of a jellyfish-like flying machine recently fabricated at NYU. This experimental device achieves flight and hovering by opening and closing a set of flapping wings. It displays orientational flight stability without additional control surfaces or feedback control. Our model machine consists of two symmetric massless flapping wings connected to a body with mass and moment of inertia. A vortex sheet shedding and wake model is used for the flow simulation. Use of the Fast Multipole Method (FMM), and adaptive addition/deletion of vortices, allows us to simulate for long times and resolve complex wakes. We use our model to explore the physical parameters that maintain body hovering, its ascent and descent, and investigate the stability of these states.
Sboner, Andrea; Aliferis, Constantin F
2005-01-01
We explore several machine learning techniques to model clinical decision making of 6 dermatologists in the clinical task of melanoma diagnosis of 177 pigmented skin lesions (76 malignant, 101 benign). In particular we apply Support Vector Machine (SVM) classifiers to model clinician judgments, Markov Blanket and SVM feature selection to eliminate clinical features that are effectively ignored by the dermatologists, and a novel explanation technique whereby regression tree induction is run on the reduced SVM model's output to explain the physicians' implicit patterns of decision making. Our main findings include: (a) clinician judgments can be accurately predicted, (b) subtle decision making rules are revealed enabling the explanation of differences of opinion among physicians, and (c) physician judgment is non-compliant with the diagnostic guidelines that physicians self-report as guiding their decision making. PMID:16779123
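The model-then-explain pipeline — fit an SVM to clinician judgments, then run tree induction on the SVM's outputs to expose the implicit decision rules — can be sketched with scikit-learn on synthetic stand-in data. The feature count and the judgment-generating rule below are invented; only the pipeline shape follows the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

# Synthetic stand-in: 177 lesions with 8 clinical features, and a clinician's
# binary judgment driven (noisily) by two of the features.
X = rng.normal(size=(177, 8))
judgment = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.3, 177) > 0).astype(int)

# Step 1: model the clinician's judgments with an SVM classifier.
svm = SVC(kernel="rbf").fit(X, judgment)
fit_acc = (svm.predict(X) == judgment).mean()

# Step 2: "explain" the fitted SVM by inducing a shallow decision tree on its
# outputs; the tree's splits approximate the clinician's implicit rules.
tree = DecisionTreeClassifier(max_depth=3).fit(X, svm.predict(X))
agreement = (tree.predict(X) == svm.predict(X)).mean()
```

In the study, a feature-selection step (Markov Blanket / SVM-based) precedes the explanation step, so the tree is induced on the reduced feature set; that step is omitted here for brevity.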
ERIC Educational Resources Information Center
North Carolina State Univ., Raleigh. Academy for Community Coll. Leadership Advancement, Innovation, and Modeling.
The Academy for Community College Leadership, Innovation, and Modeling (ACCLAIM) is a 3-year pilot project funded by the W. K. Kellogg Foundation, North Carolina State University (NCSU), and the community college systems of Maryland, Virginia, South Carolina, and North Carolina. ACCLAIM's purpose is to help the region's community colleges assume a…
Law machines: scale models, forensic materiality and the making of modern patent law.
Pottage, Alain
2011-10-01
Early US patent law was machine made. Before the Patent Office took on the function of examining patent applications in 1836, questions of novelty and priority were determined in court, within the forum of the infringement action. And at all levels of litigation, from the circuit courts up to the Supreme Court, working models were the media through which doctrine, evidence and argument were made legible, communicated and interpreted. A model could be set on a table, pointed at, picked up, rotated or upended so as to display a point of interest to a particular audience within the courtroom, and, crucially, set in motion to reveal the 'mode of operation' of a machine. The immediate object of demonstration was to distinguish the intangible invention from its tangible embodiment, but models also 'machined' patent law itself. Demonstrations of patent claims with models articulated and resolved a set of conceptual tensions that still make the definition and apprehension of the invention difficult, even today, but they resolved these tensions in the register of materiality, performativity and visibility, rather than the register of conceptuality. The story of models tells us something about how inventions emerge and subsist within the context of patent litigation and patent doctrine, and it offers a starting point for renewed reflection on the question of how technology becomes property. PMID:22164718
Gortais, Bernard
2003-01-01
In a given social context, artistic creation comprises a set of processes, which relate to the activity of the artist and the activity of the spectator. Through these processes we see and understand that the world is vaster than it is said to be. Artistic processes are mediated experiences that open up the world. A successful work of art expresses a reality beyond actual reality: it suggests an unknown world using the means and the signs of the known world. Artistic practices incorporate the means of creation developed by science and technology and change forms as they change. Artists and the public follow different processes of abstraction at different levels, in the definition of the means of creation, of representation and of perception of a work of art. This paper examines how the processes of abstraction are used within the framework of the visual arts and abstract painting, which appeared during a period of growing importance for the processes of abstraction in science and technology, at the beginning of the twentieth century. The development of digital platforms and new man-machine interfaces allow multimedia creations. This is performed under the constraint of phases of multidisciplinary conceptualization using generic representation languages, which tend to abolish traditional frontiers between the arts: visual arts, drama, dance and music. PMID:12903659
NASA Astrophysics Data System (ADS)
Jaenisch, Holger M.; Handley, James W.; Hicklen, Michael L.
2007-04-01
This paper describes a novel capability for modeling known idea propagation transformations and predicting responses to new ideas from geopolitical groups. Ideas are captured using semantic words that are text-based and bear cognitive definitions. We demonstrate a unique algorithm for converting these into analytical predictive equations. Using the illustrative idea of "proposing a gasoline price increase of $1 per gallon from $2" and its changing perceived impact across 5 demographic groups, we identify 13 cost-of-living Diplomatic, Information, Military, and Economic (DIME) features common across all 5 demographic groups. This enables the modeling and monitoring of Political, Military, Economic, Social, Information, and Infrastructure (PMESII) effects for each group in response to this idea, and of how their "perception" of this proposal changes. Our algorithm and results are summarized in this paper.
Not Available
1994-01-01
The bibliography contains citations concerning the nature and occurrence of groundwater in fractured crystalline and sedimentary rocks. Techniques for determining connectivity and hydraulic conductivity, pollutant distribution in fractures, and site studies in specific geologic environments are among the topics discussed. Citations pertaining to modeling studies of fractured rock hydrogeology are addressed in a separate bibliography. (Contains a minimum of 62 citations and includes a subject term index and title list.)
Not Available
1992-11-01
The bibliography contains citations concerning the nature and occurrence of groundwater in fractured crystalline and sedimentary rocks. Techniques for determining connectivity and hydraulic conductivity, pollutant distribution in fractures, and site studies in specific geologic environments are among the topics discussed. Citations pertaining to modeling studies of fractured rock hydrogeology are addressed in a separate bibliography. (Contains a minimum of 54 citations and includes a subject term index and title list.)
A paradigm for data-driven predictive modeling using field inversion and machine learning
NASA Astrophysics Data System (ADS)
Parish, Eric J.; Duraisamy, Karthik
2016-01-01
We propose a modeling paradigm, termed field inversion and machine learning (FIML), that seeks to comprehensively harness data from sources such as high-fidelity simulations and experiments to aid the creation of improved closure models for computational physics applications. In contrast to inferring model parameters, this work uses inverse modeling to obtain corrective, spatially distributed functional terms, offering a route to directly address model-form errors. Once the inference has been performed over a number of problems that are representative of the deficient physics in the closure model, machine learning techniques are used to reconstruct the model corrections in terms of variables that appear in the closure model. These reconstructed functional forms are then used to augment the closure model in a predictive computational setting. As a first demonstrative example, a scalar ordinary differential equation is considered, wherein the model equation has missing and deficient terms. Following this, the methodology is extended to the prediction of turbulent channel flow. In both of these applications, the approach is demonstrated to be able to successfully reconstruct functional corrections and yield accurate predictive solutions while providing a measure of model form uncertainties.
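The FIML loop on a scalar ODE can be sketched as follows. The particular equation and its "missing" quadratic term are invented for illustration, and a quadratic polynomial fit stands in for the paper's machine learning step; only the three-stage structure (inversion for a corrective field, learning the correction as a function of model variables, prediction with the augmented model) follows the abstract.

```python
import numpy as np

# truth:  du/dt = -u + 0.3*u**2   (the 0.3*u**2 term is the "missing physics")
# model:  du/dt = -u + delta(u),  with delta inferred from data, then learned.
dt, n = 0.01, 500

def integrate(u0, rhs):
    """Forward-Euler integration of du/dt = rhs(u)."""
    u = np.empty(n)
    u[0] = u0
    for i in range(n - 1):
        u[i + 1] = u[i] + dt * rhs(u[i])
    return u

u_data = integrate(2.0, lambda u: -u + 0.3 * u**2)   # "high-fidelity" data

# Stage 1, field inversion: the pointwise corrective term the baseline
# model du/dt = -u is missing, recovered from the data trajectory.
dudt = np.gradient(u_data, dt)
delta_field = dudt - (-u_data)

# Stage 2, machine learning: reconstruct the correction as a function of the
# model variable u (a quadratic fit here; the paper uses general learners).
coef = np.polyfit(u_data, delta_field, 2)

def delta(u):
    return np.polyval(coef, u)

# Stage 3, prediction: augmented model from a new initial condition.
u_pred = integrate(1.5, lambda u: -u + delta(u))
u_true = integrate(1.5, lambda u: -u + 0.3 * u**2)
err = np.abs(u_pred - u_true).max()
```

The key point the sketch preserves is that the inversion produces a spatially (here, temporally) distributed correction rather than a tuned scalar parameter, and the learned functional form transfers to a case not used in the inference.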
Machine Learning Techniques for Combining Multi-Model Climate Projections (Invited)
NASA Astrophysics Data System (ADS)
Monteleoni, C.
2013-12-01
The threat of climate change is one of the greatest challenges currently facing society. Given the profound impact machine learning has made on the natural sciences to which it has been applied, such as the field of bioinformatics, machine learning is poised to accelerate discovery in climate science. Recent advances in the fledgling field of climate informatics have demonstrated the promise of machine learning techniques for problems in climate science. A key problem in climate science is how to combine the projections of the multi-model ensemble of global climate models that inform the Intergovernmental Panel on Climate Change (IPCC). I will present three approaches to this problem. Our Tracking Climate Models (TCM) work demonstrated the promise of an algorithm for online learning with expert advice, for this task. Given temperature projections and hindcasts from 20 IPCC global climate models, and over 100 years of historical temperature data, TCM generated predictions that tracked the changing sequence of which model currently predicts best. On historical data, at both annual and monthly time-scales, and in future simulations, TCM consistently outperformed the average over climate models, the existing benchmark in climate science, at both global and continental scales. We then extended TCM to take into account climate model projections at higher spatial resolutions, and to model geospatial neighborhood influence between regions. Our second algorithm enables neighborhood influence by modifying the transition dynamics of the Hidden Markov Model from which TCM is derived, allowing the performance of spatial neighbors to influence the temporal switching probabilities for the best climate model at a given location. We recently applied a third technique, sparse matrix completion, in which we create a sparse (incomplete) matrix from climate model projections/hindcasts and observed temperature data, and apply a matrix completion algorithm to recover it, yielding
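The expert-advice idea behind Tracking Climate Models can be sketched with a fixed-share multiplicative-weights learner. This is a standard stand-in — TCM itself is derived from a Hidden Markov Model formulation — and all series below are synthetic: each "climate model" issues a prediction per time step, and the weights track whichever model currently predicts best.

```python
import numpy as np

rng = np.random.default_rng(3)

T = 200
truth = np.cumsum(rng.normal(0.02, 0.1, T))    # synthetic observed series
# Three "models" with different noise levels; the best one switches mid-way.
experts = truth[None, :] + rng.normal(0, [[0.1], [0.5], [0.3]], (3, T))
experts[0, T // 2:] += 1.0                     # model 0 develops a bias

eta, alpha = 2.0, 0.05                         # learning rate, share rate
w = np.ones(3) / 3
pred = np.empty(T)
for t in range(T):
    pred[t] = w @ experts[:, t]                # weighted combination
    loss = (experts[:, t] - truth[t]) ** 2
    w *= np.exp(-eta * loss)                   # multiplicative update
    w /= w.sum()
    w = (1 - alpha) * w + alpha / 3            # fixed share: allow switching

mse_alg = np.mean((pred - truth) ** 2)
mse_avg = np.mean((experts.mean(axis=0) - truth) ** 2)
```

The fixed-share step keeps a small floor under every model's weight, so the learner can re-track a model that becomes best later — the property the abstract describes as tracking "the changing sequence of which model currently predicts best". On this synthetic example the tracker beats the plain multi-model average, the benchmark mentioned in the abstract.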
Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.
Falk, Carl F; Biesanz, Jeremy C
2011-11-30
Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; and (d) 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06 GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates. PMID:26736127
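The percentile (PC) bootstrap for an indirect effect can be sketched on observed variables. This is a deliberate simplification of the study's setting (which uses latent variables and compares several more methods); here the outcome depends on the predictor only through the mediator, so two simple regressions suffice for the a and b paths.

```python
import numpy as np

rng = np.random.default_rng(4)

# One of the study's simulated conditions: alpha = beta = 0.39, n = 500.
n = 500
x = rng.normal(size=n)
m = 0.39 * x + rng.normal(size=n)      # mediator: alpha path
y = 0.39 * m + rng.normal(size=n)      # outcome: beta path (via m only)

def indirect(idx):
    """Indirect effect a*b estimated on a resampled index set."""
    a = np.polyfit(x[idx], m[idx], 1)[0]   # slope of m ~ x
    b = np.polyfit(m[idx], y[idx], 1)[0]   # slope of y ~ m
    return a * b

# Percentile bootstrap: resample cases, re-estimate a*b, take percentiles.
boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
```

The 2,000 resamples match the study's bootstrap setting; the bias-corrected variants it compares adjust these percentile cut points rather than the resampling itself.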
Turner, D.R.; Pabalan, R.T.
1999-01-01
Sorption onto minerals in the geologic setting may help to mitigate potential radionuclide transport from the proposed high-level radioactive waste repository at Yucca Mountain (YM), Nevada. An approach is developed for including aspects of more mechanistic sorption models into current probabilistic performance assessment (PA) calculations. Data on water chemistry from the vicinity of YM are screened and used to calculate the ranges in parameters that could exert control on radionuclide sorption behavior. Using a diffuse-layer surface complexation model, sorption parameters for Np(V) and U(VI) are calculated based on the chemistry of each water sample. Model results suggest that log normal probability distribution functions (PDFs) of sorption parameters are appropriate for most of the samples, but the calculated range is almost five orders of magnitude for Np(V) sorption and nine orders of magnitude for U(VI) sorption. Calculated sorption parameters may also vary at a single sample location by almost a factor of 10 over time periods of the order of days to years due to changes in chemistry, although sampling and analytical methodologies may introduce artifacts that add uncertainty to the evaluation of these fluctuations. Finally, correlation coefficients between the calculated Np(V) and U(VI) sorption parameters can be included as input into PA sampling routines, so that the value selected for one radionuclide sorption parameter is conditioned by its statistical relationship to the others. The approaches outlined here can be adapted readily to current PA efforts, using site-specific information to provide geochemical constraints on PDFs for radionuclide transport parameters.
NASA Astrophysics Data System (ADS)
Proykova, Ana
2009-04-01
Essential contributions have been made in the field of finite-size systems of ingredients interacting with potentials of various ranges. Theoretical simulations have revealed peculiar size effects on stability, ground state structure, phases, and phase transformation of systems confined in space and time. Models developed in the field of pure physics (atomic and molecular clusters) have been extended and successfully transferred to finite-size systems that seem very different—small-scale financial markets, autoimmune reactions, and social group reactions to advertisements. The models show that small-scale markets diverge unexpectedly fast as a result of small fluctuations; autoimmune reactions are sequences of two discontinuous phase transitions; and social groups possess critical behavior (social percolation) under the influence of an external field (advertisement). Some predicted size-dependent properties have been experimentally observed. These findings lead to the hypothesis that restrictions on an object's size determine the object's total internal (configuration) and external (environmental) interactions. Since phases are emergent phenomena produced by self-organization of a large number of particles, the occurrence of a phase in a system containing a small number of ingredients is remarkable.
A study of sound transmission in an abstract middle ear using physical and finite element models.
Gonzalez-Herrera, Antonio; Olson, Elizabeth S
2015-11-01
The classical picture of middle ear (ME) transmission has the tympanic membrane (TM) as a piston and the ME cavity as a vacuum. In reality, the TM moves in a complex multiphasic pattern and substantial pressure is radiated into the ME cavity by the motion of the TM. This study explores ME transmission with a simple model, using a tube terminated with a plastic membrane. Membrane motion was measured with a laser interferometer and pressure on both sides of the membrane with micro-sensors that could be positioned close to the membrane without disturbance. A finite element model of the system explored the experimental results. Both experimental and theoretical results show resonances that are in some cases primarily acoustical or mechanical and sometimes produced by coupled acousto-mechanics. The largest membrane motions were a result of the membrane's mechanical resonances. At these resonant frequencies, sound transmission through the system was larger with the membrane in place than it was when the membrane was absent. PMID:26627771
Yakubova, Gulnoza; Hughes, Elizabeth M; Shinaberry, Megan
2016-07-01
The purpose of this study was to determine the effectiveness of a video modeling intervention with concrete-representational-abstract instructional sequence in teaching mathematics concepts to students with autism spectrum disorder (ASD). A multiple baseline across skills design of single-case experimental methodology was used to determine the effectiveness of the intervention on the acquisition and maintenance of addition, subtraction, and number comparison skills for four elementary school students with ASD. Findings supported the effectiveness of the intervention in improving skill acquisition and maintenance at a 3-week follow-up. Implications for practice and future research are discussed. PMID:26983919
Kinematic modeling and verification of an articulated arm coordinate measuring machine
NASA Astrophysics Data System (ADS)
Zhang, Huaishan; Gao, Guanbin; Wang, Wen; Na, Jing; Wu, Xing
2016-01-01
The articulated arm coordinate measuring machine (AACMM) is a new type of non-orthogonal coordinate measuring machine (CMM). Unlike the traditional orthogonal CMM, which has three linear guides, the AACMM is composed of a series of linkages connected by rotating joints. First, the coordinate systems of the AACMM are established according to the D-H method, and the homogeneous transformation matrices from the probe to the base of the AACMM are derived. A graphic simulation system for the AACMM is built in Matlab, which qualitatively verifies the magnitude and direction of the joint angles. Then, the data acquisition software of the AACMM is written in Visual C++, and a statistical analysis of the calculated and actual measured coordinates indicates that the kinematic model of the AACMM is correct. The kinematic model provides a basis for measurement, calibration, and error compensation of the AACMM.
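The D-H forward kinematics described above — chaining one homogeneous transform per rotating joint from base to probe — can be sketched as follows. The six-joint layout and all link parameters below are illustrative placeholders, not the calibrated values of a real AACMM.

```python
import numpy as np

def dh(theta, d, a, alpha):
    """Homogeneous transform for one joint from Denavit-Hartenberg parameters
    (joint angle theta, link offset d, link length a, link twist alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.,       sa,       ca,      d],
                     [0.,       0.,       0.,     1.]])

# Illustrative 6-joint arm: (d, a, alpha) per joint; theta comes from encoders.
dh_params = [
    (0.20, 0.00,  np.pi / 2),
    (0.00, 0.45,  0.0),
    (0.00, 0.00,  np.pi / 2),
    (0.40, 0.00, -np.pi / 2),
    (0.00, 0.00,  np.pi / 2),
    (0.10, 0.00,  0.0),
]

def probe_position(thetas):
    """Probe tip coordinates in the base frame for given joint angles."""
    T = np.eye(4)
    for th, (d, a, al) in zip(thetas, dh_params):
        T = T @ dh(th, d, a, al)
    return T[:3, 3]

p = probe_position([0.1, -0.4, 0.8, 0.0, 0.3, 0.0])
```

Calibration of an AACMM then amounts to adjusting the `dh_params` entries (and small angular offsets) so that `probe_position` matches reference artifacts, which is the error-compensation use the abstract mentions.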
Curran, H J; Pitz, W J; Westbrook, C K; Griffiths, J F; Mohamed, C
2000-11-01
A computer model is used to examine oxidation of hydrocarbon fuels in a rapid compression machine. For one of the fuels studied, n-heptane, significant fuel consumption is computed to take place during the compression stroke under some operating conditions, while for the less reactive n-pentane, no appreciable fuel consumption occurs until after the end of compression. The third fuel studied, a 60 PRF mixture of iso-octane and n-heptane, exhibits behavior that is intermediate between that of n-heptane and n-pentane. The model results indicate that computational studies of rapid compression machine ignition must consider fuel reaction during compression in order to achieve satisfactory agreement between computed and experimental results.
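The competing effects the abstract highlights — compression heating versus fuel consumption during the stroke — can be sketched with a toy single-step mechanism. All rate parameters below are invented, and the study itself uses detailed chemical kinetic mechanisms; the sketch only shows why a reactive fuel can be partly consumed before the end of compression.

```python
import numpy as np

# Assumed single-step Arrhenius parameters and stroke settings (illustrative).
A, Ea, R = 1e9, 1.2e5, 8.314      # pre-exponential [1/s], activation [J/mol]
gamma = 1.35                      # ratio of specific heats
n_steps, t_comp = 2000, 0.03      # 30 ms compression stroke
dt = t_comp / n_steps

T0, CR = 350.0, 12.0              # initial temperature [K], compression ratio
Y = 1.0                           # normalized fuel mass fraction
for i in range(n_steps):
    v_ratio = 1 + (CR - 1) * (i + 1) / n_steps   # compression progress
    T = T0 * v_ratio ** (gamma - 1)              # adiabatic core temperature
    Y -= dt * A * np.exp(-Ea / (R * T)) * Y      # first-order fuel consumption
    Y = max(Y, 0.0)

fuel_consumed = 1.0 - Y
```

With a lower activation energy (a more reactive fuel, like n-heptane in the study), `fuel_consumed` at the end of the stroke grows substantially, which is why ignoring compression-stroke chemistry biases computed ignition delays.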
Tang, Y.; Kline, J.A. Sr.
1996-12-01
Nonlinear boundary element analysis provides a more accurate and detailed tool for the design of switched reluctance machines than conventional equivalent-circuit methods. Design optimization through more detailed analysis and simulation can reduce development and prototyping costs and time to market. Firstly, magnetic field modeling of an industrial switched reluctance machine by the boundary element method is reported in this paper. Secondly, performance prediction and dynamic simulation of the motor and control design are presented. Thirdly, the magnetic forces that cause noise and vibration are studied, so that the effects of motor and control design variations on noise can be included in the design process. Testing of the motor in NEMA 215-frame size is carried out to verify the accuracy of the modeling and simulation.
ERGONOMICS ABSTRACTS 48347-48982.
ERIC Educational Resources Information Center
Ministry of Technology, London (England). Warren Spring Lab.
IN THIS COLLECTION OF ERGONOMICS ABSTRACTS AND ANNOTATIONS THE FOLLOWING AREAS OF CONCERN ARE REPRESENTED--GENERAL REFERENCES, METHODS, FACILITIES, AND EQUIPMENT RELATING TO ERGONOMICS, SYSTEMS OF MAN AND MACHINES, VISUAL, AUDITORY, AND OTHER SENSORY INPUTS AND PROCESSES (INCLUDING SPEECH AND INTELLIGIBILITY), INPUT CHANNELS, BODY MEASUREMENTS,…
A mathematical model of the controlled axial flow divider for mobile machines
NASA Astrophysics Data System (ADS)
Mulyukin, V. L.; Karelin, D. L.; Belousov, A. M.
2016-06-01
The authors present a mathematical model of the controlled axial flow divider that allows one to define the parameters of the feed pump and the hydraulic motor-wheels in the multi-circuit hydrostatic transmission of mobile machines. As an example, characteristics are also plotted that make it possible to clearly evaluate the mutual influence of the pressure and flow values across all input and output circuits of the system.
NASA Astrophysics Data System (ADS)
Lv, Jie; Yan, Zhenguo; Wei, Jingyi
2014-11-01
Accurate retrieval of crop chlorophyll content is of great importance for crop growth monitoring, crop stress assessment, and crop yield estimation. This study focused on retrieval of rice chlorophyll content through radiative transfer model inversion. A field campaign was carried out in September 2009 in farmland near ChangChun, Jilin province, China. A different set of 10 sites of the same species was used in 2009 for validation of the methodologies. Reflectance of rice was collected using an ASD field spectrometer over the solar reflective wavelengths (350-2500 nm), and chlorophyll content was measured with a SPAD-502 chlorophyll meter. Each sample site was recorded with a Global Positioning System (GPS). First, the PROSPECT radiative transfer model was inverted using a support vector machine to link rice spectra to the corresponding chlorophyll content. Second, genetic algorithms were adopted to select the parameters of the support vector machine, which was then trained on the training data set to establish a leaf chlorophyll content estimation model. Third, a validation data set was established from the hyperspectral data, and the estimation model was applied to it to estimate the leaf chlorophyll content of rice in the research area. Finally, the outcome of the inversion was evaluated using R2 and RMSE values calculated against the field measurements. The results highlight the significance of support vector machines in estimating leaf chlorophyll content of rice. Future research will concentrate on the definition of satellite images and the selection of the best measurement configuration for accurate estimation of rice characteristics.
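As a sketch of the retrieval step, a support vector regression with a hyperparameter search can stand in for the paper's GA-tuned support vector machine. The band-chlorophyll relationships below are synthetic (loosely mimicking red and red-edge absorption), not PROSPECT output, and grid search replaces the genetic algorithm.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)

# Stand-in for radiative-transfer training pairs: three band reflectances
# and the corresponding leaf chlorophyll content (synthetic, illustrative).
n = 300
chl = rng.uniform(20, 70, n)                             # chlorophyll [ug/cm^2]
bands = np.column_stack([
    0.50 * np.exp(-chl / 30) + rng.normal(0, 0.01, n),   # red: absorbs with chl
    0.45 + rng.normal(0, 0.01, n),                       # NIR: insensitive
    0.30 * np.exp(-chl / 50) + rng.normal(0, 0.01, n),   # red edge
])

# Hyperparameter search stands in for the paper's genetic algorithm.
grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVR(kernel="rbf")),
    {"svr__C": [1, 10, 100], "svr__gamma": ["scale", 1.0]},
    cv=5,
)
grid.fit(bands, chl)
r2 = grid.score(bands, chl)   # fit quality of the retrieval model
```

In the paper's workflow, the trained model is then applied to a held-out validation set and assessed with R2 and RMSE against SPAD measurements; the same `grid.score` / residual computation applies there.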
Quantitative chemogenomics: machine-learning models of protein-ligand interaction.
Andersson, Claes R; Gustafsson, Mats G; Strömbergsson, Helena
2011-01-01
Chemogenomics is an emerging interdisciplinary field that lies at the interface of biology, chemistry, and informatics. Most of the currently used drugs are small molecules that interact with proteins. Understanding protein-ligand interaction is therefore central to drug discovery and design. In the subfield of chemogenomics known as proteochemometrics, protein-ligand-interaction models are induced from data matrices that consist of both protein and ligand information along with some experimentally measured variable. The two general aims of this quantitative multi-structure-property-relationship modeling (QMSPR) approach are to exploit sparse/incomplete information sources and to obtain more general models covering larger parts of the protein-ligand space than traditional approaches that focus mainly on specific targets or ligands. The data matrices, usually obtained from multiple sparse/incomplete sources, typically contain series of proteins and ligands together with quantitative information about their interactions. A useful model should ideally be easy to interpret and generalize well to new unseen protein-ligand combinations. Resolving this requires sophisticated machine-learning methods for model induction, combined with adequate validation. This review is intended to provide a guide to methods and data sources suitable for this kind of protein-ligand-interaction modeling. An overview of the modeling process is presented, including data collection, protein and ligand descriptor computation, data preprocessing, machine-learning-model induction, and validation. Concerns and issues specific to each step in this kind of data-driven modeling are discussed. PMID:21470169
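A minimal proteochemometric model of the kind described — rows pairing protein and ligand descriptors plus their cross-terms, validated on unseen protein-ligand combinations — might look like this. All descriptors and affinities are synthetic, and closed-form ridge regression stands in for the field's varied learners.

```python
import numpy as np

rng = np.random.default_rng(6)

# Descriptor blocks: one row of P per protein, one row of L per ligand
# (e.g. binding-site property scales, topological/physicochemical descriptors).
n_prot, n_lig = 8, 25
P = rng.normal(size=(n_prot, 4))
L = rng.normal(size=(n_lig, 6))

# Each data row pairs a protein with a ligand: [protein | ligand | cross-terms].
rows = [(i, j) for i in range(n_prot) for j in range(n_lig)]
X = np.array([np.concatenate([P[i], L[j], np.outer(P[i], L[j]).ravel()])
              for i, j in rows])
w_true = rng.normal(size=X.shape[1])
y = X @ w_true + rng.normal(0, 0.1, len(rows))   # e.g. measured pKi values

def ridge(Xt, yt, lam=1.0):
    """Closed-form ridge regression for the interaction model."""
    return np.linalg.solve(Xt.T @ Xt + lam * np.eye(Xt.shape[1]), Xt.T @ yt)

# Leave-one-protein-out validation: can the model generalize to
# ligand interactions of a protein it has never seen?
held = np.array([i == 0 for i, _ in rows])
w = ridge(X[~held], y[~held])
rmse = np.sqrt(np.mean((X[held] @ w - y[held]) ** 2))
```

The cross-term block is what lets information transfer across the protein-ligand space: the held-out protein's interactions are predicted from coefficients learned on other proteins' cross-terms, which is the generalization the review emphasizes.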
RMP model based optimization of power system stabilizers in multi-machine power system.
Baek, Seung-Mook; Park, Jung-Wook
2009-01-01
This paper describes the nonlinear parameter optimization of a power system stabilizer (PSS) by using the reduced multivariate polynomial (RMP) algorithm with the one-shot property. The RMP model estimates the second-order partial derivatives of the Hessian matrix after identifying the trajectory sensitivities, which can be computed from hybrid system modeling with a differential-algebraic-impulsive-switched (DAIS) structure for a power system. Then, any nonlinear controller in the power system can be optimized by achieving a desired performance measure, mathematically represented by an objective function (OF). In this paper, the output saturation limiter of the PSS, which is used to improve low-frequency oscillation damping performance during a large disturbance, is optimally tuned by exploiting the Hessian estimated by the RMP model. Its performance is evaluated with several case studies on both single-machine infinite-bus (SMIB) and multi-machine power system (MMPS) models by time-domain simulation. In particular, all nonlinear parameters of multiple PSSs on the IEEE benchmark two-area four-machine power system are optimized to be robust against various disturbances by using the weighted sum of the OFs. PMID:19596547
A model of unsteady spatially inhomogeneous flow in a radial-axial blade machine
NASA Astrophysics Data System (ADS)
Ambrozhevich, A. V.; Munshtukov, D. A.
A two-dimensional model of the gasdynamic process in a radial-axial blade machine is proposed which allows for the instantaneous local state of the field of flow parameters, changes in the set angles along the median profile line, profile losses, and centrifugal and Coriolis forces. The model also allows for the injection of cooling air and completion of fuel combustion in the flow. The model is equally applicable to turbines and compressors. The use of the method of singularities provides for a unified and relatively simple description of various factors affecting the flow and, therefore, for computational efficiency.
Extreme learning machine based spatiotemporal modeling of lithium-ion battery thermal dynamics
NASA Astrophysics Data System (ADS)
Liu, Zhen; Li, Han-Xiong
2015-03-01
Due to the overwhelming complexity of the electrochemically related behaviors and internal structure of lithium-ion batteries, it is difficult to obtain an accurate mathematical expression of their thermal dynamics based on physical principles. In this paper, a data-based thermal model which is suitable for online temperature distribution estimation is proposed for lithium-ion batteries. Based on the physics-based model, a simple but effective low-order model is obtained using the Karhunen-Loeve decomposition method. The corresponding uncertain chemically related heat generation term in the low-order model is approximated using an extreme learning machine. All uncertain parameters in the low-order model can be determined analytically in a linear way. Finally, the temperature distribution of the whole battery can be estimated in real time based on the identified low-order model. Simulation results demonstrate the effectiveness of the proposed model. The simple training process of the model makes it superior for onboard application.
NASA Astrophysics Data System (ADS)
Kayastha, Nagendra; Solomatine, Dimitri; Lal Shrestha, Durga; van Griensven, Ann
2013-04-01
In recent years, much attention in the hydrologic literature has been given to model parameter uncertainty analysis. The robustness of the uncertainty estimation depends on the efficiency of the sampling method used to generate the best-fit responses (outputs) and on its ease of use. This paper aims to investigate: (1) how sampling strategies affect the uncertainty estimates of hydrological models, and (2) how to use this information in machine learning predictors of model uncertainty. Sampling of parameters may employ various algorithms. We compared seven different algorithms, namely Monte Carlo (MC) simulation, generalized likelihood uncertainty estimation (GLUE), Markov chain Monte Carlo (MCMC), the shuffled complex evolution metropolis algorithm (SCEMUA), differential evolution adaptive metropolis (DREAM), particle swarm optimization (PSO) and adaptive cluster covering (ACCO) [1]. These methods were applied to estimate the uncertainty of streamflow simulation using the conceptual model HBV and the semi-distributed hydrological model SWAT. The Nzoia catchment in western Kenya is considered as the case study. The results are compared and analysed based on the shape of the posterior distribution of parameters and the uncertainty results on model outputs. The MLUE method [2] uses the results of Monte Carlo sampling (or any other sampling scheme) to build a machine learning (regression) model U able to predict the uncertainty (quantiles of the pdf) of the outputs of a hydrological model H. Inputs to these models are specially identified representative variables (past precipitation events and flows). The trained machine learning models are then employed to predict the model output uncertainty which is specific for the new input data. The problem here is that different sampling algorithms result in different data sets used to train such a model U, which leads to several models (and there is no clear evidence which model is the best, since there is no basis for comparison). A solution could be to form a committee of all models U and
Bayesian reliability modeling and assessment solution for NC machine tools under small-sample data
NASA Astrophysics Data System (ADS)
Yang, Zhaojun; Kan, Yingnan; Chen, Fei; Xu, Binbin; Chen, Chuanhai; Yang, Chuangui
2015-11-01
Although Markov chain Monte Carlo (MCMC) algorithms are accurate, many factors may cause instability when they are utilized in reliability analysis; such instability makes these algorithms unsuitable for widespread engineering applications. Thus, a reliability modeling and assessment solution aimed at small-sample data of numerical control (NC) machine tools is proposed on the basis of Bayesian theory. An expert-judgment process of fusing multi-source prior information is developed to obtain the Weibull parameters' prior distributions and reduce the subjective bias of usual expert-judgment methods. The grid approximation method is applied to the two-parameter Weibull distribution to derive the formulas for the parameters' posterior distributions and resolve the calculation difficulty of high-dimensional integration. The method is then applied to the real data of a type of NC machine tool to implement a reliability assessment and obtain the mean time between failures (MTBF). The relative error of the proposed method is 5.8020×10⁻⁴ compared with the MTBF obtained by the MCMC algorithm. This result indicates that the proposed method is as accurate as MCMC. The newly developed solution for reliability modeling and assessment of NC machine tools under small-sample data is easy, practical, and highly suitable for widespread application in the engineering field; in addition, the solution does not reduce accuracy.
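The grid-approximation idea above can be illustrated with a minimal sketch: evaluate the two-parameter Weibull likelihood on a grid of (shape, scale) values, normalize to get a posterior, and read off the posterior-mean MTBF. The flat prior, grid ranges, and synthetic 20-point failure sample are illustrative assumptions; the paper instead fuses expert-judgment priors from multiple sources.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)
data = rng.weibull(2.0, size=20) * 100.0       # synthetic failure times (true k=2, lam=100)

k_grid = np.linspace(0.5, 4.0, 120)            # Weibull shape grid
lam_grid = np.linspace(40.0, 200.0, 120)       # Weibull scale grid
K, L = np.meshgrid(k_grid, lam_grid, indexing="ij")

# Weibull log-likelihood summed over the sample, evaluated on the whole grid
loglik = np.zeros_like(K)
for t in data:
    loglik += (np.log(K) - np.log(L)
               + (K - 1.0) * (np.log(t) - np.log(L))
               - (t / L) ** K)

post = np.exp(loglik - loglik.max())           # unnormalized posterior (flat prior)
post /= post.sum()                             # normalize over the grid

# Posterior-mean MTBF via the Weibull mean lam * Gamma(1 + 1/k)
mtbf = np.sum(post * L * np.vectorize(gamma)(1.0 + 1.0 / K))
```

The double loop over grid cells is replaced by vectorized array arithmetic, which is what makes grid approximation practical as a substitute for high-dimensional integration in the small-sample setting.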
Experimental study on light induced influence model to mice using support vector machine
NASA Astrophysics Data System (ADS)
Ji, Lei; Zhao, Zhimin; Yu, Yinshan; Zhu, Xingyue
2014-08-01
Previous researchers have studied the different influences of light irradiation on animals, including retinal damage, changes in internal indices, and so on. However, a model of light-induced damage to animals using physiological indicators as features in a machine learning method has never been established. This study was designed to evaluate the changes in microvascular diameter, the serum absorption spectrum, and the blood flow influenced by light irradiation of different wavelengths, powers and exposure times with a support vector machine (SVM). Microscopic images of the mice auricle were recorded and the vessel diameters were calculated by a computer program. The serum absorption spectra were analyzed. The results show that training sample rates of 20% and 50% have almost the same correct recognition rate. Better performance and accuracy were achieved by the third-order polynomial kernel SVM with quadratic optimization, which worked suitably for predicting light-induced damage to organisms.
An application of three-dimensional modeling in the cutting machine of intersecting line software
NASA Astrophysics Data System (ADS)
Lu, Jixiang
2011-11-01
This paper describes a software platform for an intersecting-line cutting machine. The platform consists of three parts: the interface for parameter input and modification, the three-dimensional display of the main tube and branch tube, and the cutting simulation with G-code output. Intersection data are obtained from an intersection algorithm, and a three-dimensional model and dynamic simulation are built on the intersecting-line cutting data. By changing the parameters and the assembly sequence of the main tube and branch tube, the user can see the modified two-dimensional and three-dimensional graphics and the corresponding G-code output file. This method has been applied to a practical intersecting-line cutting machine.
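For the simplest case of the intersection algorithm mentioned above, a branch tube meeting the main tube at a right angle, the cutting curve has a closed form: with the main tube of radius R along the x-axis (y² + z² = R²) and the branch tube of radius r along the z-axis, a point at angle φ on the branch circumference is cut at height z(φ) = √(R² − r² sin²φ). The radii and sampling below are illustrative assumptions, and the general oblique case handled by the software is more involved.

```python
import numpy as np

def cutting_curve(R, r, n=360):
    """Developed (unrolled) cutting curve for a perpendicular tube intersection.

    R: main tube radius, r: branch tube radius (r <= R).
    Returns circumferential coordinate s and cut height z sampled at n angles.
    """
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = np.sqrt(R**2 - (r * np.sin(phi)) ** 2)  # height of cut along branch axis
    s = r * phi                                  # unrolled coordinate around branch
    return s, z

s, z = cutting_curve(R=100.0, r=40.0)
```

Sampling (s, z) at the machine's angular resolution gives exactly the point list a G-code generator would interpolate between.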
Choi, Ickwon; Chung, Amy W; Suscovich, Todd J; Rerks-Ngarm, Supachai; Pitisuttithum, Punnee; Nitayaphan, Sorachai; Kaewkungwal, Jaranit; O'Connell, Robert J; Francis, Donald; Robb, Merlin L; Michael, Nelson L; Kim, Jerome H; Alter, Galit; Ackerman, Margaret E; Bailey-Kellogg, Chris
2015-04-01
The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates. PMID:25874406
Mehra, Lucky K; Cowger, Christina; Gross, Kevin; Ojiambo, Peter S
2016-01-01
Pre-planting factors have been associated with the late-season severity of Stagonospora nodorum blotch (SNB), caused by the fungal pathogen Parastagonospora nodorum, in winter wheat (Triticum aestivum). The relative importance of these factors in the risk of SNB has not been determined and this knowledge can facilitate disease management decisions prior to planting of the wheat crop. In this study, we examined the performance of multiple regression (MR) and three machine learning algorithms namely artificial neural networks, categorical and regression trees, and random forests (RF), in predicting the pre-planting risk of SNB in wheat. Pre-planting factors tested as potential predictor variables were cultivar resistance, latitude, longitude, previous crop, seeding rate, seed treatment, tillage type, and wheat residue. Disease severity assessed at the end of the growing season was used as the response variable. The models were developed using 431 disease cases (unique combinations of predictors) collected from 2012 to 2014 and these cases were randomly divided into training, validation, and test datasets. Models were evaluated based on the regression of observed against predicted severity values of SNB, sensitivity-specificity ROC analysis, and the Kappa statistic. A strong relationship was observed between late-season severity of SNB and specific pre-planting factors in which latitude, longitude, wheat residue, and cultivar resistance were the most important predictors. The MR model explained 33% of variability in the data, while machine learning models explained 47 to 79% of the total variability. Similarly, the MR model correctly classified 74% of the disease cases, while machine learning models correctly classified 81 to 83% of these cases. Results show that the RF algorithm, which explained 79% of the variability within the data, was the most accurate in predicting the risk of SNB, with an accuracy rate of 93%. The RF algorithm could allow early assessment of
NASA Astrophysics Data System (ADS)
Solomatine, Dimitri
2016-04-01
When speaking about model uncertainty, many authors implicitly assume data uncertainty (mainly in parameters or inputs), which is described probabilistically by distributions. Often, however, it is useful to look into the residual uncertainty as well. It is hence reasonable to classify the main approaches to uncertainty analysis with respect to the two main types of model uncertainty that can be distinguished: A. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e., its uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on these data. The following methods can be mentioned: (a) the quantile regression (QR) method by Koenker and Bassett, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) a more recent approach that takes into account the input variables influencing such uncertainty and uses more advanced (non-linear) machine learning methods (neural networks, model trees, etc.), the UNEEC method [2,3,7]; (c) the even more recent DUBRAUE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals (it corrects the model residual first and then carries out the uncertainty prediction by an autoregressive statistical model) [5]. B. The data uncertainty (parametric and/or input): in this case we study the propagation of uncertainty (typically represented probabilistically) from parameters or inputs to the model outputs. In the case of simple functions representing models, analytical approaches can be used, or approximation methods (e.g., the first-order second-moment method). However, for real complex non-linear models implemented in software there is no other choice except using
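The residual-uncertainty idea of case A can be sketched in a deliberately simplified form: group past model errors by a driving input variable and use the empirical error quantiles per group as an uncertainty predictor for new inputs. This binned scheme is a crude stand-in for the regression-based QR/UNEEC methods cited above, and the synthetic flow/error data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
flow = rng.uniform(0.0, 100.0, size=2000)        # driving input (e.g. simulated flow)
residual = rng.normal(0.0, 0.1 + 0.02 * flow)    # past model errors, growing with flow

edges = np.linspace(0.0, 100.0, 11)              # 10 bins over the input range
q05, q95 = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    r = residual[(flow >= lo) & (flow < hi)]
    q05.append(np.quantile(r, 0.05))             # lower error quantile per bin
    q95.append(np.quantile(r, 0.95))             # upper error quantile per bin

def predict_interval(x):
    """90% predictive error band for a new input value x."""
    i = min(int(x // 10), 9)                     # bin index for the new input
    return q05[i], q95[i]

lo_band, hi_band = predict_interval(90.0)
```

Replacing the binning with a regression model of the quantiles on several input variables recovers the spirit of the QR and UNEEC methods.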
Mathematical concepts for modeling human behavior in complex man-machine systems
NASA Technical Reports Server (NTRS)
Johannsen, G.; Rouse, W. B.
1979-01-01
Many human behavior (e.g., manual control) models have been found to be inadequate for describing processes in certain real complex man-machine systems. An attempt is made to find a way to overcome this problem by examining the range of applicability of existing mathematical models with respect to the hierarchy of human activities in real complex tasks. Automobile driving is chosen as a baseline scenario, and a hierarchy of human activities is derived by analyzing this task in general terms. A structural description leads to a block diagram and a time-sharing computer analogy.
Fast and accurate modeling of molecular atomization energies with machine learning.
Rupp, Matthias; Tkatchenko, Alexandre; Müller, Klaus-Robert; von Lilienfeld, O Anatole
2012-02-01
We introduce a machine learning model to predict atomization energies of a diverse set of organic molecules, based on nuclear charges and atomic positions only. The problem of solving the molecular Schrödinger equation is mapped onto a nonlinear statistical regression problem of reduced complexity. Regression models are trained on and compared to atomization energies computed with hybrid density-functional theory. Cross validation over more than seven thousand organic molecules yields a mean absolute error of ∼10 kcal/mol. Applicability is demonstrated for the prediction of molecular atomization potential energy curves. PMID:22400967
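The nonlinear regression setup described above can be sketched with a Gaussian-kernel ridge model trained in closed form. The random "descriptor" vectors and toy energy target below are placeholders; the paper uses Coulomb-matrix descriptors built from nuclear charges and positions, and DFT atomization energies as labels.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))                    # toy molecular descriptor vectors
y = np.sin(X[:, 0]) + X[:, 1] ** 2               # toy "atomization energy" target

def gaussian_kernel(A, B, sigma=2.0):
    """Gaussian (RBF) kernel between descriptor sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

lam = 1e-6                                       # ridge regularizer
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # closed-form training

def predict(Xnew):
    return gaussian_kernel(Xnew, X) @ alpha

train_mae = np.mean(np.abs(predict(X) - y))
```

In practice sigma and lam are chosen by cross-validation, which is also how the ~10 kcal/mol mean absolute error quoted in the abstract would be estimated.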
A Multianalyzer Machine Learning Model for Marine Heterogeneous Data Schema Mapping
Yan, Wang; Jiajin, Le; Yun, Zhang
2014-01-01
The main challenge that marine heterogeneous data integration faces is accurate schema mapping between heterogeneous data sources. In order to improve schema mapping efficiency and obtain more accurate learning results, this paper proposes a heterogeneous data schema mapping method based on a multianalyzer machine learning model. The multianalyzer analyzes the learning results comprehensively, and a fuzzy comprehensive evaluation system is introduced for evaluating the output results and for multi-factor quantitative judging. Finally, a data mapping comparison experiment on East China Sea observing data confirms the effectiveness of the model and shows the multianalyzer's clear improvement of the mapping error rate. PMID:25250372
NASA Astrophysics Data System (ADS)
Matasci, G.; Pozdnoukhov, A.; Kanevski, M.
2009-04-01
The recent progress in environmental monitoring technologies allows capturing extensive amounts of data that can be used to assist in avalanche forecasting. While it is not straightforward to directly obtain the stability factors with the available technologies, snow-pack profiles and especially meteorological parameters are becoming more and more available at finer spatial and temporal scales. Being very useful for improving physical modelling, these data are also of particular interest regarding their use with contemporary data-driven techniques of machine learning. Thus, the use of a support vector machine classifier opens a way to discriminate the ``safe'' and ``dangerous'' conditions in the feature space of factors related to avalanche activity based on historical observations. The input space of factors is constructed from a number of direct and indirect snowpack and weather observations, pre-processed with heuristic and physical models into a high-dimensional, spatially varying vector of input parameters. The particular system presented in this work is implemented for the avalanche-prone site of Ben Nevis, Lochaber region, in Scotland. A data-driven model for spatio-temporal avalanche danger forecasting provides an avalanche danger map for this local (5x5 km) region at a resolution of 10 m, based on weather and avalanche observations made by forecasters on a daily basis at the site. We present further work aimed at overcoming ``black-box'' type modelling, a disadvantage for which machine learning methods are often criticized. It explores what the data-driven method of support vector machines has to offer to improve the interpretability of the forecast, uncovers the properties of the developed system with respect to highlighting which important features led to a particular prediction (both in time and space), and presents an analysis of the sensitivity of the prediction with respect to the varying input parameters. The purpose of the
A hybrid prognostic model for multistep ahead prediction of machine condition
NASA Astrophysics Data System (ADS)
Roulias, D.; Loutas, T. H.; Kostopoulos, V.
2012-05-01
Prognostics are the future trend in condition-based maintenance. In the current framework a data-driven prognostic model is developed. The typical procedure of developing such a model comprises (a) the selection of features which correlate well with the gradual degradation of the machine and (b) the training of a mathematical tool. In this work the data are taken from a laboratory-scale single-stage gearbox under multi-sensor monitoring. Tests monitoring the condition of the gear pair from a healthy state until total breakdown, following several days of continuous operation, were conducted. After basic pre-processing of the derived data, an indicator that correlated well with the gearbox condition was obtained. Subsequently, the time series is split into a few distinguishable time regions via an intelligent data clustering scheme. Each operating region is modelled with a feed-forward artificial neural network (FFANN) scheme. The performance of the proposed model is tested by applying the system to predict the machine degradation level on unseen data. The results show the plausibility and effectiveness of the model in following the trend of the time series even in the case that a sudden change occurs. Moreover, the model shows an ability to generalise for application to similar mechanical assets.
Jia, Lei; Yarlagadda, Ramya; Reed, Charles C.
2015-01-01
The thermostability of protein point mutations is a common concern in protein engineering. An application that predicts the thermostability of mutants can be helpful for guiding the decision-making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find “hot spots” in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that consists of thousands of protein mutants’ experimentally measured thermostability. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy changes calculated with Rosetta, structural information on the point mutations, as well as amino acid physical properties were obtained for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression are used for building the prediction models. Binary and ternary classification as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods are discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to increasing the accuracy of the prediction models. PMID:26361227
Estimating the complexity of 3D structural models using machine learning methods
NASA Astrophysics Data System (ADS)
Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques
2016-04-01
Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards, or in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help in defining the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional effort is required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the metrics for measuring complexity can be approximated as the degree of difficulty associated with predicting the distribution of geological objects, calculated from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during their building is assessed using various parameters (such as the number of faults, the number of parts in a surface object, the number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary to reproduce the actual 3D model without error, at a given precision, using machine learning algorithms.
[Modelling a penicillin fed-batch fermentation using least squares support vector machines].
Liu, Yi; Wang, Hai-Qing
2006-01-01
Biochemical processes are usually characterized as seriously time-varying and nonlinear dynamic systems. Building first-principles models for them is very costly and difficult due to the absence of known inherent mechanisms and efficient on-line sensors. Furthermore, such detailed and complicated models do not necessarily guarantee good performance in practice. An approach via least squares support vector machines (LS-SVM) based on the Pensim simulator is proposed for modelling the penicillin fed-batch fermentation process, and an adjustment strategy for the parameters of the LS-SVM is presented. Based on the proposed modelling method, predictive models of penicillin concentration, biomass concentration and substrate concentration are obtained using very limited on-line measurements. The results show that the models established are more accurate and efficient, and suffice for the requirements of control and optimization of biochemical processes. PMID:16572855
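A minimal LS-SVM regression sketch may clarify why this class of model trains cheaply: unlike a standard SVM, LS-SVM replaces the inequality constraints with equalities, so training reduces to solving one linear system in the dual variables alpha and the bias b. The RBF width, regularization gamma, and the toy sigmoid "concentration" curve below are illustrative assumptions, not Pensim quantities.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 80).reshape(-1, 1)     # fermentation time (toy)
y = 1.0 / (1.0 + np.exp(-(t[:, 0] - 5.0))) + 0.01 * rng.normal(size=80)

def rbf(A, B, sigma=1.0):
    """RBF kernel between 1-D sample sets A and B."""
    d2 = (A[:, None, 0] - B[None, :, 0]) ** 2
    return np.exp(-d2 / (2.0 * sigma**2))

gamma = 100.0                                      # regularization parameter
n = len(t)
K = rbf(t, t)
# LS-SVM dual system: [[0, 1^T], [1, K + I/gamma]] @ [b, alpha] = [0, y]
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate(([0.0], y))
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

pred = rbf(t, t) @ alpha + b                       # in-sample prediction
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

The adjustment strategy mentioned in the abstract corresponds to tuning sigma and gamma; here they are simply fixed.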
Modeling of surface topography in single-point diamond turning machine.
Huang, Chih-Yu; Liang, Rongguang
2015-08-10
Surface roughness is an important factor in characterizing the performance of high-precision optical surfaces. In this paper, we propose a model to estimate the surface roughness generated by a single-point diamond turning machine. In this model, we take into consideration the basic tool-cutting parameters as well as the relative vibration between the tool and the workpiece in both the infeed and feeding directions. Current models focus on the relative tool-workpiece vibration in the infeed direction. However, based on our experimental measurements, the contribution of relative tool-workpiece vibration in the feeding direction is significant and cannot be ignored in the model. The proposed model is able to describe the surface topography for flat as well as cylindrical surfaces of the workpiece. It has the potential to describe more complex spherical surfaces or freeform surfaces. Our experimental study with metal materials shows good correlation between the model and the diamond-turned surfaces. PMID:26368364
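The feed-direction part of such a topography model can be sketched directly: the ideal profile is the arc left by a round-nosed tool repeated at the feed, giving the textbook peak-to-valley height f²/(8R), and a sinusoidal term stands in for the tool-workpiece vibration the paper adds. The feed, nose radius, and vibration parameters below are assumptions for illustration.

```python
import numpy as np

def profile(x, feed, R, vib_amp=0.0, vib_freq=0.0):
    """Feed-direction surface height for a round-nosed tool plus vibration.

    x: positions along the feed direction, feed: feed per revolution,
    R: tool nose radius (same units), vib_*: assumed sinusoidal vibration.
    """
    # distance of each point to the nearest tool-center position
    dx = (x + feed / 2.0) % feed - feed / 2.0
    ideal = R - np.sqrt(R**2 - dx**2)              # tool-nose arc scallops
    return ideal + vib_amp * np.sin(2.0 * np.pi * vib_freq * x)

feed, R = 5e-3, 1.0                                # mm per rev, mm nose radius
x = np.linspace(0.0, 10 * feed, 5001)
z = profile(x, feed, R)                            # vibration-free case
pv = z.max() - z.min()                             # peak-to-valley roughness
theory = feed**2 / (8.0 * R)                       # textbook approximation
```

Adding a nonzero `vib_amp` immediately raises the peak-to-valley value, which is the qualitative point of including the feeding-direction vibration term in the model.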
Estimating Inflows to Lake Okeechobee Using Climate Indices: A Machine Learning Modeling Approach
NASA Astrophysics Data System (ADS)
Kalra, A.; Ahmad, S.
2008-12-01
The operation of regional water management systems that include lakes and storage reservoirs for flood control and water supply can be significantly improved by using climate indices. This research is focused on forecasting lag-1 annual inflow to Lake Okeechobee, located in South Florida, using annual oceanic-atmospheric indices of the Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO), Atlantic Multidecadal Oscillation (AMO), and El Nino-Southern Oscillation (ENSO). Support Vector Machine (SVM) and Least Squares Support Vector Machine (LSSVM) models, belonging to the class of data-driven models, are developed to forecast annual lake inflow using annual oceanic-atmospheric index data from 1914 to 2003. The models were trained with 80 years of data and tested on 10 years of data. Based on the Correlation Coefficient, Root Mean Square Error, and Mean Absolute Error, model predictions were in good agreement with measured inflow volumes. Sensitivity analysis, performed to evaluate the effect of individual and coupled oscillations, revealed a strong signal for the AMO and ENSO indices compared to the PDO and NAO indices for one-year lead-time inflow forecasts. Inflow predictions from the SVM models were better when compared with the predictions obtained from feed-forward back-propagation Artificial Neural Network (ANN) models.
A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces
NASA Astrophysics Data System (ADS)
Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.
2006-06-01
The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for one or both of the two datasets.
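The baseline Wiener filter amounts to a least-squares FIR mapping from lagged inputs to the output. A minimal sketch, with a hypothetical toy sequence standing in for binned spike counts and hand kinematics:

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * v for x, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def wiener_fit(x, y, taps):
    """Least-squares FIR filter (Wiener solution via the normal equations)."""
    rows = [[x[t - k] for k in range(taps)] for t in range(taps - 1, len(x))]
    targ = y[taps - 1:]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(taps)] for i in range(taps)]
    b = [sum(r[i] * v for r, v in zip(rows, targ)) for i in range(taps)]
    return solve(A, b)

# toy "firing rate" input; output is a known 2-tap FIR of it, so the
# fitted weights should recover [0.5, 0.3] exactly
x = [1.0, 2.0, 0.5, 3.0, 1.5, 2.5, 0.8, 1.2]
y = [0.0] + [0.5 * x[t] + 0.3 * x[t - 1] for t in range(1, len(x))]
w = wiener_fit(x, y, 2)
```

In the actual decoding setting the design matrix holds lagged firing rates for every recorded neuron, which is where the thousands of parameters mentioned above come from.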
Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard
2013-01-01
Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, putting building energy modeling out of reach for smaller projects. In this paper, we describe the "Autotune" research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
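The offline-simulate/online-agent idea can be sketched with a toy response surface standing in for EnergyPlus (the parameter names, grid, and response function are hypothetical):

```python
import itertools

def fake_simulation(params):
    """Stand-in for an expensive EnergyPlus run (hypothetical response)."""
    insulation, setpoint = params
    return 100.0 / insulation + 2.0 * abs(setpoint - 21.0)

def build_agent(grid_axes, simulate):
    """Offline: exhaustively sample the parameter grid (the supercomputer step).
    Online: answer queries cheaply from the nearest sampled point, a minimal
    stand-in for a trained surrogate 'agent'."""
    table = {p: simulate(p) for p in itertools.product(*grid_axes)}
    def agent(params):
        key = min(table, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, params)))
        return table[key]
    return agent

agent = build_agent(([1.0, 2.0, 4.0], [19.0, 21.0, 23.0]), fake_simulation)
```

Once built, the agent answers a query with a dictionary lookup rather than a full simulation, which is the cost asymmetry the Autotune approach exploits (with real machine learning models in place of the nearest-neighbour table).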
Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP
Deng, Li; Wang, Guohua; Chen, Bo
2015-01-01
In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built with CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort, so that operating comfort can be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based prediction model is fast and efficient, has good prediction accuracy, and can improve design efficiency. PMID:26448740
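The GEP side of this can be illustrated with a tiny evaluator for K-expressions (Karva notation), the linear genes that GEP decodes breadth-first into expression trees; the gene and operator set below are illustrative, not the paper's fitted function:

```python
ARITY = {'+': 2, '-': 2, '*': 2}
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

def eval_karva(gene, env):
    """Decode a valid K-expression breadth-first into levels, then evaluate
    the implied expression tree bottom-up. env maps terminal symbols to values."""
    levels, i = [[gene[0]]], 1
    while i < len(gene) and any(s in ARITY for s in levels[-1]):
        need = sum(ARITY.get(s, 0) for s in levels[-1])
        levels.append(gene[i:i + need])
        i += need
    vals = [env[s] for s in levels[-1]]          # deepest level: all terminals
    for lev in reversed(levels[:-1]):
        new, k = [], 0
        for s in lev:
            n = ARITY.get(s, 0)
            new.append(OPS[s](*vals[k:k + n]) if n else env[s])
            k += n
        vals = new
    return vals[0]
```

For example, the gene `['+', '*', 'a', 'b', 'c']` decodes to the tree `(b * c) + a`; a GEP run evolves such genes and scores each one by how well its decoded expression fits the training data (here, comfort scores against the four comfort impact factors).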
Biosimilarity Assessments of Model IgG1-Fc Glycoforms Using a Machine Learning Approach.
Kim, Jae Hyun; Joshi, Sangeeta B; Tolbert, Thomas J; Middaugh, C Russell; Volkin, David B; Smalter Hall, Aaron
2016-02-01
Biosimilarity assessments are performed to decide whether 2 preparations of complex biomolecules can be considered "highly similar." In this work, a machine learning approach is demonstrated as a mathematical tool for such assessments using a variety of analytical data sets. As proof-of-principle, physical stability data sets from 8 samples, 4 well-defined immunoglobulin G1-Fragment crystallizable glycoforms in 2 different formulations, were examined (see More et al., companion article in this issue). The data sets included triplicate measurements from 3 analytical methods across different pH and temperature conditions (2066 data features). Established machine learning techniques were used to determine whether the data sets contain sufficient discriminative power in this application. The support vector machine classifier identified the 8 distinct samples with high accuracy. For these data sets, there exists a minimum threshold in terms of information quality and volume to grant enough discriminative power. Generally, data from multiple analytical techniques, multiple pH conditions, and at least 200 representative features were required to achieve the highest discriminative accuracy. In addition to classification accuracy tests, various methods such as sample space visualization, similarity analysis based on Euclidean distance, and feature ranking by mutual information scores are demonstrated to display their effectiveness as modeling tools for biosimilarity assessments. PMID:26869422
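The Euclidean-distance similarity analysis mentioned above reduces to a nearest-reference lookup in feature space; a minimal sketch with hypothetical glycoform feature vectors:

```python
def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def most_similar(sample, references):
    """references: name -> feature vector. Returns the closest reference
    preparation and its distance (smaller = more similar)."""
    best = min(references, key=lambda k: euclidean(sample, references[k]))
    return best, euclidean(sample, references[best])

# hypothetical condensed feature vectors for two reference glycoforms
refs = {"glycoform_A": [0.9, 0.1, 0.3], "glycoform_B": [0.2, 0.8, 0.7]}
name, dist = most_similar([0.85, 0.15, 0.35], refs)
```

In the study the vectors hold hundreds of analytical features (the ~2066 features mentioned above, suitably normalized); the distance then serves as one quantitative ingredient of a "highly similar" judgment alongside the classifier results.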
NASA Astrophysics Data System (ADS)
Goetz, Jason; Brenning, Alexander; Petschko, Helene; Leopold, Philip
2015-04-01
With so many techniques now available for landslide susceptibility modelling, it can be challenging to decide which technique to apply. Generally speaking, the criteria for model selection should be tied closely to the end users' purpose, which could be spatial prediction, spatial analysis or both. In our research, we focus on comparing the spatial predictive abilities of landslide susceptibility models. We illustrate how spatial cross-validation, a statistical approach for assessing spatial prediction performance, can be applied with the area under the receiver operating characteristic curve (AUROC) as a prediction measure for model comparison. Several machine learning and statistical techniques are evaluated for prediction in Lower Austria: support vector machine, random forest, bundling with penalized linear discriminant analysis, logistic regression, weights of evidence, and the generalized additive model. In addition to predictive performance, the importance of predictor variables in each model was estimated using spatial cross-validation by calculating the change in AUROC performance when variables are randomly permuted. The susceptibility modelling techniques were tested in three areas of interest in Lower Austria, which have unique geologic conditions associated with landslide occurrence. Overall, we found for the majority of comparisons that there were few practical or even statistically significant differences in AUROCs; that is, the models' prediction performances were very similar. Therefore, in addition to prediction, the ability to interpret models for spatial analysis and the qualitative characteristics of the prediction surface (map) are considered and discussed. The measure of variable importance provided some insight into model behaviour for prediction, in particular for "black-box" models. However, there were no clear patterns across all areas of interest as to why certain variables were given more importance than others.
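The permutation-based variable importance described here (the drop in AUROC when a predictor is shuffled) can be sketched as follows; the rank-based AUROC and the toy data are illustrative, and the spatial partitioning of the cross-validation folds is omitted for brevity:

```python
import random

def auroc(scores, labels):
    """Probability that a random positive outranks a random negative (ties half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def permutation_importance(predict, X, y, feat, trials=20, seed=0):
    """Mean drop in AUROC after randomly permuting one predictor column."""
    rng = random.Random(seed)
    base = auroc([predict(x) for x in X], y)
    drop = 0.0
    for _ in range(trials):
        col = [x[feat] for x in X]
        rng.shuffle(col)
        Xp = [x[:feat] + [c] + x[feat + 1:] for x, c in zip(X, col)]
        drop += base - auroc([predict(x) for x in Xp], y)
    return drop / trials

# toy susceptibility scores: feature 0 is informative, feature 1 is ignored noise
X = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.5], [0.1, 0.9]]
y = [1, 1, 0, 0]
predict = lambda x: x[0]
imp0 = permutation_importance(predict, X, y, 0)
imp1 = permutation_importance(predict, X, y, 1)
```

Shuffling the informative predictor degrades the AUROC while shuffling the ignored one changes nothing, which is exactly the contrast the importance measure reads off, model-agnostically, for "black-box" models.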
Goldberg, L.F.
1990-08-01
The activities described in this report do not constitute a continuum but rather a series of linked smaller investigations in the general area of one- and two-dimensional Stirling machine simulation. The initial impetus for these investigations was the development and construction of the Mechanical Engineering Test Rig (METR) under a grant awarded by NASA to Dr. Terry Simon at the Department of Mechanical Engineering, University of Minnesota. The purpose of the METR is to provide experimental data on oscillating turbulent flows in Stirling machine working fluid flow path components (heater, cooler, regenerator, etc.) with particular emphasis on laminar/turbulent flow transitions. Hence, the initial goals for the grant awarded by NASA were, broadly, to provide computer simulation backup for the design of the METR and to analyze the results produced. This was envisaged in two phases: first, to apply an existing one-dimensional Stirling machine simulation code to the METR, and second, to adapt a two-dimensional fluid mechanics code, which had been developed for simulating high Rayleigh number buoyant cavity flows, to the METR. The key aspect of this latter component was the development of an appropriate turbulence model suitable for generalized application to Stirling simulation. A final step was then to apply the two-dimensional code to an existing Stirling machine for which adequate experimental data exist. The work described herein was carried out over a period of three years on a part-time basis. Forty percent of the first year's funding was provided as a match to the NASA funds by the Underground Space Center, University of Minnesota, which also made its computing facilities available to the project at no charge.
NASA Astrophysics Data System (ADS)
Goetz, J. N.; Brenning, A.; Petschko, H.; Leopold, P.
2015-08-01
Statistical and now machine learning prediction methods have been gaining popularity in the field of landslide susceptibility modeling. In particular, these data-driven approaches show promise when tackling the challenge of mapping landslide-prone areas for large regions, which may not have sufficient geotechnical data to conduct physically-based methods. Currently, there is no best method for empirical susceptibility modeling. Therefore, this study presents a comparison of traditional statistical and novel machine learning models applied for regional scale landslide susceptibility modeling. These methods were evaluated by spatial k-fold cross-validation estimation of the predictive performance, assessment of variable importance for gaining insights into model behavior, and by the appearance of the prediction (i.e. susceptibility) map. The modeling techniques applied were logistic regression (GLM), generalized additive models (GAM), weights of evidence (WOE), the support vector machine (SVM), random forest classification (RF), and bootstrap aggregated classification trees (bundling) with penalized discriminant analysis (BPLDA). These modeling methods were tested for three areas in the province of Lower Austria, Austria. The areas are characterized by different geological and morphological settings. Random forest and bundling classification techniques had the overall best predictive performances. However, the performances of all modeling techniques were, for the majority of comparisons, not significantly different from each other; depending on the area of interest, the overall median estimated area under the receiver operating characteristic curve (AUROC) differences ranged from 2.9 to 8.9 percentage points, and the overall median estimated true positive rate (TPR) differences, measured at a 10% false positive rate (FPR), ranged from 11 to 15 percentage points. The relative importance of each predictor was generally different between the modeling methods. However, slope angle, surface roughness and plan
NASA Astrophysics Data System (ADS)
Solomatine, Dimitri
2016-04-01
When speaking about model uncertainty, many authors implicitly assume data uncertainty (mainly in parameters or inputs), which is probabilistically described by distributions. Often, however, it is worth looking into the residual uncertainty as well. It is hence reasonable to classify the main approaches to uncertainty analysis with respect to the two main types of model uncertainty that can be distinguished: A. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e. their uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on this data. The following methods can be mentioned: (a) the quantile regression (QR) method by Koenker and Bassett, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) a more recent approach that takes into account the input variables influencing such uncertainty and uses more advanced (non-linear) machine learning methods (neural networks, model trees, etc.), the UNEEC method [2,3,7]; (c) the even more recent DUBRAUE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals (it first corrects the model residual and then carries out the uncertainty prediction with an autoregressive statistical model) [5]. B. The data uncertainty (parametric and/or input). In this case we study the propagation of uncertainty (typically presented probabilistically) from parameters or inputs to the model outputs. In the case of simple functions representing models, analytical approaches can be used, or approximation methods (e.g., the first-order second-moment method). However, for real complex non-linear models implemented in software there is no other choice except using
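Method (a), quantile regression, rests on the pinball loss, whose minimizer for a constant predictor is the empirical quantile itself. A minimal subgradient-descent sketch on hypothetical model residuals:

```python
def fit_quantile(y, tau, steps=8000, lr=0.05):
    """Subgradient descent on the pinball (quantile) loss for a constant model.
    The subgradient is -tau for points above q and (1 - tau) for points below,
    so descent settles where a fraction tau of the data lies below q."""
    q = sum(y) / len(y)
    for _ in range(steps):
        g = sum(-tau if v > q else (1.0 - tau) for v in y) / len(y)
        q -= lr * g
    return q

residuals = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 3.0]   # hypothetical model errors
q50 = fit_quantile(residuals, 0.5)   # median of the residuals
q90 = fit_quantile(residuals, 0.9)   # upper predictive bound
```

Full quantile regression replaces the constant q with a linear function of the inputs under the same loss; the UNEEC method cited above goes further and fits non-linear machine learning models to the same target.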
Elizondo, Marcelo A.; Tuffner, Francis K.; Schneider, Kevin P.
2016-01-01
Unlike transmission systems, distribution feeders in North America operate under unbalanced conditions at all times, and generally have a single strong voltage source. When a distribution feeder is connected to a strong substation source, the system is dynamically very stable, even for large transients. However, if a distribution feeder, or part of the feeder, is separated from the substation and begins to operate as an islanded microgrid, transient dynamics become more of an issue. To assess the impact of transient dynamics at the distribution level, it is not appropriate to use traditional transmission solvers, which generally assume transposed lines and balanced loads. Full electromagnetic solvers capture a high level of detail, but that required detail makes it difficult to model large systems. This paper proposes an electromechanical transient model of a synchronous machine for distribution-level modeling and microgrids. This approach includes not only the machine model, but also its interface with an unbalanced network solver, and a powerflow method to solve unbalanced conditions without a strong reference bus. The presented method is validated against a full electromagnetic transient simulation.
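For the balanced, single-machine special case, electromechanical (swing-equation) dynamics reduce to two states: rotor angle and speed deviation. The sketch below integrates a classical swing equation with forward Euler using illustrative parameters; it is not the paper's unbalanced-network formulation, only the electromechanical core it builds on:

```python
import math

def simulate_swing(Pm=0.8, Pmax=2.0, H=3.0, D=5.0,
                   delta0=0.1, w0=0.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of a classical single-machine swing equation.
    delta: rotor angle (rad); w: speed deviation (rad/s); H: inertia constant (s);
    Pm, Pmax: mechanical and peak electrical power (pu); D lumps damping (1/s).
    All values are illustrative, not from the paper."""
    ws = 2 * math.pi * 60.0
    delta, w = delta0, w0
    for _ in range(steps):
        acc = ws / (2 * H) * (Pm - Pmax * math.sin(delta)) - D * w
        delta += dt * w
        w += dt * acc
    return delta, w

delta, w = simulate_swing()   # settles at the equilibrium asin(Pm / Pmax)
```

With damping, the rotor angle converges to the equilibrium where electrical output matches mechanical input; the paper's contribution is coupling such a machine model to an unbalanced three-phase network solver rather than the balanced single-bus picture sketched here.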
Fernandez-Lozano, Carlos; Cuiñas, Rubén F; Seoane, José A; Fernández-Blanco, Enrique; Dorado, Julian; Munteanu, Cristian R
2015-11-01
Signaling proteins are an important topic in drug development due to the need for fast, accurate, and cheap methods to evaluate new molecular targets involved in specific diseases. The complexity of the protein structure hinders the direct association of the signaling activity with the molecular structure. Therefore, the proposed solution involves the use of protein star graphs for encoding the peptide sequence information into specific topological indices calculated with the S2SNet tool. The Quantitative Structure-Activity Relationship classification model obtained with Machine Learning techniques is able to predict new signaling peptides. The best classification model is the first signaling prediction model, which is based on eleven descriptors and was obtained using the Support Vector Machines-Recursive Feature Elimination (SVM-RFE) technique with the Laplacian kernel (RFE-LAP), with an AUROC of 0.961. The prediction performance of the model was assessed by testing it on a set of 3114 proteins of unknown function from the PDB database. Important signaling pathways are presented for three UniprotIDs (34 PDBs) with a signaling prediction greater than 98.0%. PMID:26297890
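SVM-RFE iteratively drops the feature with the smallest weight magnitude in the current model. The sketch below keeps the recursive-elimination loop but, to stay dependency-free, substitutes a class-centroid difference for the trained SVM's coefficient vector:

```python
def rfe_ranking(X, y):
    """Recursive feature elimination: repeatedly retrain, then drop the feature
    with the smallest weight magnitude. Here the 'weight' is the per-feature
    class-centroid difference, standing in for a linear SVM's coefficients."""
    feats = list(range(len(X[0])))
    order = []
    while len(feats) > 1:
        pos = [x for x, l in zip(X, y) if l == 1]
        neg = [x for x, l in zip(X, y) if l == 0]
        weight = {f: abs(sum(p[f] for p in pos) / len(pos) -
                         sum(n[f] for n in neg) / len(neg)) for f in feats}
        worst = min(feats, key=weight.get)
        order.append(worst)
        feats.remove(worst)
    return order + feats   # least important first, most important last

# toy descriptors: feature 0 fully separates the classes, feature 1 is noise,
# feature 2 is partially informative
X = [[1.0, 0.5, 0.9], [1.0, 0.4, 0.8], [0.0, 0.6, 0.1], [0.0, 0.5, 0.2]]
y = [1, 1, 0, 0]
ranking = rfe_ranking(X, y)
```

Reading the ranking from the end gives the descriptors to keep; in the study this loop, run with a real SVM and a Laplacian kernel, reduced the topological indices to the eleven descriptors of the final model.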
Zhang, Daqing; Xiao, Jianfeng; Zhou, Nannan; Zheng, Mingyue; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian
2015-01-01
The blood-brain barrier (BBB) is a highly complex physical barrier determining what substances are allowed to enter the brain. The support vector machine (SVM) is a kernel-based machine learning method that is widely used in QSAR studies. For a successful SVM model, the kernel parameters and the feature subset selection are the most important factors affecting prediction accuracy. In most studies they are treated as two independent problems, but it has been proven that they can affect each other. We designed and implemented a genetic algorithm (GA) to optimize kernel parameters and feature subset selection for SVM regression and applied it to BBB penetration prediction. The results show that our GA/SVM model is more accurate than other currently available log BB models. Therefore, optimizing both SVM parameters and the feature subset simultaneously with a genetic algorithm is a better approach than methods that treat the two problems separately. Analysis of our log BB model suggests that the carboxylic acid group, polar surface area (PSA)/hydrogen-bonding ability, lipophilicity, and molecular charge play an important role in BBB penetration. Among those properties, lipophilicity enhances BBB penetration while all the others are negatively correlated with it. PMID:26504797
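The GA search can be sketched with a binary chromosome; for brevity the chromosome below encodes only the feature mask (a kernel-parameter gene could be appended to it in the same way), and a leave-one-out nearest-neighbour score stands in for the SVM cross-validation fitness:

```python
import random

def loo_knn_acc(X, y, mask):
    """Leave-one-out 1-nearest-neighbour accuracy on the masked features."""
    if not any(mask):
        return 0.0
    def d(u, v):
        return sum((a - b) ** 2 for a, b, m in zip(u, v, mask) if m)
    hits = 0
    for i in range(len(X)):
        j = min((k for k in range(len(X)) if k != i), key=lambda k: d(X[i], X[k]))
        hits += y[j] == y[i]
    return hits / len(X)

def ga_select(X, y, pop=16, gens=20, seed=1):
    """Tiny GA: truncation selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    nf = len(X[0])
    P = [[rng.randint(0, 1) for _ in range(nf)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda m: -loo_knn_acc(X, y, m))
        P = P[:pop // 2]
        while len(P) < pop:
            a, b = rng.sample(P[:pop // 2], 2)
            cut = rng.randrange(1, nf)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                     # mutation
                i = rng.randrange(nf)
                child = child[:i] + [1 - child[i]] + child[i + 1:]
            P.append(child)
    return max(P, key=lambda m: loo_knn_acc(X, y, m))

# toy data: feature 0 separates the classes, feature 1 actively misleads 1-NN
X = [[1.0, 0.0], [1.1, 5.0], [0.9, 10.0], [0.0, 0.2], [0.1, 5.2], [-0.1, 10.2]]
y = [1, 1, 1, 0, 0, 0]
best = ga_select(X, y)
```

The GA should converge on the mask that keeps only the informative feature, since including the misleading one collapses the fitness; the same chromosome-plus-fitness machinery extends directly to a joint search over features and kernel parameters as in the paper.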
Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment
NASA Technical Reports Server (NTRS)
Rebbapragada, Umaa; Oommen, Thomas
2011-01-01
On January 12th, 2010, a catastrophic 7.0M earthquake devastated the country of Haiti. In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely-sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.
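A simple stand-in for volunteer-reliability assessment is agreement with the per-item majority vote, far simpler than the multiple-imperfect-experts models cited, but it shows the shape of the computation; the vote data below are hypothetical:

```python
from collections import Counter

def volunteer_reliability(labels):
    """labels: volunteer -> {item: label}. Reliability is each volunteer's
    agreement rate with the per-item majority vote across all volunteers."""
    items = {i for v in labels.values() for i in v}
    majority = {i: Counter(v[i] for v in labels.values() if i in v)
                   .most_common(1)[0][0]
                for i in items}
    return {name: sum(v[i] == majority[i] for i in v) / len(v)
            for name, v in labels.items()}

# hypothetical damage labels from three volunteers over four image tiles
votes = {
    "vol_a": {1: "damaged", 2: "intact", 3: "damaged", 4: "intact"},
    "vol_b": {1: "damaged", 2: "intact", 3: "damaged", 4: "intact"},
    "vol_c": {1: "intact",  2: "intact", 3: "intact",  4: "intact"},
}
rel = volunteer_reliability(votes)
```

In the proposed framework such reliability estimates would weight each volunteer's labels when assembling training data, and an active learner would route ambiguous tiles (where the weighted vote is closest to a tie) back to the most reliable volunteers.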