Abstract quantum computing machines and quantum computational logics
NASA Astrophysics Data System (ADS)
Chiara, Maria Luisa Dalla; Giuntini, Roberto; Sergioli, Giuseppe; Leporini, Roberto
2016-06-01
Classical and quantum parallelism are deeply different, although it is sometimes claimed that quantum Turing machines are nothing but special examples of classical probabilistic machines. We introduce the concepts of deterministic state machine, classical probabilistic state machine and quantum state machine. On this basis, we discuss the question: To what extent can quantum state machines be simulated by classical probabilistic state machines? Each state machine is devoted to a single task determined by its program. Real computers, however, behave differently, being able to solve different kinds of problems. This capacity can be modeled, in the quantum case, by the mathematical notion of abstract quantum computing machine, whose different programs determine different quantum state machines. The computations of abstract quantum computing machines can be linguistically described by the formulas of a particular form of quantum logic, termed quantum computational logic.
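To make the contrast concrete, here is a minimal numerical sketch (mine, not the authors' formalism): a classical probabilistic state machine updates a probability vector with a stochastic matrix, while a quantum state machine updates an amplitude vector with a unitary matrix, so amplitudes can interfere in a way probabilities cannot.

```python
import numpy as np

# "Fair coin" stochastic step: from either state, move to each state with p = 1/2.
S = np.array([[0.5, 0.5],
              [0.5, 0.5]])

# Hadamard unitary: the quantum analogue of a fair coin flip.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

p = np.array([1.0, 0.0])        # classical: start in state 0 with certainty
psi = np.array([1.0, 0.0])      # quantum: start in basis state |0>

# Two classical steps: still the uniform distribution (no way back to certainty).
print(S @ (S @ p))              # [0.5 0.5]

# Two quantum steps: amplitudes interfere and the machine returns to |0>.
psi2 = H @ (H @ psi)
print(np.abs(psi2) ** 2)        # [1. 0.] up to rounding; no classical analogue
```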
Clark, Edward B; Hickinbotham, Simon J; Stepney, Susan
2017-05-01
We present a novel stringmol-based artificial chemistry system modelled on the universal constructor architecture (UCA) first explored by von Neumann. In a UCA, machines interact with an abstract description of themselves to replicate by copying the abstract description and constructing the machines that the abstract description encodes. DNA-based replication follows this architecture, with DNA being the abstract description, the polymerase being the copier, and the ribosome being the principal machine in expressing what is encoded on the DNA. This architecture is semantically closed, as the machine that defines what the abstract description means is itself encoded on that abstract description. We present a series of experiments with the stringmol UCA that show the evolution of the meaning of genomic material, allowing the concept of semantic closure and transitions between semantically closed states to be elucidated in the light of concrete examples. We present results where, for the first time in an in silico system, the genomic material, copier and constructor of a UCA evolve simultaneously, giving rise to viable offspring. © 2017 The Author(s).
Automated Verification of Specifications with Typestates and Access Permissions
NASA Technical Reports Server (NTRS)
Siminiceanu, Radu I.; Catano, Nestor
2011-01-01
We propose an approach to formally verify Plural specifications, based on access permissions and typestates, by model checking automatically generated abstract state machines. Our exhaustive approach captures all possible behaviors of abstract concurrent programs implementing the specification. We describe the formal methodology employed by our technique and provide an example as proof of concept for the state-machine construction rules. The implementation of a fully automated algorithm to generate and verify models, currently underway, will provide model checking support for the Plural tool, which at present supports only program verification via data flow analysis (DFA).
Goldstein, Benjamin A.; Navar, Ann Marie; Carter, Rickey E.
2017-01-01
Risk prediction plays an important role in clinical cardiology research. Traditionally, most risk models have been based on regression models. While useful and robust, these statistical methods are limited to using a small number of predictors which operate in the same way on everyone, and uniformly throughout their range. The purpose of this review is to illustrate the use of machine-learning methods for development of risk prediction models. Typically presented as black box approaches, most machine-learning methods are aimed at solving particular challenges that arise in data analysis that are not well addressed by typical regression approaches. To illustrate these challenges, as well as how different methods can address them, we consider trying to predict mortality after diagnosis of acute myocardial infarction. We use data derived from our institution's electronic health record and abstract data on 13 regularly measured laboratory markers. We walk through different challenges that arise in modelling these data and then introduce different machine-learning approaches. Finally, we discuss general issues in the application of machine-learning methods including tuning parameters, loss functions, variable importance, and missing data. Overall, this review serves as an introduction for those working on risk modelling to approach the diffuse field of machine learning. PMID:27436868
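As a concrete illustration of the regression-versus-machine-learning contrast the review draws, here is a minimal scikit-learn sketch on synthetic stand-in data (the marker values, outcome rule, and sample size are invented, not the study's EHR data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for the EHR-derived data described above:
# rows are patients, columns are laboratory markers, y is mortality.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 13))               # 13 lab markers (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=1000) > 1.5).astype(int)

# Regression baseline: linear effects only, the same slope for everyone.
logit = LogisticRegression(max_iter=1000)

# Machine-learning alternative: a random forest can capture non-linearities
# and interactions without specifying them in advance.
forest = RandomForestClassifier(n_estimators=200, random_state=0)

for name, model in [("logistic", logit), ("forest", forest)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.3f}")
```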
Research on computer systems benchmarking
NASA Technical Reports Server (NTRS)
Smith, Alan Jay (Principal Investigator)
1996-01-01
This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned benchmark performance prediction: a new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance, assessing the impact of optimization within the abstract-machine-based methodology for CPU performance characterization. Benchmark programs were analyzed in another paper: a machine-independent model of program execution was developed to characterize both machine performance and program execution, and by merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are summarized in more detail in this report, as well as those smaller in magnitude supported by this grant.
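The merge step described above reduces to a one-line model; the following sketch uses invented operation names, per-operation times, and counts to show how a machine characterization and a program characterization combine into a run-time estimate.

```python
import numpy as np

# Sketch of the merge step (values are invented): a machine is characterized
# by the time t_i it takes per abstract-machine operation, and a program by
# the count n_i of each operation it performs; the estimated run time is the
# dot product of the two characterizations.
op_names = ["fp_add", "fp_mul", "int_op", "mem_ref", "branch"]
machine_times = np.array([12e-9, 15e-9, 3e-9, 25e-9, 5e-9])   # seconds/op
program_counts = np.array([2e8, 1e8, 5e8, 3e8, 1e8])          # ops executed

estimated_seconds = machine_times @ program_counts
print(f"predicted run time: {estimated_seconds:.2f} s")
```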
Automatic Review of Abstract State Machines by Meta Property Verification
NASA Technical Reports Server (NTRS)
Arcaini, Paolo; Gargantini, Angelo; Riccobene, Elvinia
2010-01-01
A model review is a validation technique aimed at determining whether a model is of sufficient quality; it allows defects to be identified early in system development, reducing the cost of fixing them. In this paper we propose a technique to perform automatic review of Abstract State Machine (ASM) formal specifications. We first identify a family of typical vulnerabilities and defects a developer can introduce during ASM modeling, and we express such faults as violations of meta-properties that guarantee certain quality attributes of the specification. These meta-properties are then mapped to temporal logic formulas and model checked for their violation. As a proof of concept, we also report the results of applying this ASM review process to several specifications.
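As an illustration of the meta-property idea (my example, not necessarily one of the paper's meta-properties), a rule-firability check can be phrased in CTL as a reachability formula:

```latex
% Illustrative only: "every guarded ASM rule can fire in some reachable
% state", written for a rule r with guard g_r. The model checker reports a
% violation exactly when the dual formula holds, i.e., the rule is dead code.
\[
  \mathbf{EF}\, g_r
  \qquad\text{violated iff}\qquad
  \mathbf{AG}\, \neg g_r
\]
```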
A rule-based approach to model checking of UML state machines
NASA Astrophysics Data System (ADS)
Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz
2016-12-01
In the paper a new approach to formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases assurance that the implemented system meets the user-defined requirements.
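A minimal sketch of what exhaustive checking of a rule-based logical model means, assuming a toy guarded-rule encoding and an invented requirement (the actual work uses nuXmv and UML state machine semantics):

```python
from collections import deque

# Toy control specification as guarded rules over a finite state, plus a
# breadth-first reachability check of a "bad" condition: the essence of what
# a model checker like nuXmv automates (with far better algorithms).
rules = [
    # (guard, action) over states represented as (mode, tank_full) tuples
    (lambda s: s[0] == "idle" and not s[1], lambda s: ("filling", s[1])),
    (lambda s: s[0] == "filling",           lambda s: ("idle", True)),
    (lambda s: s[0] == "idle" and s[1],     lambda s: ("draining", s[1])),
    (lambda s: s[0] == "draining",          lambda s: ("idle", False)),
]

def violates(state):
    # Invented user-defined requirement: never drain while the tank is empty.
    return state[0] == "draining" and not state[1]

seen, frontier = set(), deque([("idle", False)])
while frontier:
    state = frontier.popleft()
    if state in seen:
        continue
    seen.add(state)
    assert not violates(state), f"requirement violated in {state}"
    frontier.extend(act(state) for guard, act in rules if guard(state))

print(f"verified {len(seen)} reachable states, no violation")
```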
HiVy automated translation of stateflow designs for model checking verification
NASA Technical Reports Server (NTRS)
Pingree, Paula
2003-01-01
The HiVy tool set enables model checking of finite state machine designs. This is achieved by translating state-chart specifications into the input language of the Spin model checker. An abstract syntax of hierarchical sequential automata (HSA) is provided as an intermediate format for the tool set.
1983-10-01
Keywords: naval ship structures; composites; glass reinforced plastics; filament winding; minesweepers. …associated with this method of manufacturing a ship hull out of Glass Reinforced Plastic (GRP). Winding machine and mandrel concepts were reviewed, as well as the structural requirements and possible materials. A design of a 1/5th scale (30 ft) model…
Report on the formal specification and partial verification of the VIPER microprocessor
NASA Technical Reports Server (NTRS)
Brock, Bishop; Hunt, Warren A., Jr.
1991-01-01
The VIPER microprocessor chip is partitioned into four levels of abstraction. At the highest level, VIPER is described with decreasingly abstract sets of functions in LCF-LSM. At the lowest level are the gate-level models in proprietary CAD languages. The block-level and gate-level specifications are also given in the ELLA simulation language. Among VIPER's deficiencies are the lack of any notion of external events in the top-level specification and the impossibility of using the top-level specifications to prove abstract properties of programs running on VIPER computers. There is no complete proof that the gate-level specifications implement the top-level specifications. Cohn's proof that the major-state machine correctly implements the top-level specifications has no formal connection with any of the other proof attempts. None of the latter address resetting the machine, memory timeout, forced error, or single step modes.
2010-02-01
…multi-agent reputation management. State abstraction is a technique used to allow machine learning technologies to cope with problems that have large… a state abstraction process to enable reinforcement learning in domains with large state spaces. State abstraction is vital to machine learning… across a collective of independent platforms. These individual elements, often referred to as agents in the machine learning community, should exhibit both…
Un-Building Blocks: A Model of Reverse Engineering and Applicable Heuristics
2015-12-01
Conclusions: "The machine does not isolate man from the great problems of nature but plunges him more deeply into them." (Antoine de Saint-Exupéry, Wind, Sand and Stars.) Reverse engineering is the problem-solving activity that ensues when one takes a…
Design Methodology for Automated Construction Machines
1987-12-11
Authors include an Assistant Professor of Civil Engineering and Graduate Research Assistants Laura A. Demsetz, David H. Levy, and Bruce Schena (December 11, 1987). …are discussed along with the design of a pair of machines which automate framework installation. Preliminary analysis and testing indicate that these…
A Unified Access Model for Interconnecting Heterogeneous Wireless Networks
2015-05-01
Keywords: Software Defined Networking, OpenFlow, WiFi, LTE. Contents include virtual machine configurations with WiFi and LTE, results and discussion, and a summary. …WiFi and long-term evolution (LTE), and created a communication pathway between them via a central controller node. Our simulation serves as a…
Time of Flight Estimation in the Presence of Outliers: A Biosonar-Inspired Machine Learning Approach
2013-08-29
Keywords: installations, biosonar, remote sensing, sonar resolution, sonar accuracy, sonar energy consumption. Authors: Nathan Intrator, Leon N. Cooper (Brown University). Abstract: When the Signal-to-Noise Ratio (SNR) falls below a certain…
Rosen's (M,R) system as an X-machine.
Palmer, Michael L; Williams, Richard A; Gatherer, Derek
2016-11-07
Robert Rosen's (M,R) system is an abstract biological network architecture that is allegedly both irreducible to sub-models of its component states and non-computable on a Turing machine. (M,R) stands as an obstacle to both reductionist and mechanistic presentations of systems biology, principally due to its self-referential structure. If (M,R) has the properties claimed for it, computational systems biology will not be possible, or at best will be a science of approximate simulations rather than accurate models. Several attempts have been made, at both empirical and theoretical levels, to disprove this assertion by instantiating (M,R) in software architectures. So far, these efforts have been inconclusive. In this paper, we attempt to demonstrate why - by showing how both finite state machine and stream X-machine formal architectures fail to capture the self-referential requirements of (M,R). We then show that a solution may be found in communicating X-machines, which remove self-reference using parallel computation, and then synthesise such machine architectures with object-orientation to create a formal basis for future software instantiations of (M,R) systems. Copyright © 2016 Elsevier Ltd. All rights reserved.
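For readers unfamiliar with the formalism, here is a minimal stream X-machine sketch in Python (an illustrative toy, not the paper's (M,R) encoding): a finite control structure whose transitions apply processing functions that consume an input, update a memory value, and emit an output.

```python
# Processing functions: (memory, input) -> (output, new memory).
def add(mem, x):
    return f"sum={mem + x}", mem + x

def reset(mem, x):
    return "reset", 0

# Control state -> list of (input predicate, processing function, next state).
transitions = {
    "counting": [(lambda x: isinstance(x, int), add, "counting"),
                 (lambda x: x == "R",           reset, "counting")],
}

def run(stream, state="counting", mem=0):
    out = []
    for x in stream:
        for accepts, phi, nxt in transitions[state]:
            if accepts(x):
                o, mem = phi(mem, x)
                out.append(o)
                state = nxt
                break
    return out, mem

print(run([1, 2, 3, "R", 5]))   # (['sum=1', 'sum=3', 'sum=6', 'reset', 'sum=5'], 5)
```

The communicating X-machines the authors propose compose several such machines in parallel, exchanging messages, which is what lets the self-reference of (M,R) be unwound.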
Fingelkurts, Andrew A; Fingelkurts, Alexander A; Neves, Carlos F H
2012-01-05
Instead of using low-level neurophysiology mimicking and exploratory programming methods commonly used in the machine consciousness field, the hierarchical operational architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual-theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made "machine" consciousness and "artificial" thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists to abstract and formalize principles of hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought. Copyright © 2010 Elsevier B.V. All rights reserved.
Methods for Effective Virtual Screening and Scaffold-Hopping in Chemical Compounds
2007-04-04
…Opterons with 4 GB of memory. We used the descriptor spaces GF, ECZ3, and ErG (described in Section 4) for evaluating the methods introduced in…
Visualization of Learning Scenarios with UML4LD
ERIC Educational Resources Information Center
Laforcade, Pierre
2007-01-01
Present Educational Modelling Languages are used to formally specify abstract learning scenarios in a machine-interpretable format. Current tooling does not provide teachers/designers with graphical facilities to help them reuse existing scenarios. They need human-readable representations. This paper discusses the UML4LD experimental…
USSR Space Life Sciences Digest, issue 25
NASA Technical Reports Server (NTRS)
Hooke, Lydia Razran (Editor); Teeter, Ronald (Editor); Garshnek, Victoria (Editor); Rowe, Joseph (Editor)
1990-01-01
This is the twenty-fifth issue of NASA's Space Life Sciences Digest. It contains abstracts of 42 journal papers or book chapters published in Russian and of 3 Soviet monographs. Selected abstracts are illustrated with figures and tables from the original. The abstracts in this issue have been identified as relevant to 26 areas of space biology and medicine. These areas include: adaptation, body fluids, botany, cardiovascular and respiratory systems, developmental biology, endocrinology, enzymology, equipment and instrumentation, exobiology, gravitational biology, habitability and environmental effects, human performance, immunology, life support systems, man-machine systems, mathematical modeling, metabolism, microbiology, musculoskeletal system, neurophysiology, nutrition, operational medicine, psychology, radiobiology, reproductive system, and space biology and medicine.
The sixth generation robot in space
NASA Technical Reports Server (NTRS)
Butcher, A.; Das, A.; Reddy, Y. V.; Singh, H.
1990-01-01
The knowledge based simulator developed in the artificial intelligence laboratory has become a working test bed for experimenting with intelligent reasoning architectures. Recently, small experiments have been done with this simulator with the aim of simulating robot behavior that avoids colliding paths. An automatic extension of such experiments to intelligently planning robots in space demands advanced reasoning architectures. One such architecture for general purpose problem solving is explored. The robot, seen as a knowledge-base machine, proceeds via a predesigned abstraction mechanism for problem understanding and response generation. The three phases in one such abstraction scheme are: abstraction for representation, abstraction for evaluation, and abstraction for resolution. Such abstractions require multimodality. This multimodality requires the use of intensional variables to deal with beliefs in the system. Abstraction mechanisms help in synthesizing possible propagating lattices for such beliefs. The machine controller enters into a sixth generation paradigm.
STELAR: An experiment in the electronic distribution of astronomical literature
NASA Technical Reports Server (NTRS)
Warnock, A.; Vansteenburg, M. E.; Brotzman, L. E.; Gass, J.; Kovalsky, D.
1992-01-01
STELAR (Study of Electronic Literature for Astronomical Research) is a Goddard-based project designed to test methods of delivering technical literature in machine readable form. To that end, we have scanned a five year span of the ApJ, ApJ Supp, AJ and PASP, and have obtained abstracts for eight leading academic journals from NASA/STI CASI, which also makes these abstracts available through the NASA RECON system. We have also obtained machine readable versions of some journal volumes from the publishers, although in many instances, the final typeset versions are no longer available. The fundamental data object for the STELAR database is the article, a collection of items associated with a scientific paper - abstract, scanned pages (in a variety of formats), figures, OCR extractions, forward and backward references, errata and versions of the paper in various formats (e.g., TEX, SGML, PostScript, DVI). Articles are uniquely referenced in the database by journal name, volume number and page number. The selection and delivery of articles is accomplished through the WAIS (Wide Area Information Server) client/server model, requiring only an Internet connection. Modest modifications to the server code have made it capable of delivering the multiple data types required by STELAR. WAIS is a platform independent and fully open multi-disciplinary delivery system, originally developed by Thinking Machines Corp. and made available free of charge. It is based on the Z39.50 standard communications protocol. WAIS servers run under both UNIX and VMS. WAIS clients run on a wide variety of machines, from UNIX-based X Windows systems to MS-DOS and Macintosh microcomputers. The WAIS system includes full-text indexing and searching of documents, a network interface and easy access to a variety of document viewers. ASCII versions of the CASI abstracts have been formatted for display and the full text of the abstracts has been indexed. The entire WAIS database of abstracts is now available for use by the astronomical community. Enhancements of the search and retrieval system are under investigation to include specialized searches (by reference, author or keyword, as opposed to full-text searches), improved handling of word stems, improvements in relevancy criteria and other retrieval techniques, such as factor spaces. The STELAR project has been assisted by the full cooperation of the AAS, the ASP, the publishers of the academic journals, librarians from GSFC, NRAO and STScI, the Library of Congress, and the University of North Carolina at Chapel Hill.
ERIC Educational Resources Information Center
Byrne, Jerry R.
1975-01-01
Investigated the relative merits of searching on titles, subject headings, abstracts, free-language terms, and combinations of these elements. The combination of titles and abstracts came the closest to 100 percent retrieval. (Author/PF)
Abstracts of AF Materials Laboratory Reports
1975-09-01
Entries list report number, title, author(s), contract number, and contractor; representative entries include AFML-TR-73-307, "Improved Automated Tape Laying Machine" (M. Poullos, W. J. Murray, D. L. …); "Automation of Coating Processes for Gas Turbine Blades and Vanes"; and "A Study of the Stress-Strain Behavior of Graphite…"
The scheme machine: A case study in progress in design derivation at system levels
NASA Technical Reports Server (NTRS)
Johnson, Steven D.
1995-01-01
The Scheme Machine is one of several design projects of the Digital Design Derivation group at Indiana University. It differs from the other projects in its focus on issues of system design and its connection to surrounding research in programming language semantics, compiler construction, and programming methodology underway at Indiana and elsewhere. The genesis of the project dates to the early 1980s, when digital design derivation research branched from the surrounding research effort in programming languages. Both branches have continued to develop in parallel, with this particular project serving as a bridge. However, by 1990 there remained little real interaction between the branches and recently we have undertaken to reintegrate them. On the software side, researchers have refined a mathematically rigorous (but not mechanized) treatment starting with the fully abstract semantic definition of Scheme and resulting in an efficient implementation consisting of a compiler and virtual machine model, the latter typically realized with a general purpose microprocessor. The derivation includes a number of sophisticated factorizations and representations and is also a deep example of the underlying engineering methodology. The hardware research has created a mechanized algebra supporting the tedious and massive transformations often seen at lower levels of design. This work has progressed to the point that large scale devices, such as processors, can be derived from first-order finite state machine specifications. This is roughly where the language oriented research stops; thus, together, the two efforts establish a thread from the highest levels of abstract specification to detailed digital implementation. The Scheme Machine project challenges hardware derivation research in several ways, although the individual components of the system are of a similar scale to those we have worked with before. The machine has a custom dual-ported memory to support garbage collection. It consists of four tightly coupled processes--processor, collector, allocator, memory--with a very non-trivial synchronization relationship. Finally, there are deep issues of representation for the run-time objects of a symbolic processing language. The research centers on verification through integrated formal reasoning systems, but is also involved with modeling and prototyping environments. Since the derivation algebra is based on an executable modeling language, there is opportunity to incorporate design animation in the design process. We are looking for ways to move smoothly and incrementally from executable specifications into hardware realization. For example, we can run the garbage collector specification, a Scheme program, directly against the physical memory prototype, and similarly, the instruction processor model against the heap implementation.
Workshop on Algorithms for Time-Series Analysis
NASA Astrophysics Data System (ADS)
Protopapas, Pavlos
2012-04-01
abstract-type="normal">SummaryThis Workshop covered the four major subjects listed below in two 90-minute sessions. Each talk or tutorial allowed questions, and concluded with a discussion. Classification: Automatic classification using machine-learning methods is becoming a standard in surveys that generate large datasets. Ashish Mahabal (Caltech) reviewed various methods, and presented examples of several applications. Time-Series Modelling: Suzanne Aigrain (Oxford University) discussed autoregressive models and multivariate approaches such as Gaussian Processes. Meta-classification/mixture of expert models: Karim Pichara (Pontificia Universidad Católica, Chile) described the substantial promise which machine-learning classification methods are now showing in automatic classification, and discussed how the various methods can be combined together. Event Detection: Pavlos Protopapas (Harvard) addressed methods of fast identification of events with low signal-to-noise ratios, enlarging on the characterization and statistical issues of low signal-to-noise ratios and rare events.
Abstract Machines for Polymorphous Computing
2007-12-01
…models and LLCs have been developed for Raw, MONARCH [18][19], TRIPS [20][21], and Smart Memories [22][23]. These research projects were conducted… used here. In our approach on Raw, two key concepts are used to fully leverage the Raw architecture [34]. First, the tile grid is viewed as a…
Model A: High-Temperature Tribometer
1992-02-01
Keywords: tribometer, high temperature, friction, wear. A high-temperature tribometer has been specifically designed and fabricated to accurately measure, in real time, friction and wear characteristics of materials at temperatures… A spring-loaded collet grips the pin; in previous machines, Inconel 625 collets and sleeves with 45° contact angles were used without collet…
IEEE 1982. Proceedings of the international conference on cybernetics and society
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1982-01-01
The following topics were dealt with: knowledge-based systems; risk analysis; man-machine interactions; human information processing; metaphor, analogy and problem-solving; manual control modelling; transportation systems; simulation; adaptive and learning systems; biocybernetics; cybernetics; mathematical programming; robotics; decision support systems; analysis, design and validation of models; computer vision; systems science; energy systems; environmental modelling and policy; pattern recognition; nuclear warfare; technological forecasting; artificial intelligence; the Turin shroud; optimisation; workloads. Abstracts of individual papers can be found under the relevant classification codes in this or future issues.
1980-03-01
…ordinates. Apparatus, models, wings: the three semispan wing models were each machined from a solid billet of 17-4PH stainless steel by a… Reported results include force data, pressure data, and fuselage…
Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Secchi, Simone; Tumeo, Antonino; Villa, Oreste
Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.
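As a flavor of what a contention-aware network model must capture, here is a deliberately simple queueing-style sketch (invented parameters, not the paper's validated model): effective memory latency grows sharply as offered load approaches a module's service capacity.

```python
# Toy contention model: treat each memory module as a queue and inflate the
# base network latency by an M/M/1-style waiting term as offered load
# approaches the module's bandwidth.

def effective_latency(base_ns, service_ns, requests_per_sec):
    """Mean memory latency under contention for one module."""
    utilization = requests_per_sec * service_ns * 1e-9
    if utilization >= 1.0:
        return float("inf")                      # module saturated
    waiting_ns = service_ns * utilization / (1.0 - utilization)
    return base_ns + service_ns + waiting_ns

for load in (1e6, 1e8, 4e8, 4.9e8):
    print(f"{load:.1e} req/s -> {effective_latency(600, 2.0, load):.1f} ns")
```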
Sing, David C; Metz, Lionel N; Dudli, Stefan
2017-06-01
Retrospective review. To identify the top 100 spine research topics. Recent advances in "machine learning," or computers learning without explicit instructions, have yielded broad technological advances. Topic modeling algorithms can be applied to large volumes of text to discover quantifiable themes and trends. Abstracts were extracted from the National Library of Medicine PubMed database from five prominent peer-reviewed spine journals (European Spine Journal [ESJ], The Spine Journal [SpineJ], Spine, Journal of Spinal Disorders and Techniques [JSDT], Journal of Neurosurgery: Spine [JNS]). Each abstract was entered into a latent Dirichlet allocation model specified to discover 100 topics, resulting in each abstract being assigned a probability of belonging in a topic. Topics were named using the five most frequently appearing terms within that topic. Significance of increasing ("hot") or decreasing ("cold") topic popularity over time was evaluated with simple linear regression. From 1978 to 2015, 25,805 spine-related research articles were extracted and classified into 100 topics. Top two most published topics included "clinical, surgeons, guidelines, information, care" (n = 496 articles) and "pain, back, low, treatment, chronic" (424). Top two hot trends included "disc, cervical, replacement, level, arthroplasty" (+0.05%/yr, P < 0.001), and "minimally, invasive, approach, technique" (+0.05%/yr, P < 0.001). By journal, the most published topics were ESJ-"operative, surgery, postoperative, underwent, preoperative"; SpineJ-"clinical, surgeons, guidelines, information, care"; Spine-"pain, back, low, treatment, chronic"; JNS-"tumor, lesions, rare, present, diagnosis"; JSDT-"cervical, anterior, plate, fusion, ACDF." Topics discovered through latent Dirichlet allocation modeling represent unbiased meaningful themes relevant to spine care. Topic dynamics can provide historical context and direction for future research for aspiring investigators and trainees interested in spine careers. Please explore https://singdc.shinyapps.io/spinetopics.
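A minimal sketch of the pipeline described above, using scikit-learn on a few invented stand-in abstracts (the study fit 100 topics to 25,805 PubMed abstracts):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "cervical disc replacement arthroplasty outcomes",
    "low back pain chronic treatment outcomes",
    "minimally invasive fusion technique approach",
    "cervical anterior plate fusion ACDF outcomes",
]

counts = CountVectorizer(stop_words="english").fit(abstracts)
X = counts.transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Name each topic by its most probable terms, as the study does (top five).
terms = counts.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")

# Each abstract gets a probability of belonging to each topic:
print(lda.transform(X).round(2))
```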
Machine characterization based on an abstract high-level language machine
NASA Technical Reports Server (NTRS)
Saavedra-Barrera, Rafael H.; Smith, Alan Jay; Miya, Eugene
1989-01-01
Measurements are presented for a large number of machines ranging from small workstations to supercomputers. The authors combine these measurements into groups of parameters which relate to specific aspects of the machine implementation, and use these groups to provide overall machine characterizations. The authors also define the concept of pershapes, which represent the level of performance of a machine for different types of computation. A metric based on pershapes is introduced that provides a quantitative way of measuring how similar two machines are in terms of their performance distributions. The metric is related to the extent to which pairs of machines have varying relative performance levels depending on which benchmark is used.
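A toy version of the pershape idea (the numbers and the particular distance are invented for illustration): normalize each machine's performance vector into a profile, then compare profiles, so two machines with the same relative strengths come out as similar even if one is uniformly faster.

```python
import numpy as np

# Performance of each machine on different types of computation.
perf = {
    "workstation":    np.array([1.0, 0.8, 1.2, 0.5]),    # fp, int, mem, vector
    "supercomputer":  np.array([20.0, 16.0, 24.0, 10.0]),
    "vector_machine": np.array([5.0, 2.0, 4.0, 40.0]),
}

def shape(v):
    return v / v.sum()                 # keep only the performance *profile*

def distance(a, b):
    return np.abs(shape(a) - shape(b)).sum()

print(distance(perf["workstation"], perf["supercomputer"]))   # ~0: same shape
print(distance(perf["workstation"], perf["vector_machine"]))  # large: different
```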
A Technique for Machine-Aided Indexing
ERIC Educational Resources Information Center
Klingbiel, Paul H.
1973-01-01
The technique for machine-aided indexing developed at the Defense Documentation Center (DDC) is illustrated on a randomly chosen abstract. Additional text is provided in coded form so that the reader can more fully explore this technique. (2 references) (Author)
Simulation of an array-based neural net model
NASA Technical Reports Server (NTRS)
Barnden, John A.
1987-01-01
Research in cognitive science suggests that much of cognition involves the rapid manipulation of complex data structures. However, it is very unclear how this could be realized in neural networks or connectionist systems. A core question is: how could the interconnectivity of items in an abstract-level data structure be neurally encoded? The answer appeals mainly to positional relationships between activity patterns within neural arrays, rather than directly to neural connections in the traditional way. The new method was initially devised to account for abstract symbolic data structures, but it also supports cognitively useful spatial analogue, image-like representations. As the neural model is based on massive, uniform, parallel computations over 2D arrays, the Massively Parallel Processor (MPP) is a convenient tool for simulation work, although there are complications in using the machine to the fullest advantage. An MPP Pascal simulation program for a small pilot version of the model is running.
Machine Visual Motion Detection Modeled on Vertebrate Retina
1988-01-01
Figure (mechanism of direction selectivity): (a) the use of persistent lateral inhibition to block conduction in the null direction; (b) the use of… Bipolar (B) elements compare the inputs from local receptor and horizontal elements, passing on the positive value of the difference.
USSR Space Life Sciences Digest, Issue 18
NASA Technical Reports Server (NTRS)
Hooke, Lydia Razran (Editor); Donaldson, P. Lynn (Editor); Teeter, Ronald (Editor); Garshnek, Victoria (Editor); Rowe, Joseph (Editor)
1988-01-01
This is the 18th issue of NASA's USSR Life Sciences Digest. It contains abstracts of 50 papers published in Russian language periodicals or presented at conferences and of 8 new Soviet monographs. Selected abstracts are illustrated with figures and tables from the original. A review of a recent Aviation Medicine Handbook is also included. The abstracts in this issue have been identified as relevant to 37 areas of space biology and medicine. These areas are: adaptation, aviation medicine, biological rhythms, biospherics, body fluids, cardiovascular and respiratory systems, cytology, developmental biology, endocrinology, enzymology, equipment and instrumentation, exobiology, gastrointestinal system, genetics, gravitational biology, group dynamics, habitability and environmental effects, hematology, human performance, immunology, life support systems, man-machine systems, mathematical modeling, metabolism, microbiology, musculoskeletal system, neurophysiology, nutrition, operational medicine, perception, personnel selection, psychology, radiobiology, reproductive biology, space biology and medicine, and space industrialization.
USSR Space Life Sciences Digest, issue 16
NASA Technical Reports Server (NTRS)
Hooke, Lydia Razran (Editor); Teeter, Ronald (Editor); Siegel, Bette (Editor); Donaldson, P. Lynn (Editor); Leveton, Lauren B. (Editor); Rowe, Joseph (Editor)
1988-01-01
This is the sixteenth issue of NASA's USSR Life Sciences Digest. It contains abstracts of 57 papers published in Russian language periodicals or presented at conferences and of 2 new Soviet monographs. Selected abstracts are illustrated with figures and tables from the original. An additional feature is the review of a book concerned with metabolic response to the stress of space flight. The abstracts included in this issue are relevant to 33 areas of space biology and medicine. These areas are: adaptation, biological rhythms, bionics, biospherics, body fluids, botany, cardiovascular and respiratory systems, developmental biology, endocrinology, enzymology, exobiology, gastrointestinal system, genetics, gravitational biology, habitability and environmental effects, hematology, human performance, immunology, life support systems, man-machine systems, mathematical modeling, metabolism, microbiology, musculoskeletal system, neurophysiology, nutrition, operational medicine, perception, personnel selection, psychology, radiobiology, reproductive biology, and space biology.
1993-01-01
…engineering has led to many AI systems that are now regularly used in industry and elsewhere. The ultimate test of machine learning, the subfield of AI that… Applications of machine learning suggested the time was ripe for a meeting on this topic. For this reason, Pat Langley (Siemens Corporate Research) and Yves Kodratoff (Université de Paris-Sud) organized an invited workshop on applications of machine learning. The goal of the gathering was to familiarize…
FFATA: Machine Augmented Composites for Structures with High Damping with High Stiffness
2012-12-05
…applied, the inner channel will be the same width. The best LHG machines have the Z… An Instron 5567 screw-controlled machine is suited to experiments up to 0.2 Hz, and a bit higher if operators are careful. These experiments applied…
Boolean Minimization and Algebraic Factorization Procedures for Fully Testable Sequential Machines
1989-09-01
Srinivas Devadas and Kurt Keutzer. In this… Research supported by the Defense Advanced Research Projects Agency under contract number N00014-87-K-0825. Author information: Devadas, Department of Electrical Engineering and Computer Science, Cambridge, MA 02139.
"What is relevant in a text document?": An interpretable machine learning approach
Arras, Leila; Horn, Franziska; Montavon, Grégoire; Müller, Klaus-Robert
2017-01-01
Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to automatically map documents to these abstract concepts, making it possible to annotate text collections far larger than a human could process in a lifetime. Besides predicting the text's category very accurately, it is also highly desirable to understand how and why the categorization process takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. Resulting scores indicate how much individual words contribute to the overall classification decision. This enables one to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores for generating novel vector-based document representations which capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability which makes it more comprehensible for humans and potentially more useful for other applications. PMID:28800619
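The paper's point that predictions decompose onto words can be seen in miniature on a linear model, where the layer-wise relevance of a word reduces to its input activation times its weight (a sketch with invented vocabulary and weights, not the paper's CNN setup):

```python
import numpy as np

# For a linear bag-of-words classifier, the class score f(x) = w.x + b
# decomposes additively over the words of the document, so each word's
# contribution (relevance) is simply x_i * w_i.
vocab = ["goal", "match", "election", "vote", "the"]
w = np.array([2.1, 1.7, -2.4, -1.9, 0.0])     # invented weights: sport vs politics
b = 0.1

doc = "the match ended with a late goal".split()
x = np.array([doc.count(t) for t in vocab], dtype=float)

f = w @ x + b
relevance = w * x                              # per-word contribution to f(x)

for term, r in sorted(zip(vocab, relevance), key=lambda p: -abs(p[1])):
    if r:
        print(f"{term:10s} {r:+.2f}")
print(f"score f(x) = {f:+.2f}")
```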
Mutual information, neural networks and the renormalization group
NASA Astrophysics Data System (ADS)
Koch-Janusz, Maciej; Ringel, Zohar
2018-06-01
Physical systems differing in their microscopic details often display strikingly similar behaviour when probed at macroscopic scales. Those universal properties, largely determining their physical characteristics, are revealed by the powerful renormalization group (RG) procedure, which systematically retains `slow' degrees of freedom and integrates out the rest. However, the important degrees of freedom may be difficult to identify. Here we demonstrate a machine-learning algorithm capable of identifying the relevant degrees of freedom and executing RG steps iteratively without any prior knowledge about the system. We introduce an artificial neural network based on a model-independent, information-theoretic characterization of a real-space RG procedure, which performs this task. We apply the algorithm to classical statistical physics problems in one and two dimensions. We demonstrate RG flow and extract the Ising critical exponent. Our results demonstrate that machine-learning techniques can extract abstract physical concepts and consequently become an integral part of theory- and model-building.
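For contrast with the learned coarse-graining, here is the classic hand-crafted real-space RG step on a 2D Ising configuration, a majority-rule block-spin transformation (illustrative only; the paper's contribution is precisely to learn this mapping instead of prescribing it):

```python
import numpy as np

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(64, 64))    # a random square Ising sample

def block_spin(config, b=2):
    """Coarse-grain a square configuration by b x b majority rule
    (ties broken toward +1)."""
    n = config.shape[0] // b
    blocks = config.reshape(n, b, n, b).sum(axis=(1, 3))
    return np.where(blocks >= 0, 1, -1)

coarse = block_spin(spins)
print(spins.shape, "->", coarse.shape)        # (64, 64) -> (32, 32)
print("magnetization:", spins.mean(), coarse.mean())
```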
JIGSAW: Preference-directed, co-operative scheduling
NASA Technical Reports Server (NTRS)
Linden, Theodore A.; Gaw, David
1992-01-01
Techniques that enable humans and machines to cooperate in the solution of complex scheduling problems have evolved out of work on the daily allocation and scheduling of Tactical Air Force resources. A generalized, formal model of these applied techniques is being developed. It is called JIGSAW by analogy with the multi-agent, constructive process used when solving jigsaw puzzles. JIGSAW begins from this analogy and extends it by propagating local preferences into global statistics that dynamically influence the value and variable ordering decisions. The statistical projections also apply to abstract resources and time periods--allowing more opportunities to find a successful variable ordering by reserving abstract resources and deferring the choice of a specific resource or time period.
Using container orchestration to improve service management at the RAL Tier-1
NASA Astrophysics Data System (ADS)
Lahiff, Andrew; Collier, Ian
2017-10-01
In recent years container orchestration has been emerging as a means of gaining many potential benefits compared to a traditional static infrastructure, such as increased utilisation through multi-tenancy, improved availability due to self-healing, and the ability to handle changing loads due to elasticity and auto-scaling. To this end we have been investigating migrating services at the RAL Tier-1 to an Apache Mesos cluster. In this model the concept of individual machines is abstracted away and services are run in containers on a cluster of machines, managed by schedulers, enabling a high degree of automation. Here we describe Mesos, the infrastructure deployed at RAL, and describe in detail the explicit example of running a batch farm on Mesos.
Design and fabrication of complete dentures using CAD/CAM technology
Han, Weili; Li, Yanfeng; Zhang, Yue; Lv, Yuan; Zhang, Ying; Hu, Ping; Liu, Huanyue; Ma, Zheng; Shen, Yi
2017-01-01
The aim of the study was to test the feasibility of using commercially available computer-aided design and computer-aided manufacturing (CAD/CAM) technology including the 3Shape Dental System 2013 trial version, WIELAND V2.0.049 and the WIELAND ZENOTEC T1 milling machine to design and fabricate complete dentures. The full-denture modeling process available in the trial version of 3Shape Dental System 2013 was used to design virtual complete dentures on the basis of 3-dimensional (3D) digital edentulous models generated from the physical models. The virtual complete dentures designed were exported to the CAM software WIELAND V2.0.049. A WIELAND ZENOTEC T1 milling machine controlled by the CAM software was used to fabricate physical dentitions and baseplates by milling acrylic resin composite plates. The physical dentitions were bonded to the corresponding baseplates to form the maxillary and mandibular complete dentures. Virtual complete dentures were successfully designed using the software through several steps including generation of 3D digital edentulous models, model analysis, arrangement of artificial teeth, trimming of the relief area, and occlusal adjustment. Physical dentitions and baseplates were successfully fabricated according to the designed virtual complete dentures using the milling machine controlled by the CAM software. Bonding the physical dentitions to the corresponding baseplates generated the final physical complete dentures. Our study demonstrated that complete dentures could be successfully designed and fabricated by using CAD/CAM. PMID:28072686
Rosenkrantz, Andrew B; Doshi, Ankur M; Ginocchio, Luke A; Aphinyanaphongs, Yindalon
2016-12-01
This study aimed to assess the performance of a text classification machine-learning model in predicting highly cited articles within the recent radiological literature and to identify the model's most influential article features. We downloaded from PubMed the title, abstract, and medical subject heading terms for 10,065 articles published in 25 general radiology journals in 2012 and 2013. Three machine-learning models were applied to predict the top 10% of included articles in terms of the number of citations to the article in 2014 (reflecting the 2-year time window in conventional impact factor calculations). The model having the highest area under the curve was selected to derive a list of article features (words) predicting high citation volume, which was iteratively reduced to identify the smallest possible core feature list maintaining predictive power. Overall themes were qualitatively assigned to the core features. The regularized logistic regression (Bayesian binary regression) model had highest performance, achieving an area under the curve of 0.814 in predicting articles in the top 10% of citation volume. We reduced the initial 14,083 features to 210 features that maintain predictivity. These features corresponded with topics relating to various imaging techniques (eg, diffusion-weighted magnetic resonance imaging, hyperpolarized magnetic resonance imaging, dual-energy computed tomography, computed tomography reconstruction algorithms, tomosynthesis, elastography, and computer-aided diagnosis), particular pathologies (prostate cancer; thyroid nodules; hepatic adenoma, hepatocellular carcinoma, non-alcoholic fatty liver disease), and other topics (radiation dose, electroporation, education, general oncology, gadolinium, statistics). Machine learning can be successfully applied to create specific feature-based models for predicting articles likely to achieve high influence within the radiological literature. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
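The skeleton of such a citation-prediction model is short; the sketch below substitutes ordinary L2-regularized logistic regression on TF-IDF features and invented stand-in data for the study's Bayesian binary regression over 10,065 articles:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in corpus: title+abstract text, binary label = top 10% by citations.
texts = ["dual energy computed tomography reconstruction ...",
         "radiography workflow survey ...",
         "diffusion weighted mri of prostate cancer ...",
         "case report of a rare finding ..."]
top_decile = [1, 0, 1, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(penalty="l2", max_iter=1000),  # regularized, in the study's spirit
)
model.fit(texts, top_decile)
print(model.predict_proba(["prostate cancer diffusion mri"])[:, 1])
```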
Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time‐to‐Event Analysis
Gong, Xiajing; Hu, Meng
2018-01-01
Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time‐to‐event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high‐dimensional data featured by a large number of predictor variables. Our results showed that ML‐based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high‐dimensional data. The prediction performances of ML‐based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML‐based methods provide a powerful tool for time‐to‐event analysis, with a built‐in capacity for high‐dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. PMID:29536640
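The concordance index used as the performance measure above is easy to compute directly; this sketch implements Harrell's c-index on invented data (for simplicity it treats only pairs whose earlier time is an observed event as comparable):

```python
import numpy as np

def concordance_index(time, event, risk):
    """Fraction of usable pairs where the higher-risk subject failed first."""
    concordant, usable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i]:    # i failed first: usable pair
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1               # higher risk failed first
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / usable

time = np.array([5.0, 8.0, 3.0, 10.0])
event = np.array([1, 1, 0, 0])          # 1 = event observed, 0 = censored
risk = np.array([0.9, 0.4, 0.7, 0.1])   # model-predicted risk scores
print(concordance_index(time, event, risk))      # 1.0 for this toy data
```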
2008-09-01
Benthic shells can contribute greatly to the scattering variability of the ocean bottom, particularly at low grazing angles. Among the effects of shell aggregates are increased scattering strength and potential subcritical-angle penetration of the seafloor. In Stanton and Chu (2004), forward scattering and backscattering from a sand dollar test, a bivalve shell, and a machined aluminum disk of similar size were measured over a… Sand dollars (Dendraster…
PUP: An Architecture to Exploit Parallel Unification in Prolog
1988-03-01
…environment stacking model similar to the Warren Abstract Machine [23], since it has been shown to be superior to other known models (see [21]). The storage… Unifications execute in groups of independent operations. Unifications belonging to different groups may not overlap. Also, unification operations belonging to the… since all parallel operations on the unification units must complete before any of the units can start executing the next group of parallel…
NASA Astrophysics Data System (ADS)
Mugan, Jonathan; Khalili, Aram E.
2014-05-01
Current computer systems are dumb automatons, and their blind execution of instructions makes them open to attack. Their inability to reason means that they don't consider the larger, constantly changing context outside their immediate inputs. Their nearsightedness is particularly dangerous because, in our complex systems, it is difficult to prevent all exploitable situations. Additionally, the lack of autonomous oversight of our systems means they are unable to fight through attacks. Keeping adversaries completely out of systems may be an unreasonable expectation, and our systems need to adapt to attacks and other disruptions to achieve their objectives. What is needed is an autonomous controller within the computer system that can sense the state of the system and reason about that state. In this paper, we present Self-Awareness Through Predictive Abstraction Modeling (SATPAM). SATPAM uses prediction to learn abstractions that allow it to recognize the right events at the right level of detail. These abstractions allow SATPAM to break the world into small, relatively independent, pieces that allow employment of existing reasoning methods. SATPAM goes beyond classification-based machine learning and statistical anomaly detection to be able to reason about the system, and SATPAM's knowledge representation and reasoning is more like that of a human. For example, humans intuitively know that the color of a car is not relevant to any mechanical problem, and SATPAM provides a plausible method whereby a machine can acquire such reasoning patterns. In this paper, we present the initial experimental results using SATPAM.
Is searching full text more effective than searching abstracts?
Lin, Jimmy
2009-01-01
Background With the growing availability of full-text articles online, scientists and other consumers of the life sciences literature now have the ability to go beyond searching bibliographic records (title, abstract, metadata) to directly access full-text content. Motivated by this emerging trend, I posed the following question: is searching full text more effective than searching abstracts? This question is answered by comparing text retrieval algorithms on MEDLINE® abstracts, full-text articles, and spans (paragraphs) within full-text articles using data from the TREC 2007 genomics track evaluation. Two retrieval models are examined: bm25 and the ranking algorithm implemented in the open-source Lucene search engine. Results Experiments show that treating an entire article as an indexing unit does not consistently yield higher effectiveness compared to abstract-only search. However, retrieval based on spans, or paragraphs-sized segments of full-text articles, consistently outperforms abstract-only search. Results suggest that highest overall effectiveness may be achieved by combining evidence from spans and full articles. Conclusion Users searching full text are more likely to find relevant articles than searching only abstracts. This finding affirms the value of full text collections for text retrieval and provides a starting point for future work in exploring algorithms that take advantage of rapidly-growing digital archives. Experimental results also highlight the need to develop distributed text retrieval algorithms, since full-text articles are significantly longer than abstracts and may require the computational resources of multiple machines in a cluster. The MapReduce programming model provides a convenient framework for organizing such computations. PMID:19192280
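The bm25 ranking function compared in the study is compact enough to state in full; the sketch below is a standard formulation (with the conventional k1 and b defaults and a Lucene-style idf floor), applied to invented stand-in documents:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.2, b=0.75):
    """Score each document against the query with the standard bm25 formula."""
    tokenized = [d.lower().split() for d in docs]
    N = len(docs)
    avgdl = sum(len(d) for d in tokenized) / N
    df = Counter(t for d in tokenized for t in set(d))   # document frequencies
    scores = []
    for d in tokenized:
        tf = Counter(d)
        s = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
            s += idf * tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = ["gene expression in tumor cells",          # stand-ins for abstracts/spans
        "full text of an article on gene gene interactions",
        "unrelated methods paper"]
print(bm25_scores("gene expression", docs))
```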
Feel, imagine and learn! - Haptic augmented simulation and embodied instruction in physics learning
NASA Astrophysics Data System (ADS)
Han, In Sook
The purpose of this study was to investigate the potential and effects of an embodied instructional model in abstract concept learning. This embodied instructional process included haptic augmented educational simulation as an instructional tool to provide perceptual experiences, as well as further instruction to activate those previous experiences with perceptual simulation. In order to verify the effectiveness of this instructional model, haptic augmented simulations with three different haptic levels (force and kinesthetic, kinesthetic, and non-haptic) and instructional materials (narrative and expository) were developed and their effectiveness tested. 220 fifth-grade students were recruited to participate in the study from three elementary schools located in lower-SES neighborhoods in the Bronx, New York. The study was conducted for three consecutive weeks in regular class periods. The data were analyzed using ANCOVA, ANOVA, and MANOVA. The results indicate that the haptic augmented simulations, both the force-and-kinesthetic and the kinesthetic simulations, were more effective than the non-haptic simulation in providing perceptual experiences and helping elementary students to create multimodal representations of machines' movements. However, in most cases, force feedback was needed to construct a fully loaded multimodal representation that could be activated when instruction with fewer sensory modalities was given. In addition, the force-and-kinesthetic simulation was effective in providing cognitive grounding for comprehending new learning content based on the multimodal representation created with enhanced force feedback. Regarding instruction type, it was found that the narrative and expository instructions did not make any difference in activating previous perceptual experiences. These findings suggest that it is important to help students build solid cognitive grounding with a perceptual anchor. Also, a sequential abstraction process would deepen students' understanding by providing an opportunity to practice their mental simulation, removing the sensory modalities used one by one, and gradually reaching an abstract level of understanding where students can imagine the machine's movements and working mechanisms with only abstract language, without any perceptual supports.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadayappan, Ponnuswamy
Exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. Systems software for exascale machines must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. We propose a new approach to the data and work distribution model provided by system software based on the unifying formalism of an abstract file system. The proposed hierarchical data model provides simple, familiar visibility and access to data structures through the file system hierarchy, while providing fault tolerance through selective redundancy. The hierarchical task model features work queues whose form and organization are represented as file system objects. Data and work are both first-class entities. By exposing the relationships between data and work to the runtime system, information is available to optimize execution time and provide fault tolerance. The data distribution scheme provides replication (where desirable and possible) for fault tolerance and efficiency, and it is hierarchical to make it possible to take advantage of locality. The user, tools, and applications, including legacy applications, can interface with the data, work queues, and one another through the abstract file model. This runtime environment will provide multiple interfaces to support traditional Message Passing Interface applications, languages developed under DARPA's High Productivity Computing Systems program, as well as other, experimental programming models. We will validate our runtime system with pilot codes on existing platforms and will use simulation to validate for exascale-class platforms. In this final report, we summarize research results from the work done at the Ohio State University towards the larger goals of the project listed above.
USSR Space Life Sciences Digest, issue 19
NASA Technical Reports Server (NTRS)
Hooke, Lydia Razran (Editor); Donaldson, P. Lynn (Editor); Teeter, Ronald (Editor); Garshnek, Victoria (Editor); Rowe, Joseph (Editor)
1988-01-01
This is the 19th issue of NASA's USSR Space Life Sciences Digest. It contains abstracts of 47 papers published in Russian language periodicals or presented at conferences and of 5 new Soviet monographs. Selected abstracts are illustrated with figures and tables from the original. Reports on two conferences, one on adaptation to high altitudes, and one on space and ecology are presented. A book review of a recent work on high altitude physiology is also included. The abstracts in this issue have been identified as relevant to 33 areas of space biology and medicine. These areas are: adaptation, biological rhythms, biospherics, body fluids, botany, cardiovascular and respiratory systems, cytology, developmental biology, endocrinology, enzymology, biology, group dynamics, habitability and environmental effects, hematology, human performance, immunology, life support systems, man-machine systems, mathematical modeling, metabolism, microbiology, musculoskeletal system, neurophysiology, nutrition, operational medicine, perception, personnel selection, psychology, radiobiology, and space biology and medicine.
1988-03-28
International Business Machines Corporation: IBM Development System for the Ada Language, Version 2.1.0; IBM 4381 under MVS/XA, host and target. Ada Joint Program Office (AJPO) compiler validation record; the recoverable portion of the declaration states that International Business Machines Corporation is the owner of record of the object code of the compiler listed in the declaration.
ERGONOMICS ABSTRACTS 48347-48982.
ERIC Educational Resources Information Center
Ministry of Technology, London (England). Warren Spring Lab.
In this collection of ergonomics abstracts and annotations the following areas of concern are represented: general references; methods, facilities, and equipment relating to ergonomics; systems of man and machines; visual, auditory, and other sensory inputs and processes (including speech and intelligibility); input channels; body measurements…
Comparison of Document Data Bases
ERIC Educational Resources Information Center
Schipma, Peter B.; And Others
This paper presents a detailed analysis of the content and format of seven machine-readable bibliographic data bases: Chemical Abstracts Service Condensates, Chemical and Biological Activities, and Polymer Science and Technology, Biosciences Information Service's BA Previews including Biological Abstracts and BioResearch Index, Institute for…
Modeling Medical Ethics through Intelligent Agents
NASA Astrophysics Data System (ADS)
Machado, José; Miranda, Miguel; Abelha, António; Neves, José; Neves, João
The amount of research using health information has increased dramatically over the last few years. Indeed, a significant number of healthcare institutions have extensive Electronic Health Records (EHR), collected over several years for clinical and teaching purposes, but are uncertain as to the proper circumstances in which to use them to improve the delivery of care to those in need. Research Ethics Boards in Portugal and elsewhere in the world are grappling with these issues, but lack clear guidance regarding their role in the creation of and access to EHRs. However, we believe there is an effective way to handle Medical Ethics if we look at the problem in a structured and more rational way. Indeed, we observed that physicians were not aware of the relevance of the subject in their pre-clinical years, but their interest increased once they were exposed to patients. On the other hand, once EHRs are stored in machines, we also had to find a way to ensure that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. Therefore, in this article we discuss the importance of machine ethics and the need for machines that represent ethical principles explicitly. It is also shown how a machine may abstract an ethical principle from a logical representation of ethical judgments and use that principle to guide its own behavior.
STATISTICAL EVALUATION OF CONFOCAL MICROSCOPY IMAGES
Abstract
In this study the CV is defined as the SD/Mean of the population of beads or pixels. Flow cytometry uses the CV of beads to determine if the machine is aligned correctly and performing properly. This CV concept to determine machine performance has been adapted to...
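As a point of reference, here is a minimal sketch of the coefficient-of-variation computation described above (standard deviation divided by mean). The bead values and function name are illustrative assumptions, not data from the study.

```python
import statistics

def coefficient_of_variation(values):
    # CV = standard deviation / mean, commonly reported as a percentage.
    return statistics.stdev(values) / statistics.mean(values)

bead_intensities = [102.0, 98.5, 101.2, 99.8, 100.5]  # hypothetical bead readings
print(f"CV = {100 * coefficient_of_variation(bead_intensities):.2f}%")
```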
1988-09-01
Subject areas: command and control; computational linguistics; expert systems; voice recognition; man-machine interface; U.S. Government. Abstract (recoverable fragments): the system simulates the characteristics of FRESH on a smaller scale. This study assisted NOSC in developing a voice-recognition, man-machine interface that could be used with TONE and upgraded at a later date.
Understanding of anesthesia machine function is enhanced with a transparent reality simulation.
Fischler, Ira S; Kaschub, Cynthia E; Lizdas, David E; Lampotang, Samsun
2008-01-01
Photorealistic simulations may provide efficient transfer of certain skills to the real system, but by being opaque may fail to encourage deeper learning of the structure and function of the system. Schematic simulations that are more abstract, with less visual fidelity but make system structure and function transparent, may enhance deeper learning and optimize retention and transfer of learning. We compared learning effectiveness of these 2 modes of externalizing the output of a common simulation engine (the Virtual Anesthesia Machine, VAM) that models machine function and dynamics and responds in real time to user interventions such as changes in gas flow or ventilation. Undergraduate students (n = 39) and medical students (n = 35) were given a single, 1-hour guided learning session with either a Transparent or an Opaque version of the VAM simulation. The following day, the learners' knowledge of machine components, function, and dynamics was tested. The Transparent-VAM groups scored higher than the Opaque-VAM groups on a set of multiple-choice questions concerning conceptual knowledge about anesthesia machines (P = 0.009), provided better and more complete explanations of component function (P = 0.003), and were more accurate in remembering and inferring cause-and-effect dynamics of the machine and relations among components (P = 0.003). Although the medical students outperformed undergraduates on all measures, a similar pattern of benefits for the Transparent VAM was observed for these 2 groups. Schematic simulations that transparently allow learners to visualize, and explore, underlying system dynamics and relations among components may provide a more effective mental model for certain systems. This may lead to a deeper understanding of how the system works, and therefore, we believe, how to detect and respond to potentially adverse situations.
Probabilistic and machine learning-based retrieval approaches for biomedical dataset retrieval
Karisani, Payam; Qin, Zhaohui S; Agichtein, Eugene
2018-01-01
Abstract The bioCADDIE dataset retrieval challenge brought together different approaches to retrieval of biomedical datasets relevant to a user’s query, expressed as a text description of a needed dataset. We describe experiments in applying a data-driven, machine learning-based approach to biomedical dataset retrieval as part of this challenge. We report on a series of experiments carried out to evaluate the performance of both probabilistic and machine learning-driven techniques from information retrieval, as applied to this challenge. Our experiments with probabilistic information retrieval methods, such as query term weight optimization, automatic query expansion and simulated user relevance feedback, demonstrate that automatically boosting the weights of important keywords in a verbose query is more effective than other methods. We also show that although there is a rich space of potential representations and features available in this domain, machine learning-based re-ranking models are not able to improve on probabilistic information retrieval techniques with the currently available training data. The models and algorithms presented in this paper can serve as a viable implementation of a search engine to provide access to biomedical datasets. The retrieval performance is expected to be further improved by using additional training data that is created by expert annotation, or gathered through usage logs, clicks and other processes during natural operation of the system. Database URL: https://github.com/emory-irlab/biocaddie PMID:29688379
A review of supervised machine learning applied to ageing research.
Fabris, Fabio; Magalhães, João Pedro de; Freitas, Alex A
2017-04-01
Broadly speaking, supervised machine learning is the computational task of learning correlations between variables in annotated data (the training set), and using this information to create a predictive model capable of inferring annotations for new data, whose annotations are not known. Ageing is a complex process that affects nearly all animal species. This process can be studied at several levels of abstraction, in different organisms and with different objectives in mind. Not surprisingly, the diversity of the supervised machine learning algorithms applied to answer biological questions reflects the complexities of the underlying ageing processes being studied. Many works using supervised machine learning to study the ageing process have been published recently, so it is timely to review these works and discuss their main findings and weaknesses. In summary, the main findings of the reviewed papers are: the link between specific types of DNA repair and ageing; ageing-related proteins tend to be highly connected and seem to play a central role in molecular pathways; and ageing/longevity is linked with autophagy and apoptosis, nutrient receptor genes, and copper and iron ion transport. Additionally, several biomarkers of ageing were found by machine learning. Despite some interesting machine learning results, we also identified a weakness of current works on this topic: only one of the reviewed papers has corroborated the computational results of machine learning algorithms through wet-lab experiments. In conclusion, supervised machine learning has contributed to advancing our knowledge and has provided novel insights on ageing, yet future work should place a greater emphasis on validating the predictions.
1991-05-01
Abstract (recoverable fragments): Large deformation uniaxial compression and fixed-end torsion (simple shear) experiments were conducted on annealed OFHC copper to obtain its... The material was received as bar stock in the work-hardened condition; before machining, the copper rods were annealed at 400 °C in argon for one hour, an annealing treatment that produced an average grain diameter of 45 μm. All the compression tests were conducted with...
Modeling Large-Scale Networks Using Virtual Machines and Physical Appliances
2014-01-27
Abstract (recoverable fragments): The lab solution could not be based on ActiveX, because the military disallowed ActiveX support on its systems, which made running an RDP client over ActiveX not possible; exercises instead had to be downloaded and run locally. The challenges the SEI encountered in delivering the instruction were...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Draeger, E. W.
The Advanced Architecture and Portability Specialists team (AAPS) worked with a select set of LLNL application teams to develop and/or implement a portability strategy for next-generation architectures. The team also investigated new and updated programming models and helped develop programming abstractions targeting maintainability and performance portability. Significant progress was made on both fronts in FY17, resulting in multiple applications being significantly better prepared for the next-generation machines than before.
Runtime Verification of C Programs
NASA Technical Reports Server (NTRS)
Havelund, Klaus
2008-01-01
We present in this paper a framework, RMOR, for monitoring the execution of C programs against state machines, expressed in a textual (non-graphical) format in files separate from the program. The state machine language has been inspired by RCAT, a graphical state machine language recently developed at the Jet Propulsion Laboratory as an alternative to using Linear Temporal Logic (LTL) for requirements capture. Transitions between states are labeled with abstract event names and Boolean expressions over such events. The abstract events are connected to code fragments using an aspect-oriented pointcut language similar to ASPECTJ's or ASPECTC's pointcut language. The system is implemented in the C analysis and transformation package CIL, and is programmed in OCAML, the implementation language of CIL. The work is closely related to the notion of stateful aspects within aspect-oriented programming, where pointcut languages are extended with temporal assertions over the execution trace.
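To make the monitoring idea concrete, here is a minimal sketch of checking an execution trace against a state machine. It is a toy Python analogue, not RMOR itself (which instruments C programs through CIL and binds abstract events via pointcuts); the states, events, and transition table are invented for illustration.

```python
# Transition table: (state, abstract event) -> next state.
transitions = {
    ("idle", "open"): "active",
    ("active", "write"): "active",
    ("active", "close"): "idle",
}
ERROR = "error"

def monitor(trace, state="idle"):
    """Advance the machine on each abstract event; unexpected events are violations."""
    for event in trace:
        state = transitions.get((state, event), ERROR)
        if state == ERROR:
            return f"violation at event '{event}'"
    return f"ok, final state '{state}'"

print(monitor(["open", "write", "close"]))  # ok
print(monitor(["open", "close", "write"]))  # violation: write after close
```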
1988-03-28
International Business Machines Corporation: IBM Development System for the Ada Language, Version 2.1.0; IBM 4381 under VM/HPO, host; IBM 4381 under MVS/XA, target. Ada Joint Program Office (AJPO) compiler validation record; the recoverable portion of the declaration cites conformance to ANSI/MIL-STD-1815A and states that International Business Machines Corporation is the owner of record of the object code of the compiler listed in the declaration.
Automated Design Space Exploration with Aspen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spafford, Kyle L.; Vetter, Jeffrey S.
2015-01-01
Architects and applications scientists often use performance models to explore a multidimensional design space of architectural characteristics, algorithm designs, and application parameters. With traditional performance modeling tools, these explorations forced users to first develop a performance model and then repeatedly evaluate and analyze the model manually. These manual investigations proved laborious and error prone. More importantly, the complexity of this traditional process often forced users to simplify their investigations. To address this challenge of design space exploration, we extend our Aspen (Abstract Scalable Performance Engineering Notation) language with three new language constructs: user-defined resources, parameter ranges, and a collection of costs in the abstract machine model. Then, we use these constructs to enable automated design space exploration via a nonlinear optimization solver. We show how four interesting classes of design space exploration scenarios can be derived from Aspen models and formulated as pure nonlinear programs. The analysis tools are demonstrated using examples based on Aspen models for a three-dimensional Fast Fourier Transform, the CoMD molecular dynamics proxy application, and the DARPA Streaming Sensor Challenge Problem. Our results show that this approach can compose and solve arbitrary performance modeling questions quickly and rigorously when compared to the traditional manual approach.
Deep learning of mutation-gene-drug relations from the literature.
Lee, Kyubum; Kim, Byounggun; Choi, Yonghwa; Kim, Sunkyu; Shin, Wonho; Lee, Sunwon; Park, Sungjoon; Kim, Seongsoon; Tan, Aik Choon; Kang, Jaewoo
2018-01-25
Molecular biomarkers that can predict drug efficacy in cancer patients are crucial components for the advancement of precision medicine. However, identifying these molecular biomarkers remains a laborious and challenging task. Next-generation sequencing of patients and preclinical models has increasingly led to the identification of novel gene-mutation-drug relations, and these results have been reported and published in the scientific literature. Here, we present two new computational methods that utilize all the PubMed articles as domain-specific background knowledge to assist in the extraction and curation of gene-mutation-drug relations from the literature. The first method uses the Biomedical Entity Search Tool (BEST) scoring results as some of the features to train the machine learning classifiers. The second method uses not only the BEST scoring results, but also word vectors in a deep convolutional neural network model that are constructed from and trained on numerous documents such as PubMed abstracts and Google News articles. Using the features obtained from both the BEST search engine scores and word vectors, we extract mutation-gene and mutation-drug relations from the literature using machine learning classifiers such as random forests and deep convolutional neural networks. Our methods achieved better results compared with the state-of-the-art methods. We used our proposed features in a simple machine learning model, and obtained F1-scores of 0.96 and 0.82 for mutation-gene and mutation-drug relation classification, respectively. We also developed a deep learning classification model using convolutional neural networks, BEST scores, and word embeddings that are pre-trained on PubMed or Google News data. Using deep learning, the classification accuracy improved, and F1-scores of 0.96 and 0.86 were obtained for the mutation-gene and mutation-drug relations, respectively. We believe that the computational methods described in this research could be used as an important tool in identifying molecular biomarkers that predict drug responses in cancer patients. We also built a database of the mutation-gene-drug relations extracted from all the PubMed abstracts. We believe that our database can prove to be a valuable resource for precision medicine researchers.
Block-Parallel Data Analysis with DIY2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
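A rough Python analogue of the block-parallel pattern described above, assuming a shared-memory stand-in (DIY2 itself is a C++ library for distributed memory, and its API is not reproduced here): data are decomposed into blocks, blocks are mapped onto a small pool of processing elements, and a global result is reduced from per-block partials.

```python
from concurrent.futures import ThreadPoolExecutor

# Decompose the data into blocks (round-robin decomposition for illustration).
data = list(range(100))
n_blocks = 4
blocks = [data[i::n_blocks] for i in range(n_blocks)]

def process_block(block):
    # Per-block local computation; in DIY2 this would be an iteration callback.
    return sum(x * x for x in block)

# Assign blocks to processing elements; fewer workers than blocks is fine,
# which is what lets a runtime overlap in-core and out-of-core execution.
with ThreadPoolExecutor(max_workers=2) as pool:
    partials = list(pool.map(process_block, blocks))

print(sum(partials))  # global reduction across blocks
```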
An Immanent Machine: Reconsidering Grades, Historical and Present
ERIC Educational Resources Information Center
Tocci, Charles
2010-01-01
At some point the mechanics of schooling begin running of their own accord. Such has become the case with grades (A's, B's, C's, etc.). This article reconsiders the history of grades through the concepts of immanence and abstract machines from the oeuvre of Deleuze and Guattari. In the first section, the history of grades as presently written…
2013-01-01
Background Most of the institutional and research information in the biomedical domain is available in the form of English text. Even in countries where English is an official language, such as the United States, language can be a barrier for accessing biomedical information for non-native speakers. Recent progress in machine translation suggests that this technique could help make English texts accessible to speakers of other languages. However, the lack of adequate specialized corpora needed to train statistical models currently limits the quality of automatic translations in the biomedical domain. Results We show how a large-sized parallel corpus can automatically be obtained for the biomedical domain, using the MEDLINE database. The corpus generated in this work comprises article titles obtained from MEDLINE and abstract text automatically retrieved from journal websites, which substantially extends the corpora used in previous work. After assessing the quality of the corpus for two language pairs (English/French and English/Spanish) we use the Moses package to train a statistical machine translation model that outperforms previous models for automatic translation of biomedical text. Conclusions We have built translation data sets in the biomedical domain that can easily be extended to other languages available in MEDLINE. These sets can successfully be applied to train statistical machine translation models. While further progress should be made by incorporating out-of-domain corpora and domain-specific lexicons, we believe that this work improves the automatic translation of biomedical texts. PMID:23631733
Programming the Navier-Stokes computer: An abstract machine model and a visual editor
NASA Technical Reports Server (NTRS)
Middleton, David; Crockett, Tom; Tomboulian, Sherry
1988-01-01
The Navier-Stokes computer is a parallel computer designed to solve Computational Fluid Dynamics problems. Each processor contains several floating point units which can be configured under program control to implement a vector pipeline with several inputs and outputs. Since the development of an effective compiler for this computer appears to be very difficult, machine level programming seems necessary and support tools for this process have been studied. These support tools are organized into a graphical program editor. A programming process is described by which appropriate computations may be efficiently implemented on the Navier-Stokes computer. The graphical editor would support this programming process, verifying various programmer choices for correctness and deducing values such as pipeline delays and network configurations. Step by step details are provided and demonstrated with two example programs.
Inventory of U.S. Health Care Data Bases, 1976-1987.
ERIC Educational Resources Information Center
Kralovec, Peter D.; Andes, Steven M.
This inventory contains summary abstracts of 305 current (1976-1987) non-bibliographic machine-readable databases and national health care data that have been created by public and private organizations throughout the United States. Each of the abstracts contains pertinent information on the sponsor or database, a description of the purpose and…
Kaplan, Jonas T.; Man, Kingson; Greening, Steven G.
2015-01-01
Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application. PMID:25859202
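A minimal sketch of the MVCC logic (train a classifier in one cognitive context, test it in another) using scikit-learn on synthetic data; the contexts, voxel counts, and context shift are invented for illustration, not taken from any of the reviewed studies.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

# Two contexts share the same class-discriminative axis, so a classifier
# trained in context A should transfer to context B if the neural
# representation abstracts across contexts.
w = rng.normal(size=n_voxels)

def simulate_context(shift):
    X = rng.normal(size=(n_trials, n_voxels)) + shift
    y = (X @ w > np.median(X @ w)).astype(int)
    return X, y

X_a, y_a = simulate_context(0.0)   # e.g. perception trials
X_b, y_b = simulate_context(0.5)   # e.g. imagery trials

clf = LinearSVC().fit(X_a, y_a)    # train in one cognitive context...
print("cross-classification accuracy:", clf.score(X_b, y_b))  # ...test in the other
```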
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Junjian; Wang, Jianhui; Liu, Hui
Abstract: In this paper, nonlinear model reduction for power systems is performed by the balancing of empirical controllability and observability covariances that are calculated around the operating region. Unlike existing model reduction methods, the external system does not need to be linearized but is directly dealt with as a nonlinear system. A transformation is found to balance the controllability and observability covariances in order to determine which states have the greatest contribution to the input-output behavior. The original system model is then reduced by Galerkin projection based on this transformation. The proposed method is tested and validated on a system comprised of a 16-machine 68-bus system and an IEEE 50-machine 145-bus system. The results show that by using the proposed model reduction the calculation efficiency can be greatly improved; at the same time, the obtained state trajectories are close to those for directly simulating the whole system or partitioning the system while not performing reduction. Compared with the balanced truncation method based on a linearized model, the proposed nonlinear model reduction method can guarantee higher accuracy and similar calculation efficiency. It is shown that the proposed method is not sensitive to the choice of the matrices for calculating the empirical covariances.
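For intuition, here is a small sketch of an empirical controllability covariance computed from simulated responses to input perturbations, using a toy stable linear system in place of the multi-machine power system; the matrices, discretization, and perturbation sizes are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Toy stable linear system dx/dt = A x + B u, standing in for the nonlinear model.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
dt, steps = 0.01, 2000

def trajectory(u0):
    """Forward-Euler state response to an impulse of size u0 applied at t = 0."""
    x = (B * u0).flatten()
    xs = []
    for _ in range(steps):
        xs.append(x.copy())
        x = x + dt * (A @ x)
    return np.array(xs)

# Empirical controllability covariance: accumulate x(t) x(t)^T along responses
# to perturbed inputs, averaged over the perturbation directions.
Wc = np.zeros((2, 2))
for u0 in (+1.0, -1.0):
    X = trajectory(u0)
    Wc += dt * X.T @ X / 2

print("controllability covariance eigenvalues:", np.linalg.eigvalsh(Wc))
```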
Palm: Easing the Burden of Analytical Performance Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are 'first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.
Semantics-Based Composition of Integrated Cardiomyocyte Models Motivated by Real-World Use Cases
Neal, Maxwell L.; Carlson, Brian E.; Thompson, Christopher T.; James, Ryan C.; Kim, Karam G.; Tran, Kenneth; Crampin, Edmund J.; Cook, Daniel L.; Gennari, John H.
2015-01-01
Semantics-based model composition is an approach for generating complex biosimulation models from existing components that relies on capturing the biological meaning of model elements in a machine-readable fashion. This approach allows the user to work at the biological rather than computational level of abstraction and helps minimize the amount of manual effort required for model composition. To support this compositional approach, we have developed the SemGen software, and here report on SemGen’s semantics-based merging capabilities using real-world modeling use cases. We successfully reproduced a large, manually-encoded, multi-model merge: the “Pandit-Hinch-Niederer” (PHN) cardiomyocyte excitation-contraction model, previously developed using CellML. We describe our approach for annotating the three component models used in the PHN composition and for merging them at the biological level of abstraction within SemGen. We demonstrate that we were able to reproduce the original PHN model results in a semi-automated, semantics-based fashion and also rapidly generate a second, novel cardiomyocyte model composed using an alternative, independently-developed tension generation component. We discuss the time-saving features of our compositional approach in the context of these merging exercises, the limitations we encountered, and potential solutions for enhancing the approach. PMID:26716837
Du, Tianchuan; Liao, Li; Wu, Cathy H; Sun, Bilin
2016-11-01
Protein-protein interactions play essential roles in many biological processes. Acquiring knowledge of the residue-residue contact information of two interacting proteins is not only helpful in annotating functions for proteins, but also critical for structure-based drug design. Predicting the protein residue-residue contact matrix of the interfacial regions is challenging. In this work, we introduced deep learning techniques (specifically, stacked autoencoders) to build deep neural network models to tackle the residue-residue contact prediction problem. In tandem with interaction profile Hidden Markov Models, which were used first to extract Fisher score features from protein sequences, stacked autoencoders were deployed to extract and learn hidden abstract features. The deep learning model showed significant improvement over the traditional machine learning model, Support Vector Machines (SVM), with overall accuracy increasing by about 15 percentage points, from 65.40% to 80.82%. We showed that the stacked autoencoders could extract, out of the Fisher score features, novel features that can be utilized by deep neural networks and other classifiers to enhance learning. It is further shown that deep neural networks have significant advantages over SVM in making use of the newly extracted features. Copyright © 2016. Published by Elsevier Inc.
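A minimal sketch of the idea above: learn compressed features with an autoencoder, then classify them with an SVM. It assumes TensorFlow/Keras and scikit-learn are available; the data are random stand-ins for Fisher-score features, a single hidden layer replaces the paper's stacked autoencoders, and the accuracy reported is on the training set, so everything here is illustrative only.

```python
import numpy as np
from tensorflow import keras
from sklearn.svm import SVC

X = np.random.default_rng(0).normal(size=(500, 100))  # stand-in Fisher scores
y = (X[:, 0] + X[:, 1] > 0).astype(int)               # synthetic labels

# Autoencoder: reconstruct the input through a narrow hidden layer.
inp = keras.Input(shape=(100,))
hidden = keras.layers.Dense(32, activation="relu")(inp)   # learned abstract features
out = keras.layers.Dense(100, activation="linear")(hidden)
autoencoder = keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=10, verbose=0)

# Feed the hidden-layer activations to a downstream SVM classifier.
encoder = keras.Model(inp, hidden)
features = encoder.predict(X, verbose=0)
print("SVM accuracy on learned features:", SVC().fit(features, y).score(features, y))
```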
Proceedings of the Second NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Munoz, Cesar (Editor)
2010-01-01
This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.
Thermodynamic work from operational principles
NASA Astrophysics Data System (ADS)
Gallego, R.; Eisert, J.; Wilming, H.
2016-10-01
In recent years we have witnessed a concentrated effort to make sense of thermodynamics for small-scale systems. One of the main difficulties is to capture a suitable notion of work that realistically models the purpose of quantum machines, in a way analogous to the role played, for macroscopic machines, by the energy stored in the idealisation of a lifted weight. Despite several attempts to resolve this issue by putting forward specific models, these are far from realistically capturing the transitions that a quantum machine is expected to perform. In this work, we adopt a novel strategy by considering arbitrary kinds of systems that one can attach to a quantum thermal machine, and by defining work quantifiers. These are functions that measure the value of a transition and generalise the concept of work beyond those models familiar from phenomenological thermodynamics. We do so by imposing simple operational axioms that any reasonable work quantifier must fulfil and by deriving from them stringent mathematical conditions with a clear physical interpretation. Our approach allows us to derive much of the structure of the theory of thermodynamics without taking the definition of work as a primitive. We can derive, for any work quantifier, a quantitative second law in the sense of bounding the work that can be performed using some non-equilibrium resource by the work that is needed to create it. We also discuss in detail the role of reversibility and correlations in connection with the second law. Furthermore, we recover the usual identification of work with energy in degrees of freedom with vanishing entropy as a particular case of our formalism. Our mathematical results can be formulated abstractly and are general enough to carry over to other resource theories than quantum thermodynamics.
Advanced light source: Compendium of user abstracts and technical reports,1993-1996
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
1997-04-01
This compendium contains abstracts written by users summarizing research completed or in progress from 1993-1996, ALS technical reports describing ongoing efforts related to improvement in machine operations and research and development projects, and information on ALS beamlines planned through 1998. Two tables of contents organize the user abstracts by beamline and by area of research, and an author index makes abstracts accessible by author and by principal investigator. Technical details for each beamline, including whom to contact for additional information, can be found in the beamline information section. Separate abstracts have been indexed into the database for contributions to this compendium.
NASA Technical Reports Server (NTRS)
Hudlicka, Eva; Corker, Kevin
1988-01-01
In this paper, a problem-solving system which uses a multilevel causal model of its domain is described. The system functions in the role of a pilot's assistant in the domain of commercial air transport emergencies. The model represents causal relationships among the aircraft subsystems, the effectors (engines, control surfaces), the forces that act on an aircraft in flight (thrust, lift), and the aircraft's flight profile (speed, altitude, etc.). The causal relationships are represented at three levels of abstraction: Boolean, qualitative, and quantitative, and reasoning about causes and effects can take place at each of these levels. Since processing at each level has different characteristics with respect to speed, the type of data required, and the specificity of the results, the problem-solving system can adapt to a wide variety of situations. The system is currently being implemented in the KEE(TM) development environment on a Symbolics Lisp machine.
NASA Technical Reports Server (NTRS)
Tick, Evan
1987-01-01
This note describes an efficient software emulator for the Warren Abstract Machine (WAM) Prolog architecture. The version of the WAM implemented is called Lcode. The Lcode emulator, written in C, executes the 'naive reverse' benchmark at 3900 LIPS. The emulator is one of a set of tools used to measure the memory-referencing characteristics and performance of Prolog programs. These tools include a compiler, assembler, and memory simulators. An overview of the Lcode architecture is given here, followed by a description and listing of the emulator code implementing each Lcode instruction. This note will be of special interest to those studying the WAM and its performance characteristics. In general, this note will be of interest to those creating efficient software emulators for abstract machine architectures.
Design and analysis of an unconventional permanent magnet linear machine for energy harvesting
NASA Astrophysics Data System (ADS)
Zeng, Peng
This Ph.D. dissertation proposes an unconventional high-power-density linear electromagnetic kinetic energy harvester, and high-performance two-stage interface power electronics to maintain maximum power abstraction from the energy source and charge the Li-ion battery load with constant current. The proposed machine architecture is composed of a double-sided flat-type silicon steel stator with winding slots, a permanent magnet mover, coil windings, a linear motion guide and an adjustable spring bearing. The unconventional aspect of the design is that the NdFeB magnet bars in the mover are placed with their magnetic fields in the horizontal direction instead of the vertical direction, with like magnetic poles facing each other. The derived magnetic equivalent circuit model proves that the average air-gap flux density of the novel topology is as high as 0.73 T, a 17.7% improvement over that of the conventional topology at the given geometric dimensions of the proof-of-concept machine. Consequently, improved output voltage and power are achieved. The dynamic model of the linear generator is also developed, and analytical equations for maximum output power are derived for driving vibrations with amplitude equal to, smaller than, and larger than the relative displacement between the mover and the stator of the machine. Furthermore, a finite element analysis (FEA) model has been simulated to confirm the derived analytical results and the improved power generation capability. An optimization framework is also explored to extend the approach to multi-degree-of-freedom (n-DOF) vibration-based linear energy harvesting devices. Moreover, a boost-buck cascaded switch-mode converter with a current controller is designed to extract the maximum power from the harvester and charge the Li-ion battery with trickle current. Meanwhile, a maximum power point tracking (MPPT) algorithm is proposed and optimized for low-frequency driving vibrations. Finally, a proof-of-concept unconventional permanent magnet (PM) linear generator is prototyped and tested to verify the simulation results of the FEA model. For coil windings of 33, 66 and 165 turns, the machine produces measured output power of 65.6 mW, 189.1 mW and 497.7 mW respectively, with a maximum power density of 2.486 mW/cm3.
NASA Astrophysics Data System (ADS)
Rückwardt, M.; Göpfert, A.; Correns, M.; Schellhorn, M.; Linß, G.
2010-07-01
Coordinate measuring machines are high-precision all-rounders for three-dimensional measuring. Accordingly, the range of parameters and the expandability with additional hardware are very comprehensive. Consequently, much expert knowledge is required of the user, and usually a great deal of prior information about the measuring object. In this paper a coordinate measuring machine and a specialized measuring machine are compared using the example of measuring eyeglass frames. For this three-dimensional measuring challenge the main focus is divided into metrological and economic aspects. First, a fully automated method for tactile measurement of this abstract form is presented. Second, the metrological characteristics of a coordinate measuring machine and a tracer for eyeglass frames are compared. The result favours the coordinate measuring machine, which is not surprising in these respects. Finally, the machines are compared with regard to the economic aspects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, L.; Notkin, D.; Adams, L.
1990-03-31
This task relates to research on programming massively parallel computers. Previous work on the Ensamble concept of programming was extended, and an investigation into nonshared memory models of parallel computation was undertaken. Previous work on the Ensamble concept defined a set of programming abstractions and was used to organize the programming task into three distinct levels: composition of machine instructions, composition of processes, and composition of phases. It was applied to shared memory models of computation. During the present research period, these concepts were extended to nonshared memory models. In addition, one Ph.D. thesis was completed, and one book chapter and six conference proceedings were published.
Fuzzy Petri nets to model vision system decisions within a flexible manufacturing system
NASA Astrophysics Data System (ADS)
Hanna, Moheb M.; Buck, A. A.; Smith, R.
1994-10-01
The paper presents a Petri net approach to modelling, monitoring and control of the behavior of an FMS cell. The FMS cell described comprises a pick-and-place robot, a vision system, a CNC milling machine and 3 conveyors. The work illustrates how block diagrams in a hierarchical structure can be used to describe events at different levels of abstraction. It focuses on Fuzzy Petri nets (fuzzy logic with Petri nets), including an artificial neural network (Fuzzy Neural Petri nets), to model and control vision system decisions and robot sequences within an FMS cell. This methodology can be used as a graphical modelling tool to monitor and control imprecise, vague and uncertain situations, and to determine the quality of the output product of an FMS cell.
Leder, Helmut
2017-01-01
Visual complexity is relevant for many areas ranging from improving usability of technical displays or websites up to understanding aesthetic experiences. Therefore, many attempts have been made to relate objective properties of images to perceived complexity in artworks and other images. It has been argued that visual complexity is a multidimensional construct mainly consisting of two dimensions: A quantitative dimension that increases complexity through number of elements, and a structural dimension representing order negatively related to complexity. The objective of this work is to study human perception of visual complexity utilizing two large independent sets of abstract patterns. A wide range of computational measures of complexity was calculated, further combined using linear models as well as machine learning (random forests), and compared with data from human evaluations. Our results confirm the adequacy of existing two-factor models of perceived visual complexity consisting of a quantitative and a structural factor (in our case mirror symmetry) for both of our stimulus sets. In addition, a non-linear transformation of mirror symmetry giving more influence to small deviations from symmetry greatly increased explained variance. Thus, we again demonstrate the multidimensional nature of human complexity perception and present comprehensive quantitative models of the visual complexity of abstract patterns, which might be useful for future experiments and applications. PMID:29099832
A Unified Approach to the Synthesis of Fully Testable Sequential Machines
1989-10-01
Srinivas Devadas and Kurt Keutzer. Abstract (recoverable fragments): In this paper we attempt to... This research was supported in part by the Defense Advanced Research Projects Agency under contract N00014-87-K-0825. Author information: Devadas, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology.
An Analysis of Hardware-Assisted Virtual Machine Based Rootkits
2014-06-01
Abstract (recoverable fragments): The use of virtual machine (VM) technology has expanded rapidly since AMD and Intel implemented hardware-assisted virtualization... HyperWall is an architecture proposed by Szefer and Lee to protect guest VMs from... the study examines the Intel VT-x implementations of Blue Pill to identify commonalities in the respective versions' attack methodologies from both a functional and technical perspective, along with certain aspects of TPM implementation, to name a few.
Dynamic partial reconfiguration of logic controllers implemented in FPGAs
NASA Astrophysics Data System (ADS)
Bazydło, Grzegorz; Wiśniewski, Remigiusz
2016-09-01
Technological progress in recent years has produced digital circuits containing millions of logic gates with the capability for reprogramming and reconfiguration. On the one hand this provides unprecedented computational power, but on the other hand the modelled systems are becoming increasingly complex, hierarchical and concurrent. Therefore, abstract modelling supported by Computer Aided Design tools becomes a very important task. Even the higher consumption of basic electronic components seems acceptable, because chip manufacturing costs tend to fall over time. The paper presents a modelling approach for logic controllers using the Unified Modelling Language (UML). Following the Model Driven Development approach, starting with a UML state machine model and proceeding through the construction of an intermediate Hierarchical Concurrent Finite State Machine model, a collection of Verilog files is created. The system description generated in a hardware description language can be synthesized and implemented in reconfigurable devices, such as FPGAs. Modular specification of the prototyped controller permits further dynamic partial reconfiguration of the prototyped system. The idea is based on exchanging the functionality of the already implemented controller without stopping the FPGA device. This means that a part (for example a single module) of the logic controller is replaced by another version (called a context), while the rest of the system keeps running. The method is illustrated by a practical example: an exemplary Home Area Network system.
Automated annotation of functional imaging experiments via multi-label classification
Turner, Matthew D.; Chakrabarti, Chayan; Jones, Thomas B.; Xu, Jiawei F.; Fox, Peter T.; Luger, George F.; Laird, Angela R.; Turner, Jessica A.
2013-01-01
Identifying the experimental methods in human neuroimaging papers is important for grouping meaningfully similar experiments for meta-analyses. Currently, this can only be done by human readers. We present the performance of common machine learning (text mining) methods applied to the problem of automatically classifying or labeling this literature. Labeling terms are from the Cognitive Paradigm Ontology (CogPO), the text corpora are abstracts of published functional neuroimaging papers, and the methods use the performance of a human expert as training data. We aim to replicate the expert's annotation of multiple labels per abstract identifying the experimental stimuli, cognitive paradigms, response types, and other relevant dimensions of the experiments. We use several standard machine learning methods: naive Bayes (NB), k-nearest neighbor, and support vector machines (specifically SMO or sequential minimal optimization). Exact match performance ranged from only 15% in the worst cases to 78% in the best cases. NB methods combined with binary relevance transformations performed strongly and were robust to overfitting. This collection of results demonstrates what can be achieved with off-the-shelf software components and little to no pre-processing of raw text. PMID:24409112
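A condensed sketch of the binary-relevance approach with naive Bayes described above, using scikit-learn on a toy stand-in corpus; the documents and label vocabulary are invented for illustration and are not CogPO terms or the study's corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.preprocessing import MultiLabelBinarizer

# Toy corpus standing in for paper abstracts, with multiple labels per abstract.
abstracts = [
    "subjects viewed flashing checkerboards during fmri",
    "participants pressed a button in response to auditory tones",
    "a visual oddball paradigm with button press responses",
    "passive listening to spoken words",
]
labels = [{"visual"}, {"auditory", "motor"}, {"visual", "motor"}, {"auditory"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)            # multi-label -> binary indicator matrix
X = TfidfVectorizer().fit_transform(abstracts)

# Binary relevance: one independent naive Bayes classifier per label.
clf = OneVsRestClassifier(MultinomialNB()).fit(X, Y)
print(mlb.inverse_transform(clf.predict(X)))
```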
Memarian, Negar; Torre, Jared B.; Haltom, Kate E.; Stanton, Annette L.
2017-01-01
Abstract Affect labeling (putting feelings into words) is a form of incidental emotion regulation that could underpin some benefits of expressive writing (i.e. writing about negative experiences). Here, we show that neural responses during affect labeling predicted changes in psychological and physical well-being outcome measures 3 months later. Furthermore, neural activity of specific frontal regions and amygdala predicted those outcomes as a function of expressive writing. Using supervised learning (support vector machines regression), improvements in four measures of psychological and physical health (physical symptoms, depression, anxiety and life satisfaction) after an expressive writing intervention were predicted with an average of 0.85% prediction error [root mean square error (RMSE) %]. The predictions were significantly more accurate with machine learning than with the conventional generalized linear model method (average RMSE: 1.3%). Consistent with affect labeling research, right ventrolateral prefrontal cortex (RVLPFC) and amygdalae were top predictors of improvement in the four outcomes. Moreover, RVLPFC and left amygdala predicted benefits due to expressive writing in satisfaction with life and depression outcome measures, respectively. This study demonstrates the substantial merit of supervised machine learning for real-world outcome prediction in social and affective neuroscience. PMID:28992270
NASA Technical Reports Server (NTRS)
Caines, P. E.
1999-01-01
The work in this research project has been focused on the construction of a hierarchical hybrid control theory which is applicable to flight management systems. The motivation and underlying philosophical position for this work has been that the scale, inherent complexity and the large number of agents (aircraft) involved in an air traffic system imply that a hierarchical modelling and control methodology is required for its management and real time control. In the current work the complex discrete or continuous state space of a system with a small number of agents is aggregated in such a way that discrete (finite state machine or supervisory automaton) controlled dynamics are abstracted from the system's behaviour. High level control may then be either directly applied at this abstracted level, or, if this is in itself of significant complexity, further layers of abstractions may be created to produce a system with an acceptable degree of complexity at each level. By the nature of this construction, high level commands are necessarily realizable at lower levels in the system.
Software architecture for time-constrained machine vision applications
NASA Astrophysics Data System (ADS)
Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.
2013-01-01
Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented toward particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse, and inefficient execution on multicore processors. We present a novel software architecture for time-constrained machine vision applications that addresses these issues. The architecture is divided into three layers. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message-passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route messages from publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for machine vision applications. These modules, which include acquisition, visualization, communication, user interface, and data processing, take advantage of the power of well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, the proposed architecture is applied to a real machine vision application: a jam detector for steel pickling lines.
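The messaging layer's pattern can be illustrated with a minimal topic-based publish/subscribe sketch (illustrative only, not the paper's implementation):

```python
# Minimal topic-based publish/subscribe: subscribers register per topic,
# and messages are routed only to subscribers of the matching topic.
from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: Any) -> None:
        for handler in self._subs[topic]:   # only interested subscribers run
            handler(message)

bus = MessageBus()
bus.subscribe("frames", lambda m: print("detector got", m))
bus.publish("frames", {"id": 1, "data": "..."})
bus.publish("stats", {"fps": 25})           # no subscriber: silently dropped
```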
ERIC Educational Resources Information Center
Palmer, Crescentia
A comparison of costs for computer-based searching of Psychological Abstracts and Educational Resources Information Center (ERIC) systems by the New York State Library at Albany was produced by combining data available from search request forms and from bills from the contract subscription service, the State University of New…
NASA Astrophysics Data System (ADS)
Hengl, Tomislav
2016-04-01
Preliminary results are presented for predicting the distribution of organic soils (Histosols) and soil organic carbon stock (in tonnes per ha) using global compilations of soil profiles (about 150,000 points) and covariates at 250 m spatial resolution (about 150 covariates; mainly MODIS seasonal land products, SRTM DEM derivatives, climatic images, and lithological, land cover and landform maps). We focus on a data-driven approach, i.e. machine learning techniques, which often require no knowledge about the distribution of the target variable or about possible relationships. Other advantages of using machine learning are (DOI: 10.1371/journal.pone.0125814): all rules required to produce outputs are formalized; the whole procedure is documented (the statistical model and associated computer script), enabling reproducible research; predicted surfaces can make use of various information sources and can be optimized relative to all available quantitative point and covariate data; there is more flexibility in terms of the spatial extent, resolution and support of requested maps; and automated mapping is more cost-effective: once the system is operational, maintenance and production of updates are an order of magnitude faster and cheaper, so prediction maps can be updated and improved at ever shorter time intervals. Some disadvantages of automated soil mapping based on machine learning are: models are data-driven, so any serious blunders or artifacts in the input data can propagate to errors an order of magnitude larger than in expert-based systems; fitting machine learning models is an order of magnitude more computationally demanding, and the computing effort can be tens of thousands of times higher than if e.g. linear geostatistics is used; and many machine learning models are fairly complex, often abstract, so their interpretation is not trivial and requires special multidimensional/multivariable plotting and data-mining tools. Results of model fitting using the R packages nnet and randomForest and the h2o software (machine learning functions) show that significant models can be fitted for soil classes, bulk density (R-square 0.76), soil organic carbon (R-square 0.62) and coarse fragments (R-square 0.59). Consequently, we were able to estimate the soil organic carbon stock for the majority of the land mask (excluding permanent ice) and to detect patches of landscape containing mainly organic soils (peat and similar). Our results confirm that hotspots of soil organic carbon in the Tropics are the peatlands of Indonesia, northern Peru, the western Amazon and the Congo river basin. The majority of the world's soil organic carbon stock likely lies in the northern latitudes (the tundra and taiga of the north). The distribution of Histosols seems to be controlled mainly by climatic conditions (especially temperature regime and water vapor) and by hydrologic position in the landscape. Predicted distributions of organic soils (probability of occurrence) and total soil organic carbon stock at resolutions of 1 km and 250 m are available via the SoilGrids.org project homepage.
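The model-fitting step can be sketched with scikit-learn standing in for the R packages (an assumption; synthetic points stand in for the soil profiles and the 250 m covariate stack):

```python
# Sketch: fit a random forest to point observations with raster covariates
# and report cross-validated R^2, as in data-driven soil mapping.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1500, 20))          # covariates at profile locations
y = 30 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(size=1500)  # SOC, t/ha

rf = RandomForestRegressor(n_estimators=200, random_state=1)
print("cross-validated R^2:", cross_val_score(rf, X, y, cv=5).mean())
# Predicting over the full covariate grid would then yield the SOC map.
```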
RI: Rheology as a Tool for Understanding the Mechanics of Live Ant Aggregations, Part 1
2016-11-04
A machine was built in order to measure rheological properties of biological fluids. Using this machine, we were able to characterize non-Newtonian fluids such as frog saliva.
Graduate student theses supported by DOE's Environmental Sciences Division
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cushman, Robert M.; Parra, Bobbi M.
1995-07-01
This report provides complete bibliographic citations, abstracts, and keywords for 212 doctoral and master's theses supported fully or partly by the U.S. Department of Energy's Environmental Sciences Division (and its predecessors) in the following areas: Atmospheric Sciences; Marine Transport; Terrestrial Transport; Ecosystems Function and Response; Carbon, Climate, and Vegetation; Information; Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP); Atmospheric Radiation Measurement (ARM); Oceans; National Institute for Global Environmental Change (NIGEC); Unmanned Aerial Vehicles (UAV); Integrated Assessment; Graduate Fellowships for Global Change; and Quantitative Links. Information on the major professor, department, principal investigator, and program area is given for each abstract. Indexes are provided for major professor, university, principal investigator, program area, and keywords. This bibliography is also available in various machine-readable formats (ASCII text file, WordPerfect® files, and PAPYRUS™ files).
Experience with abstract notation one
NASA Technical Reports Server (NTRS)
Harvey, James D.; Weaver, Alfred C.
1990-01-01
The development of computer science has produced a vast number of machine architectures, programming languages, and compiler technologies. The cross product of these three characteristics defines the spectrum of previous and present data representation methodologies. With regard to computer networks, the uniqueness of these methodologies presents an obstacle when disparate host environments are to be interconnected. Interoperability within a heterogeneous network relies upon the establishment of data representation commonality. The International Standards Organization (ISO) is currently developing the abstract syntax notation one standard (ASN.1) and the basic encoding rules standard (BER) that collectively address this problem. When used within the presentation layer of the open systems interconnection reference model, these two standards provide the data representation commonality required to facilitate interoperability. The details of a compiler that was built to automate the use of ASN.1 and BER are described. From this experience, insights into both standards are given and potential problems relating to this development effort are discussed.
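The BER side of the standard is concrete enough for a small sketch: an INTEGER value is encoded as a tag-length-value triple with a two's-complement, big-endian body. Short-form lengths are assumed for brevity; this is an illustration, not the compiler described above:

```python
# Sketch: BER encoding of an ASN.1 INTEGER as tag-length-value.
# Short-form length only (body shorter than 128 octets).
def ber_encode_integer(value: int) -> bytes:
    length = 1
    while True:  # smallest two's-complement, big-endian body that fits
        try:
            body = value.to_bytes(length, "big", signed=True)
            break
        except OverflowError:
            length += 1
    assert length < 128, "long-form length omitted in this sketch"
    return bytes([0x02, length]) + body   # 0x02 = universal tag INTEGER

print(ber_encode_integer(5).hex())     # 020105
print(ber_encode_integer(-129).hex())  # 0202ff7f
```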
Automatic processing of spoken dialogue in the home hemodialysis domain.
Lacson, Ronilda; Barzilay, Regina
2005-01-01
Spoken medical dialogue is a valuable source of information, and it forms a foundation for diagnosis, prevention and therapeutic management. However, understanding even a perfect transcript of spoken dialogue is challenging for humans because of the lack of structure and the verbosity of dialogues. This work presents a first step towards automatic analysis of spoken medical dialogue. The backbone of our approach is an abstraction of a dialogue into a sequence of semantic categories. This abstraction uncovers structure in informal, verbose conversation between a caregiver and a patient, thereby facilitating automatic processing of dialogue content. Our method induces this structure based on a range of linguistic and contextual features that are integrated in a supervised machine-learning framework. Our model has a classification accuracy of 73%, compared to 33% achieved by a majority baseline (p<0.01). This work demonstrates the feasibility of automatically processing spoken medical dialogue.
High Level Analysis, Design and Validation of Distributed Mobile Systems with CoreASM
NASA Astrophysics Data System (ADS)
Farahbod, R.; Glässer, U.; Jackson, P. J.; Vajihollahi, M.
System design is a creative activity calling for abstract models that facilitate reasoning about the key system attributes (desired requirements and resulting properties) so as to ensure these attributes are properly established prior to actually building a system. We explore here the practical side of using the abstract state machine (ASM) formalism in combination with the CoreASM open source tool environment for high-level design and experimental validation of complex distributed systems. Emphasizing the early phases of the design process, a guiding principle is to support freedom of experimentation by minimizing the need for encoding. CoreASM has been developed and tested building on a broad scope of applications, spanning computational criminology, maritime surveillance and situation analysis. We critically reexamine here the CoreASM project in light of three different application scenarios.
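The essential ASM execution semantics — all rules read the same state, and their updates are collected into a consistent update set that is applied atomically — can be sketched in a few lines (illustrative only, not CoreASM itself):

```python
# Minimal abstract state machine step: rules fire in parallel over one
# state; updates are collected, checked for consistency, applied atomically.
def asm_step(state: dict, rules) -> dict:
    updates: dict = {}
    for rule in rules:
        for loc, val in rule(state).items():
            if loc in updates and updates[loc] != val:
                raise RuntimeError(f"inconsistent update set at {loc!r}")
            updates[loc] = val
    new_state = dict(state)
    new_state.update(updates)
    return new_state

# Two toy agents updated in parallel: a counter and a parity tracker.
rules = [
    lambda s: {"count": s["count"] + 1},
    lambda s: {"even": s["count"] % 2 != 0},  # parity of the *next* count
]
state = {"count": 0, "even": True}
for _ in range(3):
    state = asm_step(state, rules)
    print(state)
```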
The evolution and practical application of machine translation system (1)
NASA Astrophysics Data System (ADS)
Tominaga, Isao; Sato, Masayuki
This paper describes the development, practical application and problems of machine translation systems, the evaluation of practical systems, and development trends in machine translation. Most recent systems face the following four problems: 1) the ambiguity of a text; 2) differences in the definition of terminology between languages; 3) the preparation of a large-scale translation dictionary; 4) the development of software for logical inference. Machine translation systems are already in practical use in many industrial fields, but many problems remain unsolved; an ideal system is not expected for another 15 years. The paper also describes seven evaluation items in detail. This English abstract was made by the Mu system.
An examination of data quality on QSAR Modeling in regards ...
The development of QSAR models is critically dependent on the quality of available data. As part of our efforts to develop public platforms to provide access to predictive models, we have attempted to discriminate the influence of the quality versus quantity of data available to develop and validate QSAR models. We have focused our efforts on the widely used EPISuite software that was initially developed over two decades ago and, specifically, on the PHYSPROP dataset used to train the EPISuite prediction models. This presentation will review our approaches to examining key datasets, the delivery of curated data and the development of machine-learning models for thirteen separate property endpoints of interest to environmental science. We will also review how these data will be made freely accessible to the community via a new "chemistry dashboard". This abstract does not reflect U.S. EPA policy. Presentation at UNC-CH.
Lötsch, Jörn; Geisslinger, Gerd; Heinemann, Sarah; Lerch, Florian; Oertel, Bruno G.; Ultsch, Alfred
2018-01-01
Abstract The comprehensive assessment of pain-related human phenotypes requires combinations of nociceptive measures that produce complex high-dimensional data, posing challenges to bioinformatic analysis. In this study, we assessed established experimental models of heat hyperalgesia of the skin, consisting of local ultraviolet-B (UV-B) irradiation or capsaicin application, in 82 healthy subjects using a variety of noxious stimuli. We extended the original heat stimulation by applying cold and mechanical stimuli and assessing the hypersensitization effects with a clinically established quantitative sensory testing (QST) battery (German Research Network on Neuropathic Pain). This study provided a 246 × 10-sized data matrix (82 subjects assessed at baseline, following UV-B application, and following capsaicin application) with respect to 10 QST parameters, which we analyzed using machine-learning techniques. We observed statistically significant effects of the hypersensitization treatments in 9 different QST parameters. Supervised machine-learned analysis implemented as random forests followed by ABC analysis pointed to heat pain thresholds as the most relevantly affected QST parameter. However, decision tree analysis indicated that UV-B additionally modulated sensitivity to cold. Unsupervised machine-learning techniques, implemented as emergent self-organizing maps, hinted at subgroups responding to topical application of capsaicin. The distinction among subgroups was based on sensitivity to pressure pain, which could be attributed to sex differences, with women being more sensitive than men. Thus, while UV-B and capsaicin share a major component of heat pain sensitization, they differ in their effects on QST parameter patterns in healthy subjects, suggesting a lack of redundancy between these models. PMID:28700537
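The feature-ranking step — random forests followed by importance analysis — might look as follows in outline. The data matrix is a synthetic stand-in for the 246 × 10 QST matrix, and scikit-learn is assumed:

```python
# Sketch: random forest over QST-like parameters, importances sorted to
# find the most relevantly affected parameter. Synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n, params = 246, [f"QST_{i}" for i in range(10)]
X = rng.normal(size=(n, 10))
y = (X[:, 3] + 0.3 * X[:, 7] + rng.normal(scale=0.5, size=n) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=2).fit(X, y)
ranking = sorted(zip(rf.feature_importances_, params), reverse=True)
for imp, name in ranking[:3]:
    print(f"{name}: importance {imp:.3f}")   # QST_3 should dominate
```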
A Critical Review of Options for Tool and Workpiece Sensing
1989-06-02
Keywords: detectors; sensor characteristics; control equipment; process control. Abstract fragment: the review will provide conceptual designs and recommend a system for tool and workpiece sensing.
Kawano, Tomonori; Bouteau, François; Mancuso, Stefano
2012-11-01
The automata theory is the mathematical study of abstract machines commonly studied in the theoretical computer science and highly interdisciplinary fields that combine the natural sciences and the theoretical computer science. In the present review article, as the chemical and biological basis for natural computing or informatics, some plants, plant cells or plant-derived molecules involved in signaling are listed and classified as natural sequential machines (namely, the Mealy machines or Moore machines) or finite state automata. By defining the actions (states and transition functions) of these natural automata, the similarity between the computational data processing and plant decision-making processes became obvious. Finally, their putative roles as the parts for plant-based computing or robotic systems are discussed.
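A Mealy machine of the kind invoked here is fully specified by a transition function and an output function over (state, input) pairs. A toy sketch, with invented plant-signaling states and symbols:

```python
# Toy Mealy machine: the output depends on current state *and* input,
# loosely styled after a stimulus-response signaling element.
transition = {("resting", "Ca2+"): "excited", ("excited", "Ca2+"): "excited",
              ("excited", "rest"): "resting", ("resting", "rest"): "resting"}
output = {("resting", "Ca2+"): "depolarize", ("excited", "Ca2+"): "sustain",
          ("excited", "rest"): "repolarize", ("resting", "rest"): "idle"}

def run(inputs, state="resting"):
    for sym in inputs:
        print(state, "--", sym, "/", output[(state, sym)])
        state = transition[(state, sym)]
    return state

run(["Ca2+", "Ca2+", "rest"])
```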
ERIC Educational Resources Information Center
Chowdhury, Gobinda G.
2003-01-01
Discusses issues related to natural language processing, including theoretical developments; natural language understanding; tools and techniques; natural language text processing systems; abstracting; information extraction; information retrieval; interfaces; software; Internet, Web, and digital library applications; machine translation for…
A grounded theory of abstraction in artificial intelligence.
Zucker, Jean-Daniel
2003-07-29
In artificial intelligence, abstraction is commonly used to account for the use of various levels of details in a given representation language or the ability to change from one level to another while preserving useful properties. Abstraction has been mainly studied in problem solving, theorem proving, knowledge representation (in particular for spatial and temporal reasoning) and machine learning. In such contexts, abstraction is defined as a mapping between formalisms that reduces the computational complexity of the task at stake. By analysing the notion of abstraction from an information quantity point of view, we pinpoint the differences and the complementary role of reformulation and abstraction in any representation change. We contribute to extending the existing semantic theories of abstraction to be grounded on perception, where the notion of information quantity is easier to characterize formally. In the author's view, abstraction is best represented using abstraction operators, as they provide semantics for classifying different abstractions and support the automation of representation changes. The usefulness of a grounded theory of abstraction in the cartography domain is illustrated. Finally, the importance of explicitly representing abstraction for designing more autonomous and adaptive systems is discussed.
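As a toy instance of an abstraction operator in this sense — a mapping that discards detail while preserving a useful property and reducing computational cost — consider the classic sign abstraction of integers (an illustration of the general idea, not the author's operators):

```python
# Abstraction as a property-preserving mapping: integers abstract to their
# signs, which still answers sign questions about products, at lower cost.
def alpha(n: int) -> int:          # abstraction operator: int -> {-1, 0, 1}
    return (n > 0) - (n < 0)

def abstract_mul(a: int, b: int) -> int:
    return alpha(a) * alpha(b)     # computed entirely at the abstract level

x, y = 123456789, -987654321
assert abstract_mul(x, y) == alpha(x * y)   # soundness of the abstraction
print("sign of product:", abstract_mul(x, y))
```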
Alejo, Luz; Atkinson, John; Guzmán-Fierro, Víctor; Roeckel, Marlene
2018-05-16
Computational self-adapting methods (Support Vector Machines, SVM) are compared with an analytical method in effluent composition prediction of a two-stage anaerobic digestion (AD) process. Experimental data for the AD of poultry manure were used. The analytical method considers the protein as the only source of ammonia production in AD after degradation. Total ammonia nitrogen (TAN), total solids (TS), chemical oxygen demand (COD), and total volatile solids (TVS) were measured in the influent and effluent of the process. The TAN concentration in the effluent was predicted, this being the most inhibiting and polluting compound in AD. Despite the limited data available, the SVM-based model outperformed the analytical method for the TAN prediction, achieving a relative average error of 15.2% against 43% for the analytical method. Moreover, SVM showed higher prediction accuracy in comparison with Artificial Neural Networks. This result reveals the future promise of SVM for prediction in non-linear and dynamic AD processes.
Khara, Dinesh C; Berger, Yaron; Ouldridge, Thomas E
2018-01-01
Abstract We present a detailed coarse-grained computer simulation and single molecule fluorescence study of the walking dynamics and mechanism of a DNA bipedal motor striding on a DNA origami. In particular, we study the dependency of the walking efficiency and stepping kinetics on step size. The simulations accurately capture and explain three different experimental observations. These include a description of the maximum possible step size, a decrease in the walking efficiency over short distances and a dependency of the efficiency on the walking direction with respect to the origami track. The former two observations were not expected and are non-trivial. Based on this study, we suggest three design modifications to improve future DNA walkers. Our study demonstrates the ability of the oxDNA model to resolve the dynamics of complex DNA machines, and its usefulness as an engineering tool for the design of DNA machines that operate in the three spatial dimensions. PMID:29294083
Proceedings of the First NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)
2009-01-01
Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.
Computational approaches for predicting biomedical research collaborations.
Zhang, Qing; Yu, Hong
2014-01-01
Biomedical research is increasingly collaborative, and successful collaborations often produce high impact work. Computational approaches can be developed for automatically predicting biomedical research collaborations. Previous works of collaboration prediction mainly explored the topological structures of research collaboration networks, leaving out rich semantic information from the publications themselves. In this paper, we propose supervised machine learning approaches to predict research collaborations in the biomedical field. We explored both the semantic features extracted from author research interest profile and the author network topological features. We found that the most informative semantic features for author collaborations are related to research interest, including similarity of out-citing citations, similarity of abstracts. Of the four supervised machine learning models (naïve Bayes, naïve Bayes multinomial, SVMs, and logistic regression), the best performing model is logistic regression with an ROC ranging from 0.766 to 0.980 on different datasets. To our knowledge we are the first to study in depth how research interest and productivities can be used for collaboration prediction. Our approach is computationally efficient, scalable and yet simple to implement. The datasets of this study are available at https://github.com/qingzhanggithub/medline-collaboration-datasets.
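The prediction setup — semantic plus topological features feeding a logistic regression evaluated by ROC — can be sketched as follows, with synthetic stand-ins for the features and labels:

```python
# Sketch: link prediction with logistic regression over one semantic
# feature (abstract similarity) and one topological feature.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 1000
abstract_sim = rng.uniform(size=n)        # similarity of author abstracts
common_coauthors = rng.poisson(1.0, n)    # network topology feature
X = np.column_stack([abstract_sim, common_coauthors])
y = (0.8 * abstract_sim + 0.3 * common_coauthors
     + rng.normal(scale=0.3, size=n) > 0.9).astype(int)

auc = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
print("ROC AUC:", auc.mean())
```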
St-Maurice, Justin D; Burns, Catherine M
2017-07-28
Health care is a complex sociotechnical system. Patient treatment is evolving and needs to incorporate the use of technology and new patient-centered treatment paradigms. Cognitive work analysis (CWA) is an effective framework for understanding complex systems, and work domain analysis (WDA) is useful for understanding complex ecologies. Although previous applications of CWA have described patient treatment, due to their scope of work patients were previously characterized as biomedical machines, rather than patient actors involved in their own care. An abstraction hierarchy that characterizes patients as beings with complex social values and priorities is needed. This can help better understand treatment in a modern approach to care. The purpose of this study was to perform a WDA to represent the treatment of patients with medical records. The methods to develop this model included the analysis of written texts and collaboration with subject matter experts. Our WDA represents the ecology through its functional purposes, abstract functions, generalized functions, physical functions, and physical forms. Compared with other work domain models, this model is able to articulate the nuanced balance between medical treatment, patient education, and limited health care resources. Concepts in the analysis were similar to the modeling choices of other WDAs but combined them into a comprehensive, systematic, and contextual overview. The model is helpful for understanding user competencies and needs. Future models could be developed to model the patient's domain and enable the exploration of the shared decision-making (SDM) paradigm. Our work domain model links treatment goals, decision-making constraints, and task workflows. This model can be used by system developers who would like to use ecological interface design (EID) to improve systems. Our hierarchy is the first in a future set that could explore new treatment paradigms. Future hierarchies could model the patient as a controller and could be useful for mobile app development. ©Justin D St-Maurice, Catherine M Burns. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 28.07.2017.
NASA Astrophysics Data System (ADS)
Li, Hui; Hong, Lu-Yao; Zhou, Qing; Yu, Hai-Jie
2015-08-01
The business failure of numerous companies results in financial crises. The high social costs associated with such crises have led people to search for effective tools for business risk prediction, among which the support vector machine is very effective. Several modelling means, including single-technique modelling, hybrid modelling, and ensemble modelling, have been suggested for forecasting business risk with support vector machines. However, the existing literature seldom focuses on a general modelling frame for business risk prediction, and seldom investigates performance differences among the different modelling means. We reviewed research on forecasting business risk with support vector machines, proposed the general assisted prediction modelling frame with hybridisation and ensemble (APMF-WHAE), and finally investigated the use of principal components analysis, support vector machines, random sampling, and group decision under the general frame in forecasting business risk. Under the APMF-WHAE frame with the support vector machine as the base predictive model, four specific predictive models were produced: a pure support vector machine; a hybrid support vector machine involving principal components analysis; a support vector machine ensemble involving random sampling and group decision; and an ensemble of hybrid support vector machines using group decision to integrate various hybrid support vector machines built on variables produced from principal components analysis and samples from random sampling. The experimental results indicate that the hybrid support vector machine and the ensemble of hybrid support vector machines produced dominating performance over the pure support vector machine and the support vector machine ensemble.
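Two of the four variants can be sketched with scikit-learn (an assumption; synthetic data stand in for the financial ratios): the hybrid as a PCA+SVM pipeline, and the ensemble of hybrids as bagged copies of that pipeline combined by majority vote ("group decision"):

```python
# Sketch: hybrid PCA+SVM model, and an ensemble of such hybrids over
# random subsamples combined by majority vote.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 30))                      # financial ratios
y = (X[:, :5].sum(axis=1) > 0).astype(int)          # failed vs healthy

hybrid = make_pipeline(PCA(n_components=10), SVC())
ensemble = BaggingClassifier(hybrid, n_estimators=25,
                             max_samples=0.7, random_state=4)
for name, model in [("hybrid SVM", hybrid), ("ensemble of hybrids", ensemble)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```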
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-24
...: http://www.epa.gov/dockets . Abstract: The sources subject to this rule (i.e., extraction plants, ceramic plants, foundries, incinerators, propellant plants, and machine shops which process beryllium and...
Active semi-supervised learning method with hybrid deep belief networks.
Zhou, Shusen; Chen, Qingcai; Wang, Xiaolong
2014-01-01
In this paper, we develop a novel semi-supervised learning algorithm called active hybrid deep belief networks (AHD) to address the semi-supervised sentiment classification problem with deep learning. First, we construct the first several hidden layers using restricted Boltzmann machines (RBM), which can quickly reduce the dimension and abstract the information of the reviews. Second, we construct the subsequent hidden layers using convolutional restricted Boltzmann machines (CRBM), which can abstract the information of reviews effectively. Third, the constructed deep architecture is fine-tuned by gradient-descent-based supervised learning with an exponential loss function. Finally, an active learning method is combined with the proposed deep architecture. We performed several experiments on five sentiment classification datasets and show that AHD is competitive with previous semi-supervised learning algorithms. Experiments were also conducted to verify the effectiveness of the proposed method with different numbers of labeled and unlabeled reviews.
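The layerwise idea — an RBM learns an abstract representation on top of which a supervised layer is trained — can be sketched with scikit-learn's BernoulliRBM. The CRBM layers, exponential-loss fine-tuning, and active learning of the full AHD method are not reproduced here:

```python
# Sketch: RBM feature extraction feeding a supervised classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(5)
X = (rng.uniform(size=(300, 64)) > 0.5).astype(float)   # binary features
y = (X[:, :8].sum(axis=1) > 4).astype(int)              # toy sentiment label

model = Pipeline([
    ("rbm", BernoulliRBM(n_components=16, learning_rate=0.05,
                         n_iter=20, random_state=5)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("train accuracy:", model.score(X, y))
```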
Toward Millions of File System IOPS on Low-Cost, Commodity Hardware
Zheng, Da; Burns, Randal; Szalay, Alexander S.
2013-01-01
We describe a storage system that removes I/O bottlenecks to achieve more than one million IOPS based on a user-space file abstraction for arrays of commodity SSDs. The file abstraction refactors I/O scheduling and placement for extreme parallelism and non-uniform memory and I/O. The system includes a set-associative, parallel page cache in the user space. We redesign page caching to eliminate CPU overhead and lock-contention in non-uniform memory architecture machines. We evaluate our design on a 32 core NUMA machine with four, eight-core processors. Experiments show that our design delivers 1.23 million 512-byte read IOPS. The page cache realizes the scalable IOPS of Linux asynchronous I/O (AIO) and increases user-perceived I/O performance linearly with cache hit rates. The parallel, set-associative cache matches the cache hit rates of the global Linux page cache under real workloads. PMID:24402052
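The set-associative design can be sketched in miniature: pages hash to a small set, and lookup and LRU eviction touch only that set, which is what keeps lock scope local. A simplified single-threaded sketch, not the paper's implementation:

```python
# Minimal set-associative page cache: eviction is LRU *within one set*,
# so lookup and eviction never touch the other sets.
from collections import OrderedDict

class SetAssociativeCache:
    def __init__(self, n_sets: int = 64, ways: int = 8) -> None:
        self.sets = [OrderedDict() for _ in range(n_sets)]
        self.ways = ways

    def get(self, page: int):
        s = self.sets[hash(page) % len(self.sets)]
        if page in s:
            s.move_to_end(page)          # refresh LRU position
            return s[page]
        return None                      # miss: caller reads from SSD

    def put(self, page: int, data) -> None:
        s = self.sets[hash(page) % len(self.sets)]
        s[page] = data
        s.move_to_end(page)
        if len(s) > self.ways:
            s.popitem(last=False)        # evict LRU page of this set only

cache = SetAssociativeCache()
cache.put(42, b"512 bytes ...")
print(cache.get(42) is not None, cache.get(7) is None)
```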
Towards a generalized energy prediction model for machine tools
Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H.; Dornfeld, David A.; Helu, Moneer; Rachuri, Sudarsan
2017-01-01
Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based, energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process. PMID:28652687
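The modelling step can be sketched with scikit-learn's GP regression (an assumption; synthetic stand-ins for the machining data), with the predictive standard deviation providing the uncertainty interval:

```python
# Sketch: Gaussian Process regression over process parameters, with
# predictive standard deviations as uncertainty intervals.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)
X = rng.uniform(0, 1, size=(120, 3))     # feed rate, spindle speed, depth
y = 50 + 20 * X[:, 0] + 10 * X[:, 1] * X[:, 2] + rng.normal(scale=1.0, size=120)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)
mean, std = gp.predict(X[:3], return_std=True)
for m, s in zip(mean, std):
    print(f"predicted energy: {m:.1f} ± {2 * s:.1f}")   # ~95% interval
```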
The impact of machine learning techniques in the study of bipolar disorder: A systematic review.
Librenza-Garcia, Diego; Kotzian, Bruno Jaskulski; Yang, Jessica; Mwangi, Benson; Cao, Bo; Pereira Lima, Luiza Nunes; Bermudez, Mariane Bagatin; Boeira, Manuela Vianna; Kapczinski, Flávio; Passos, Ives Cavalcante
2017-09-01
Machine learning techniques provide new methods to predict diagnosis and clinical outcomes at an individual level. We aim to review the existing literature on the use of machine learning techniques in the assessment of subjects with bipolar disorder. We systematically searched PubMed, Embase and Web of Science for articles published in any language up to January 2017. We found 757 abstracts and included 51 studies in our review. Most of the included studies used multiple levels of biological data to distinguish the diagnosis of bipolar disorder from other psychiatric disorders or healthy controls. We also found studies that assessed the prediction of clinical outcomes and studies using unsupervised machine learning to build more consistent clinical phenotypes of bipolar disorder. We concluded that given the clinical heterogeneity of samples of patients with BD, machine learning techniques may provide clinicians and researchers with important insights in fields such as diagnosis, personalized treatment and prognosis orientation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wen, Shameng; Meng, Qingkun; Feng, Chao; Tang, Chaojing
2017-01-01
Formal techniques have been devoted to analyzing whether network protocol specifications violate security policies; however, these methods cannot detect vulnerabilities in the implementations of the network protocols themselves. Symbolic execution can be used to analyze the paths of the network protocol implementations, but for stateful network protocols, it is difficult to reach the deep states of the protocol. This paper proposes a novel model-guided approach to detect vulnerabilities in network protocol implementations. Our method first abstracts a finite state machine (FSM) model, then utilizes the model to guide the symbolic execution. This approach achieves high coverage of both the code and the protocol states. The proposed method is implemented and applied to test numerous real-world network protocol implementations. The experimental results indicate that the proposed method is more effective than traditional fuzzing methods such as SPIKE at detecting vulnerabilities in the deep states of network protocol implementations.
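The model-guided step can be sketched as a search over the abstracted FSM for the message sequence that reaches a deep protocol state, which then seeds symbolic execution or fuzzing there. The protocol and its states below are invented:

```python
# Sketch: breadth-first search over an abstracted protocol FSM to find the
# message sequence that drives the implementation into a deep state.
from collections import deque

fsm = {  # hypothetical abstraction: state -> {message: next_state}
    "INIT":     {"HELLO": "GREETED"},
    "GREETED":  {"AUTH": "AUTHED", "QUIT": "INIT"},
    "AUTHED":   {"DATA": "TRANSFER"},
    "TRANSFER": {},
}

def path_to(target: str, start: str = "INIT"):
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == target:
            return path
        for msg, nxt in fsm[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [msg]))
    return None

print(path_to("TRANSFER"))   # ['HELLO', 'AUTH', 'DATA']
```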
USSR Space Life Sciences Digest. Index to issues 15-20
NASA Technical Reports Server (NTRS)
Hooke, Lydia Razran (Editor)
1989-01-01
This bibliography provides an index to issues 15 through 20 of the USSR Space Life Sciences Digest. There are two sections. The first section lists bibliographic citations of abstracts in these issues, grouped by topic area categories. The second section provides a key word index for the same abstracts. The topic categories include exobiology, space medicine and psychology, human performance and man-machine systems, various life/body systems, human behavior and adaptation, biospherics, and others.
USSR Space Life Sciences Digest. Index to issues 21-25
NASA Technical Reports Server (NTRS)
Hooke, Lydia Razran (Editor)
1990-01-01
This bibliography provides an index to issues 21 through 25 of the USSR Space Life Sciences Digest. There are two sections. The first section lists bibliographic citations of abstracts in these issues, grouped by topic area categories. The second section provides a key word index for the same abstracts. The topic categories include exobiology, space medicine and psychology, human performance and man-machine systems, various life/body systems, human behavior and adaptation, biospherics, and others.
USSR Space Life Sciences Digest. Index to issues 26-29
NASA Technical Reports Server (NTRS)
Stone, Lydia Razran (Editor)
1991-01-01
This bibliography provides an index to issues 26 through 29 of the USSR Space Life Sciences Digest. There are two sections. The first section lists bibliographic citations of abstracts in these issues, grouped by topic area categories. The second section provides a key word index for the same abstracts. The topic categories include exobiology, space medicine and psychology, human performance and man-machine systems, various life/body systems, human behavior and adaptation, biospherics, and others.
Formal Validation of Fault Management Design Solutions
NASA Technical Reports Server (NTRS)
Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John
2013-01-01
The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.
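The verification idea — exhaustively exploring a small fault-protection state machine and checking a safety invariant over every transition — can be sketched as follows (an invented mode manager, not the SMAP design):

```python
# Sketch: exhaustive check of a safety invariant over a toy fault
# protection state machine.
from itertools import product

states = {"NOMINAL", "SAFE_MODE", "RECOVERY"}
events = {"fault", "clear", "recover"}

def step(state: str, event: str) -> str:
    table = {("NOMINAL", "fault"): "SAFE_MODE",
             ("SAFE_MODE", "recover"): "RECOVERY",
             ("RECOVERY", "clear"): "NOMINAL",
             ("RECOVERY", "fault"): "SAFE_MODE"}
    return table.get((state, event), state)   # unhandled events: stay put

# Invariant: a fault event never leaves the system in NOMINAL.
for state, event in product(states, events):
    nxt = step(state, event)
    assert not (event == "fault" and nxt == "NOMINAL"), (state, event)
print("invariant holds over all", len(states) * len(events), "transitions")
```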
Absorption of language concepts in the machine mind
NASA Astrophysics Data System (ADS)
Kollár, Ján
2016-06-01
In our approach, the machine mind is an applicative dynamic system represented by its algorithmically evolvable internal language. In other words, the mind and the language of mind are synonyms. Starting from Shaumyan's semiotic theory of languages, we present the representation of language concepts in the machine mind as a result of our experiment, to show the non-redundancy of the language of mind. To provide a useful restriction for further research, we also introduce the hypothesis of semantic saturation in Computer-Computer communication, which indicates that a set of machines is not self-evolvable. The goal of our research is to increase the abstraction of Human-Computer and Computer-Computer communication. If we want humans and machines to communicate as a parent does with a child, using different symbols and media, we must find a language of mind commonly usable by both machines and humans. In our opinion, there exists a kind of calm language of thinking, which we try to propose for machines in this paper. We separate the layers of a machine mind, present the structure of the evolved mind and discuss selected properties. We concentrate on the representation of symbolized concepts in the mind, which are languages, not just grammars, since they have meaning.
Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Sarukkai, Sekhar R.; Mehra, Pankaj; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
This paper presents a methodology for debugging the performance of message-passing programs on both tightly coupled and loosely coupled distributed-memory machines. The AIMS (Automated Instrumentation and Monitoring System) toolkit, a suite of software tools for measurement and analysis of performance, is introduced and its application illustrated using several benchmark programs drawn from the field of computational fluid dynamics. AIMS includes (i) Xinstrument, a powerful source-code instrumentor which supports both Fortran77 and C as well as a number of different message-passing libraries, including Intel's NX, Thinking Machines' CMMD, and PVM; (ii) Monitor, a library of timestamping and trace-collection routines that run on supercomputers (such as Intel's iPSC/860, Delta, and Paragon, and Thinking Machines' CM5) as well as on networks of workstations (including Convex Cluster and SparcStations connected by a LAN); (iii) Visualization Kernel, a trace-animation facility that supports source-code clickback, simultaneous visualization of computation and communication patterns, as well as analysis of data movements; (iv) Statistics Kernel, an advanced profiling facility that associates a variety of performance data with various syntactic components of a parallel program; (v) Index Kernel, a diagnostic tool that helps pinpoint performance bottlenecks through the use of abstract indices; (vi) Modeling Kernel, a facility for automated modeling of message-passing programs that supports both simulation-based and analytical approaches to performance prediction and scalability analysis; (vii) Intrusion Compensator, a utility for recovering true performance from observed performance by removing the overheads of monitoring and their effects on the communication pattern of the program; and (viii) Compatibility Tools, which convert AIMS-generated traces into formats used by other performance-visualization tools, such as ParaGraph, Pablo, and certain AVS/Explorer modules.
The universal numbers. From Biology to Physics.
Marchal, Bruno
2015-12-01
I will explain how the mathematicians have discovered the universal numbers, or abstract computer, and I will explain some abstract biology, mainly self-reproduction and embryogenesis. Then I will explain how and why, and in which sense, some of those numbers can dream and why their dreams can glue together and must, when we assume computationalism in cognitive science, generate a phenomenological physics, as part of a larger phenomenological theology (in the sense of the greek theologians). The title should have been "From Biology to Physics, through the Phenomenological Theology of the Universal Numbers", if that was not too long for a title. The theology will consist mainly, like in some (neo)platonist greek-indian-chinese tradition, in the truth about numbers' relative relations, with each others, and with themselves. The main difference between Aristotle and Plato is that Aristotle (especially in its common and modern christian interpretation) makes reality WYSIWYG (What you see is what you get: reality is what we observe, measure, i.e. the natural material physical science) where for Plato and the (rational) mystics, what we see might be only the shadow or the border of something else, which might be non physical (mathematical, arithmetical, theological, …). Since Gödel, we know that Truth, even just the Arithmetical Truth, is vastly bigger than what the machine can rationally justify. Yet, with Church's thesis, and the mechanizability of the diagonalizations involved, machines can apprehend this and can justify their limitations, and get some sense of what might be true beyond what they can prove or justify rationally. Indeed, the incompleteness phenomenon introduces a gap between what is provable by some machine and what is true about that machine, and, as Gödel saw already in 1931, the existence of that gap is accessible to the machine itself, once it is has enough provability abilities. Incompleteness separates truth and provable, and machines can justify this in some way. More importantly incompleteness entails the distinction between many intensional variants of provability. For example, the absence of reflexion (beweisbar(⌜A⌝) → A with beweisbar being Gödel's provability predicate) makes it impossible for the machine's provability to obey the axioms usually taken for a theory of knowledge. The most important consequence of this in the machine's possible phenomenology is that it provides sense, indeed arithmetical sense, to intensional variants of provability, like the logics of provability-and-truth, which at the propositional level can be mirrored by the logic of provable-and-true statements (beweisbar(⌜A⌝) ∧ A). It is incompleteness which makes this logic different from the logic of provability. Other variants, like provable-and-consistent, or provable-and-consistent-and-true, appears in the same way, and inherits the incompleteness splitting, unlike beweisbar(⌜A⌝) ∧ A. I will recall thought experience which motivates the use of those intensional variants to associate a knower and an observer in some canonical way to the machines or the numbers. We will in this way get an abstract and phenomenological theology of a machine M through the true logics of their true self-referential abilities (even if not provable, or knowable, by the machine itself), in those different intensional senses. 
Cognitive science and theoretical physics motivate the study of those logics with the arithmetical interpretation of the atomic sentences restricted to the "verifiable" (Σ1) sentences, which is the way to study the theology of the computationalist machine. This provides a logic of the observable, as expected by the Universal Dovetailer Argument, which will be recalled briefly, and which can lead to a comparison of the machine's logic of physics with the empirical logic of the physicists (like quantum logic). This leads also to a series of open problems. Copyright © 2015 Elsevier Ltd. All rights reserved.
Developing a PLC-friendly state machine model: lessons learned
NASA Astrophysics Data System (ADS)
Pessemier, Wim; Deconinck, Geert; Raskin, Gert; Saey, Philippe; Van Winckel, Hans
2014-07-01
Modern Programmable Logic Controllers (PLCs) have become an attractive platform for controlling real-time aspects of astronomical telescopes and instruments due to their increased versatility, performance and standardization. Likewise, vendor-neutral middleware technologies such as OPC Unified Architecture (OPC UA) have recently demonstrated that they can greatly facilitate the integration of these industrial platforms into the overall control system. Many practical questions arise, however, when building multi-tiered control systems that consist of PLCs for low level control, and conventional software and platforms for higher level control. How should the PLC software be structured, so that it can rely on well-known programming paradigms on the one hand, and be mapped to a well-organized OPC UA interface on the other hand? Which programming languages of the IEC 61131-3 standard closely match the problem domains of the abstraction levels within this structure? How can the recent additions to the standard (such as the support for namespaces and object-oriented extensions) facilitate a model based development approach? To what degree can our applications already take advantage of the more advanced parts of the OPC UA standard, such as the high expressiveness of the semantic modeling language that it defines, or the support for events, aggregation of data, automatic discovery, ... ? What are the timing and concurrency problems to be expected for the higher level tiers of the control system due to the cyclic execution of control and communication tasks by the PLCs? We try to answer these questions by demonstrating a semantic state machine model that can readily be implemented using IEC 61131 and OPC UA: one that does not aim to capture all possible states of a system, but rather attempts to organize the coarse-grained structure and behaviour of a system. In this paper we focus on the intricacies of this seemingly simple task, and on the lessons that we've learned during the development process of such a "PLC-friendly" state machine model.
Reconceptualizing the classification of PNAS articles
Airoldi, Edoardo M.; Erosheva, Elena A.; Fienberg, Stephen E.; Joutard, Cyrille; Love, Tanzy; Shringarpure, Suyash
2010-01-01
PNAS article classification is rooted in long-standing disciplinary divisions that do not necessarily reflect the structure of modern scientific research. We reevaluate that structure using latent pattern models from statistical machine learning, also known as mixed-membership models, that identify semantic structure in co-occurrence of words in the abstracts and references. Our findings suggest that the latent dimensionality of patterns underlying PNAS research articles in the Biological Sciences is only slightly larger than the number of categories currently in use, but it differs substantially in the content of the categories. Further, the number of articles that are listed under multiple categories is only a small fraction of what it should be. These findings together with the sensitivity analyses suggest ways to reconceptualize the organization of papers published in PNAS. PMID:21078953
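The mixed-membership idea can be sketched with LDA, where each abstract receives a distribution over latent topics rather than a single category (scikit-learn's LDA assumed as a stand-in for the paper's models; toy corpus):

```python
# Sketch: mixed membership via LDA — each document gets a *distribution*
# over latent topics, not one category.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

abstracts = [
    "protein folding energy landscape simulation",
    "gene expression regulation in yeast cells",
    "simulation of molecular dynamics of proteins",
    "transcription factors regulate gene expression",
]
X = CountVectorizer().fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
memberships = lda.transform(X)          # rows sum to 1: mixed membership
for doc, theta in zip(abstracts, memberships):
    print(f"{theta.round(2)}  {doc[:40]}")
```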
USSR Space Life Sciences Digest, issue 8
NASA Technical Reports Server (NTRS)
Hooke, L. R. (Editor); Teeter, R. (Editor)
1985-01-01
This is the eighth issue of NASA's USSR Space Life Sciences Digest. It contains abstracts of 48 papers recently published in Russian language periodicals and bound collections and of 10 new Soviet monographs. Selected abstracts are illustrated with figures and tables. Additional features include reviews of two Russian books on radiobiology and a description of the latest meeting of an international working group on remote sensing of the Earth. Information about English translations of Soviet materials available to readers is provided. The topics covered in this issue have been identified as relevant to 33 areas of aerospace medicine and space biology. These areas are: adaptation, biological rhythms, biospherics, body fluids, botany, cardiovascular and respiratory systems, cosmonaut training, cytology, endocrinology, enzymology, equipment and instrumentation, exobiology, gastrointestinal system, genetics, group dynamics, habitability and environment effects, hematology, human performance, immunology, life support systems, man-machine systems, mathematical modeling, metabolism, microbiology, musculoskeletal system, neurophysiology, nutrition, operational medicine, personnel selection, psychology, reproductive biology, and space biology and medicine.
78 FR 20101 - Access to Confidential Business Information by Chemical Abstract Services
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-03
... identification, pass through a metal detector, and sign the EPA visitor log. All visitor bags are processed through an X-ray machine and subject to search. Visitors will be provided an EPA/DC badge that must be...
Text Mining for Protein Docking
Badal, Varsha D.; Kundrotas, Petras J.; Vakser, Ilya A.
2015-01-01
The rapidly growing amount of publicly available information from biomedical research is readily accessible on the Internet, providing a powerful resource for predictive biomolecular modeling. The accumulated data on experimentally determined structures transformed structure prediction of proteins and protein complexes. Instead of exploring the enormous search space, predictive tools can simply proceed to the solution based on similarity to existing, previously determined structures. A similar major paradigm shift is emerging due to the rapidly expanding amount of information, other than experimentally determined structures, that can still be used as constraints in biomolecular structure prediction. Automated text mining has been widely used in recreating protein interaction networks, as well as in detecting small-ligand binding sites on protein structures. Combining and expanding these two well-developed areas of research, we applied text mining to the structural modeling of protein-protein complexes (protein docking). Protein docking can be significantly improved when constraints on the docking mode are available. We developed a procedure that retrieves published abstracts on a specific protein-protein interaction and extracts information relevant to docking. The procedure was assessed on protein complexes from Dockground (http://dockground.compbio.ku.edu). The results show that correct information on binding residues can be extracted for about half of the complexes. The amount of irrelevant information was reduced by conceptual analysis of a subset of the retrieved abstracts, based on the bag-of-words (features) approach. Support Vector Machine models were trained and validated on the subset. The remaining abstracts were filtered by the best-performing models, which decreased the irrelevant information for ~25% of the complexes in the dataset. The extracted constraints were incorporated in the docking protocol and tested on the Dockground unbound benchmark set, significantly increasing the docking success rate. PMID:26650466
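A minimal sketch of the bag-of-words SVM filtering step, with an invented toy corpus standing in for the retrieved abstracts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Toy labeled abstracts: 1 = relevant to the docking interface, 0 = not.
texts = [
    "mutation of interface residue disrupts complex formation",
    "binding site residues identified by alanine scanning",
    "gene expression profiling of tissue samples",
    "phylogenetic analysis of the protein family",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["interface residues required for binding"]))
```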
Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling
Cuperlovic-Culf, Miroslava
2018-01-01
Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies. PMID:29324649
MetaJC++: A flexible and automatic program transformation technique using meta framework
NASA Astrophysics Data System (ADS)
Beevi, Nadera S.; Reghu, M.; Chitraprasad, D.; Vinodchandra, S. S.
2014-09-01
A compiler is a tool that translates abstract code containing natural-language terms into machine code. Meta compilers are available that compile more than one language. We have developed a meta framework that intends to combine two dissimilar programming languages, namely C++ and Java, to provide a flexible object-oriented programming platform for the user. Suitable constructs from both languages have been combined, thereby forming a new and stronger meta-language. The framework is developed using the compiler-writing tools Flex and Yacc to design the front end of the compiler. The lexer and parser have been developed to accommodate the complete keyword set and syntax set of both languages. Two intermediate representations are used in the translation of the source program to machine code. An Abstract Syntax Tree is used as a high-level intermediate representation that preserves the hierarchical properties of the source program. A new machine-independent stack-based byte-code has also been devised to act as a low-level intermediate representation. The byte-code is organised into an output class file that can be used to produce an interpreted output. The results, especially in the sphere of providing C++ concepts in Java, give insight into the potentially strong features of the resultant meta-language.
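The two intermediate representations can be illustrated with a minimal Python sketch: an abstract syntax tree lowered to a stack-based byte-code and then interpreted. The node types and op-codes here are invented for the example, not the MetaJC++ design.

```python
from dataclasses import dataclass

@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: object
    right: object

def compile_ast(node, code):
    """Post-order walk: operands first, then the operator (stack style)."""
    if isinstance(node, Num):
        code.append(("PUSH", node.value))
    else:
        compile_ast(node.left, code)
        compile_ast(node.right, code)
        code.append(("ADD", None))
    return code

def run(code):
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

bytecode = compile_ast(Add(Num(2), Add(Num(3), Num(4))), [])
assert run(bytecode) == 9
```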
NASA Astrophysics Data System (ADS)
Kant Garg, Girish; Garg, Suman; Sangwan, K. S.
2018-04-01
The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for reducing environmental emissions. In this work an empirical model is developed to minimize power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 Aluminum with coated tungsten inserts. The relationship between power consumption and the machining parameters is adequately modeled. This model is used to formulate a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on energy consumption is determined using analysis of variance. The validity of the developed empirical model is established using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry to minimize the power consumption of machine tools.
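A hedged sketch of the response-surface step follows: a second-order polynomial model is fit to synthetic power measurements over three machining parameters, then scanned for the minimum-power setting. All numbers are stand-ins, not the paper's data.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Synthetic stand-in data: columns are cutting speed, feed, depth of cut;
# the paper fits power consumption measured on a lathe.
rng = np.random.default_rng(0)
X = rng.uniform([60, 0.05, 0.5], [180, 0.25, 2.0], size=(30, 3))
power = 50 + 0.8*X[:, 0] + 900*X[:, 1] + 40*X[:, 2] + rng.normal(0, 5, 30)

# Second-order response surface, as in response surface methodology.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, power)

# Crude grid search for the minimum predicted power over parameter ranges.
grid = np.stack(np.meshgrid(
    np.linspace(60, 180, 20),
    np.linspace(0.05, 0.25, 20),
    np.linspace(0.5, 2.0, 20),
), axis=-1).reshape(-1, 3)
best = grid[np.argmin(model.predict(grid))]
print("minimum-power parameters:", best)
```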
Discovering governing equations from data by sparse identification of nonlinear dynamics
NASA Astrophysics Data System (ADS)
Brunton, Steven
The ability to discover physical laws and governing equations from data is one of humankind's greatest intellectual achievements. A quantitative understanding of dynamic constraints and balances in nature has facilitated rapid development of knowledge and enabled advanced technology, including aircraft, combustion engines, satellites, and electrical power. There are many more critical data-driven problems, such as understanding cognition from neural recordings, inferring patterns in climate, determining stability of financial markets, predicting and suppressing the spread of disease, and controlling turbulence for greener transportation and energy. With abundant data and elusive laws, data-driven discovery of dynamics will continue to play an increasingly important role in these efforts. This work develops a general framework to discover the governing equations underlying a dynamical system simply from data measurements, leveraging advances in sparsity-promoting techniques and machine learning. The resulting models are parsimonious, balancing model complexity with descriptive ability while avoiding overfitting. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions. This perspective, combining dynamical systems with machine learning and sparse sensing, is explored with the overarching goal of real-time closed-loop feedback control of complex systems. This is joint work with Joshua L. Proctor and J. Nathan Kutz. Video Abstract: https://www.youtube.com/watch?v=gSCa78TIldg
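The core of the framework, sequential thresholded least squares over a library of candidate functions, can be sketched in a few lines of Python; the library, threshold, and toy linear system below are invented for illustration.

```python
import numpy as np

# Sketch of sparse identification of nonlinear dynamics (SINDy):
# sequential thresholded least squares on a candidate function library.
def sindy(X, dXdt, threshold=0.1, n_iter=10):
    # Candidate library: [1, x, y, x^2, xy, y^2] for a 2-state system.
    x, y = X[:, 0], X[:, 1]
    Theta = np.column_stack([np.ones_like(x), x, y, x*x, x*y, y*y])
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        Xi[np.abs(Xi) < threshold] = 0.0       # enforce sparsity
        for k in range(dXdt.shape[1]):         # refit the active terms
            big = np.abs(Xi[:, k]) >= threshold
            if big.any():
                Xi[big, k] = np.linalg.lstsq(
                    Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi

# Toy data from the linear system dx/dt = -2x, dy/dt = y.
t = np.linspace(0, 2, 200)
X = np.column_stack([3*np.exp(-2*t), 0.5*np.exp(t)])
dXdt = np.column_stack([-2*X[:, 0], X[:, 1]])
print(sindy(X, dXdt).round(2))  # recovers the two nonzero coefficients
```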
CONFOCAL MICROSCOPY SYSTEM PERFORMANCE: LASER POWER MEASUREMENTS
The reliability of the confocal laser-scanning microscope (CLSM) for obtaining intensity measurements and quantifying fluorescence data depends on using a correctly aligned machine that provides stable laser power. The laser power test appears to be one ...
Woody species susceptibility to forest herbicides applied by ground machines
James H. Miller; M. Boyd Edwards
1996-01-01
Abstract. This study used a simple approach of post-treatment observations to collect data on herbicide effectiveness for common southeastern hardwoods and shrub species, and for loblolly pine. Both site preparation and release herbicides labeled for loblolly pine were examined.
Lee, JuneHyuck; Noh, Sang Do; Kim, Hyun-Jung; Kang, Yong-Shin
2018-05-04
The prediction of internal defects of metal casting immediately after the casting process saves unnecessary time and money by reducing the amount of input into the next stage, such as the machining process, and enables flexible scheduling. Cyber-physical production systems (CPPS) perfectly fulfill the aforementioned requirements. This study deals with the implementation of CPPS in a real factory to predict the quality of metal casting and to control operations. First, a CPPS architecture framework for quality prediction and operation control in metal-casting production was designed. The framework describes collaboration among the internet of things (IoT), artificial intelligence, simulations, manufacturing execution systems, and advanced planning and scheduling systems. Subsequently, the implementation of the CPPS in actual plants is described. Temperature is a major factor affecting casting quality, and thus temperature sensors and IoT communication devices were attached to the casting machines. The well-known NoSQL database HBase and the high-speed processing/analysis tool Spark are used for the IoT repository and data pre-processing, respectively. Machine learning algorithms such as decision tree, random forest, artificial neural network, and support vector machine were used for quality prediction and compared with R software. Finally, the operation of the entire system is demonstrated through a CPPS dashboard. In an era in which most CPPS-related studies are conducted on high-level abstract models, this study describes more specific architectural frameworks, use cases, usable software, and analytical methodologies. In addition, this study verifies the usefulness of CPPS by estimating quantitative effects. This is expected to contribute to the proliferation of CPPS in the industry.
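A minimal sketch of the quality-prediction comparison, using scikit-learn stand-ins for the four algorithm families named in the abstract and synthetic temperature features (the actual pipeline ran on HBase/Spark data and was compared with R):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: temperature readings per casting cycle -> defect flag.
rng = np.random.default_rng(1)
X = rng.normal(700, 15, size=(300, 4))          # e.g. four sensor channels
y = (X.mean(axis=1) > 705).astype(int)          # toy defect rule

for name, clf in [("tree", DecisionTreeClassifier()),
                  ("forest", RandomForestClassifier()),
                  ("mlp", MLPClassifier(max_iter=2000)),
                  ("svm", SVC())]:
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {score:.2f}")
```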
Yu, Wei; Clyne, Melinda; Dolan, Siobhan M; Yesupriya, Ajay; Wulf, Anja; Liu, Tiebin; Khoury, Muin J; Gwinn, Marta
2008-04-22
Synthesis of data from published human genetic association studies is a critical step in the translation of human genome discoveries into health applications. Although genetic association studies account for a substantial proportion of the abstracts in PubMed, identifying them with standard queries is not always accurate or efficient. Further automating the literature-screening process can reduce the burden of a labor-intensive and time-consuming traditional literature search. The Support Vector Machine (SVM), a well-established machine learning technique, has been successful in classifying text, including biomedical literature. The GAPscreener, a free SVM-based software tool, can be used to assist in screening PubMed abstracts for human genetic association studies. The data source for this research was the HuGE Navigator, formerly known as the HuGE Pub Lit database. Weighted SVM feature selection based on a keyword list obtained by the two-way z score method demonstrated the best screening performance, achieving 97.5% recall, 98.3% specificity and 31.9% precision in performance testing. Compared with the traditional screening process based on a complex PubMed query, the SVM tool reduced by about 90% the number of abstracts requiring individual review by the database curator. The tool also ascertained 47 articles that were missed by the traditional literature screening process during the 4-week test period. We examined the literature on genetic associations with preterm birth as an example. Compared with the traditional, manual process, the GAPscreener both reduced effort and improved accuracy. GAPscreener is the first free SVM-based application available for screening the human genetic association literature in PubMed with high recall and specificity. The user-friendly graphical user interface makes this a practical, stand-alone application. The software can be downloaded at no charge.
Street, Amy E.; Rosellini, Anthony J.; Ursano, Robert J.; Heeringa, Steven G.; Hill, Eric D.; Monahan, John; Naifeh, James A.; Petukhova, Maria V.; Reis, Ben Y.; Sampson, Nancy A.; Bliese, Paul D.; Stein, Murray B.; Zaslavsky, Alan M.; Kessler, Ronald C.
2016-01-01
Sexual violence victimization is a significant problem among female U.S. military personnel. Preventive interventions for high-risk individuals might reduce prevalence, but would require accurate targeting. We attempted to develop a targeting model for female Regular U.S. Army soldiers based on theoretically-guided predictors abstracted from administrative data records. As administrative reports of sexual assault victimization are known to be incomplete, parallel machine learning models were developed to predict administratively-recorded (in the population) and self-reported (in a representative survey) victimization. Capture-recapture methods were used to combine predictions across models. Key predictors included low status, crime involvement, and treated mental disorders. The area under the Receiver Operating Characteristic curve was 0.83-0.88. Between 33.7% and 63.2% of victimizations occurred among soldiers in the highest-risk ventile (5%). This high concentration of risk suggests that the models could be useful in targeting preventive interventions, although a final determination would require careful weighing of intervention costs, effectiveness, and competing risks. PMID:28154788
Utilization and Monetization of Healthcare Data in Developing Countries
Bram, Joshua T.; Warwick-Clark, Boyd; Obeysekare, Eric; Mehta, Khanjan
2015-01-01
Abstract In developing countries with fledgling healthcare systems, the efficient deployment of scarce resources is paramount. Comprehensive community health data and machine learning techniques can optimize the allocation of resources to areas, epidemics, or populations most in need of medical aid or services. However, reliable data collection in low-resource settings is challenging due to a wide range of contextual, business-related, communication, and technological factors. Community health workers (CHWs) are trusted community members who deliver basic health education and services to their friends and neighbors. While an increasing number of programs leverage CHWs for last mile data collection, a fundamental challenge to such programs is the lack of tangible incentives for the CHWs. This article describes potential applications of health data in developing countries and reviews the challenges to reliable data collection. Four practical CHW-centric business models that provide incentive and accountability structures to facilitate data collection are presented. Creating and strengthening the data collection infrastructure is a prerequisite for big data scientists, machine learning experts, and public health administrators to ultimately elevate and transform healthcare systems in resource-poor settings. PMID:26487984
NASA Astrophysics Data System (ADS)
Gengenbach, Ulrich K.; Hofmann, Andreas; Engelhardt, Friedhelm; Scharnowell, Rudolf; Koehler, Bernd
2001-10-01
A large number of microgrippers have been developed in industry and academia. Although the importance of hybrid integration techniques, and hence the demand for assembly tools, grows continuously, a large part of these developments has not yet been used in industrial production. The first grippers developed for microassembly were basically vacuum grippers and downscaled tweezers. Due to increasingly complex assembly tasks, more and more functionality such as sensing, or additional functions such as adhesive dispensing, has been integrated into gripper systems over recent years. Most of these gripper systems are incompatible, since there exists no standard interface to the assembly machine and no standard for the internal modules and interfaces. Thus these tools are not easily interchangeable between assembly machines and not easily adaptable to assembly tasks. To alleviate this situation, a construction kit for modular microgrippers is being developed. It is composed of modules with well-defined interfaces that can be combined to build task-specific grippers. An abstract model of a microgripper is proposed as a tool to structure the development of the construction kit. The modular concept is illustrated with prototypes.
NASA Astrophysics Data System (ADS)
LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey
2015-05-01
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
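The backpropagation step described here can be shown in a dozen lines of NumPy; this toy two-layer network trained on XOR is an illustration only, not one of the paper's models.

```python
import numpy as np

# Toy two-layer network trained by backpropagation on XOR, illustrating
# how the gradient signal adjusts each layer's parameters from the error
# of the layer above.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))
for _ in range(5000):
    h = np.tanh(X @ W1)                       # hidden representation
    p = 1 / (1 + np.exp(-(h @ W2)))           # output probability
    grad_p = p - y                            # d(loss)/d(logit), cross-entropy
    grad_h = (grad_p @ W2.T) * (1 - h**2)     # backpropagate through tanh
    W2 -= 0.1 * h.T @ grad_p                  # update output layer
    W1 -= 0.1 * X.T @ grad_h                  # update hidden layer
print(p.round(2).ravel())                     # approximately [0, 1, 1, 0]
```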
Wind energy utilization: A bibliography
NASA Technical Reports Server (NTRS)
1975-01-01
Bibliography cites documents published to and including 1974 with abstracts and references, and is indexed by topic, author, organization, title, and keywords. Topics include: Wind Energy Potential and Economic Feasibility, Utilization, Wind Power Plants and Generators, Wind Machines, Wind Data and Properties, Energy Storage, and related topics.
Vorberg, Susann
2013-01-01
Abstract Biodegradability describes the capacity of substances to be mineralized by free‐living bacteria. It is a crucial property in estimating a compound’s long‐term impact on the environment. The ability to reliably predict biodegradability would reduce the need for laborious experimental testing. However, this endpoint is difficult to model due to unavailability or inconsistency of experimental data. Our approach makes use of the Online Chemical Modeling Environment (OCHEM) and its rich supply of machine learning methods and descriptor sets to build classification models for ready biodegradability. These models were analyzed to determine the relationship between characteristic structural properties and biodegradation activity. The distinguishing feature of the developed models is their ability to estimate the accuracy of prediction for each individual compound. The models developed using seven individual descriptor sets were combined in a consensus model, which provided the highest accuracy. The identified overrepresented structural fragments can be used by chemists to improve the biodegradability of new chemical compounds. The consensus model, the datasets used, and the calculated structural fragments are publicly available at http://ochem.eu/article/31660. PMID:27485201
Modeling Stochastic Kinetics of Molecular Machines at Multiple Levels: From Molecules to Modules
Chowdhury, Debashish
2013-01-01
A molecular machine is either a single macromolecule or a macromolecular complex. In spite of the striking superficial similarities between these natural nanomachines and their man-made macroscopic counterparts, there are crucial differences. Molecular machines in a living cell operate stochastically in an isothermal environment far from thermodynamic equilibrium. In this mini-review we present a catalog of the molecular machines and an inventory of the essential toolbox for theoretically modeling these machines. The tool kits include 1), nonequilibrium statistical-physics techniques for modeling machines and machine-driven processes; and 2), statistical-inference methods for reverse engineering a functional machine from the empirical data. The cell is often likened to a microfactory in which the machineries are organized in modular fashion; each module consists of strongly coupled multiple machines, but different modules interact weakly with each other. This microfactory has its own automated supply chain and delivery system. Buoyed by the success achieved in modeling individual molecular machines, we advocate integration of these models in the near future to develop models of functional modules. A system-level description of the cell from the perspective of molecular machinery (the mechanome) is likely to emerge from further integrations that we envisage here. PMID:23746505
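As one concrete example of the nonequilibrium stochastic modeling toolbox, the sketch below runs a minimal Gillespie-type simulation of a molecular motor stepping forward and backward; the rates are invented, not taken from the review.

```python
import numpy as np

# Minimal Gillespie stochastic simulation of a molecular motor stepping
# forward or backward; k_fwd and k_bwd are toy rates.
def gillespie(steps=1000, k_fwd=5.0, k_bwd=1.0, seed=0):
    rng = np.random.default_rng(seed)
    t, x = 0.0, 0           # time and motor position (in steps)
    traj = [(t, x)]
    for _ in range(steps):
        total = k_fwd + k_bwd
        t += rng.exponential(1.0 / total)          # stochastic waiting time
        x += 1 if rng.random() < k_fwd / total else -1
        traj.append((t, x))
    return traj

traj = gillespie()
print("mean velocity:", traj[-1][1] / traj[-1][0])
```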
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Jing; Li, Yuan-Yuan; Shanghai Center for Bioinformation Technology, Shanghai 200235
2012-03-02
Highlights: Proper dataset partition can improve the prediction of deleterious nsSNPs. Partition according to the original residue type at the nsSNP site is a good criterion. A similar strategy is expected to be promising in other machine learning problems. -- Abstract: Many non-synonymous SNPs (nsSNPs) are associated with diseases, and numerous machine learning methods have been applied to train classifiers for sorting disease-associated nsSNPs from neutral ones. The continuously accumulating nsSNP data allows us to further explore better prediction approaches. In this work, we partitioned the training data into 20 subsets according to either the original or the substituted amino acid type at the nsSNP site. Using support vector machine (SVM), training classification models on each subset resulted in an overall accuracy of 76.3% or 74.9%, depending on which of the two partition criteria was used, while training on the whole dataset obtained an accuracy of only 72.6%. Moreover, when the dataset was instead divided randomly into 20 subsets, the corresponding accuracy was only 73.2%. Our results demonstrate that partitioning the whole training dataset into subsets properly, i.e., according to the residue type at the nsSNP site, significantly improves the performance of the trained classifiers, which should be valuable in developing better tools for predicting the disease-association of nsSNPs.
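A sketch of the partitioning strategy, with hypothetical features and residue labels (the study's actual descriptors and dataset are not reproduced here):

```python
from collections import defaultdict
from sklearn.svm import SVC

# Sketch: partition nsSNP training records by original residue type and
# train one SVM per subset; data and features here are hypothetical.
def train_partitioned(records):
    # records: list of (original_residue, feature_vector, label)
    subsets = defaultdict(list)
    for res, feats, label in records:
        subsets[res].append((feats, label))
    models = {}
    for res, rows in subsets.items():
        X = [f for f, _ in rows]
        y = [l for _, l in rows]
        if len(set(y)) > 1:            # need both classes to fit
            models[res] = SVC().fit(X, y)
    return models

def predict(models, res, feats):
    # Route each nsSNP to the classifier for its original residue type.
    return models[res].predict([feats])[0]

records = [("A", [0.2, 1.0], 1), ("A", [0.8, 0.1], 0),
           ("R", [0.5, 0.5], 1), ("R", [0.1, 0.9], 0)]
models = train_partitioned(records)
print(predict(models, "A", [0.3, 0.9]))
```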
Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott
2011-07-28
Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements.
Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)
Families of Graph Algorithms: SSSP Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanewala Appuhamilage, Thejaka Amila Jay; Zalewski, Marcin J.; Lumsdaine, Andrew
2017-08-28
Single-Source Shortest Paths (SSSP) is a well-studied graph problem. Examples of SSSP algorithms include the original Dijkstra's algorithm and the parallel Δ-stepping and KLA-SSSP algorithms. In this paper, we use a novel Abstract Graph Machine (AGM) model to show that all these algorithms share a common logic and differ from one another in the order in which they perform work. We use the AGM model to thoroughly analyze the family of algorithms that arises from the common logic. We start with the basic algorithm without any ordering (Chaotic), and then we derive the existing and new algorithms by methodically exploring semantic and spatial orderings of work. Our experimental results show that the newly derived algorithms perform better than the existing distributed-memory parallel algorithms, especially at higher scales.
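For reference, the fully ordered member of this family is plain Dijkstra; a minimal sketch (toy graph, priority-queue ordering by tentative distance) follows. Relaxing or removing that ordering yields the Chaotic and Δ-stepping style variants the paper analyzes.

```python
import heapq

# Minimal Dijkstra: work items are ordered strictly by tentative distance.
def dijkstra(graph, source):
    # graph: {node: [(neighbor, weight), ...]}
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                     # stale queue entry
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```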
Heymann, Michael; Degani, Asaf
2007-04-01
We present a formal approach and methodology for the analysis and generation of user interfaces, with special emphasis on human-automation interaction. A conceptual approach for modeling, analyzing, and verifying the information content of user interfaces is discussed. The proposed methodology is based on two criteria: First, the interface must be correct--that is, given the interface indications and all related information (user manuals, training material, etc.), the user must be able to successfully perform the specified tasks. Second, the interface and related information must be succinct--that is, the amount of information (mode indications, mode buttons, parameter settings, etc.) presented to the user must be reduced (abstracted) to the minimum necessary. A step-by-step procedure for generating the information content of the interface that is both correct and succinct is presented and then explained and illustrated via two examples. Every user interface is an abstract description of the underlying system. The correspondence between the abstracted information presented to the user and the underlying behavior of a given machine can be analyzed and addressed formally. The procedure for generating the information content of user interfaces can be automated, and a software tool for its implementation has been developed. Potential application areas include adaptive interface systems and customized/personalized interfaces.
ERGONOMICS ABSTRACTS 48983-49619.
ERIC Educational Resources Information Center
Ministry of Technology, London (England). Warren Spring Lab.
THE LITERATURE OF ERGONOMICS, OR BIOTECHNOLOGY, IS CLASSIFIED INTO 15 AREAS--METHODS, SYSTEMS OF MEN AND MACHINES, VISUAL AND AUDITORY AND OTHER INPUTS AND PROCESSES, INPUT CHANNELS, BODY MEASUREMENTS, DESIGN OF CONTROLS AND INTEGRATION WITH DISPLAYS, LAYOUT OF PANELS AND CONSOLES, DESIGN OF WORK SPACE, CLOTHING AND PERSONAL EQUIPMENT, SPECIAL…
Towards a Better Distributed Framework for Learning Big Data
2017-06-14
This work aimed at solving issues in distributed machine learning. The PI’s team proposed ... communication load. Finally, the team proposed the parallel least-squares policy iteration (parallel LSPI) to parallelize reinforcement policy learning.
Ji, Xiaonan; Yen, Po-Yin
2015-08-31
Systematic reviews and their implementation in practice provide high-quality evidence for clinical practice but are both time and labor intensive due to the large number of articles. Automatic text classification has proven to be instrumental in identifying relevant articles for systematic reviews. Existing approaches use machine learning model training to generate classification algorithms for the article screening process but have limitations. We applied a network approach to assist in the article screening process for systematic reviews using predetermined article relationships (similarity). The article similarity metric is calculated using the MEDLINE elements title (TI), abstract (AB), medical subject heading (MH), author (AU), and publication type (PT). We used an article network to illustrate the concept of article relationships. Under this concept, each article is modeled as a node in the network and the relationship between two articles as an edge connecting them. The purpose of our study was to use article relationships to facilitate an interactive article recommendation process. We used 15 completed systematic reviews produced by the Drug Effectiveness Review Project and demonstrated the use of article networks to assist article recommendation. We evaluated the predictive performance of the MEDLINE elements and compared our approach with existing machine learning model training approaches. Performance was measured by work saved over sampling at 95% recall (WSS95) and the F-measure (F1). We also used repeated analysis of variance and Hommel's multiple comparison adjustment to provide statistical evidence. We found that although there is no significant difference across elements (except AU), TI and AB have better predictive capability in general. Collaborative elements bring performance improvement in both F1 and WSS95. With our approach, a simple combination of TI+AB+PT achieved a WSS95 performance of 37%, which is competitive with traditional machine learning model training approaches (23%-41% WSS95). We demonstrated a new approach to assist in labor-intensive systematic reviews. The predictive ability of different elements (both single and composite) was explored. Without using model training approaches, we established a generalizable method that achieves competitive performance.
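A minimal sketch of a similarity edge computed from concatenated MEDLINE elements (here TI+AB+PT, one of the combinations evaluated), using TF-IDF and cosine similarity as a plausible stand-in for the study's metric:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy records with TI, AB, PT fields; real input would come from MEDLINE.
articles = [
    {"TI": "statins for cholesterol", "AB": "randomized trial of statins",
     "PT": "clinical trial"},
    {"TI": "statin efficacy review", "AB": "meta-analysis of statin trials",
     "PT": "review"},
    {"TI": "mouse gene knockout", "AB": "development of mouse embryos",
     "PT": "journal article"},
]
docs = [" ".join([a["TI"], a["AB"], a["PT"]]) for a in articles]
sim = cosine_similarity(TfidfVectorizer().fit_transform(docs))
print(sim.round(2))  # a high weight links the two statin articles
```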
Modeling stochastic kinetics of molecular machines at multiple levels: from molecules to modules.
Chowdhury, Debashish
2013-06-04
A molecular machine is either a single macromolecule or a macromolecular complex. In spite of the striking superficial similarities between these natural nanomachines and their man-made macroscopic counterparts, there are crucial differences. Molecular machines in a living cell operate stochastically in an isothermal environment far from thermodynamic equilibrium. In this mini-review we present a catalog of the molecular machines and an inventory of the essential toolbox for theoretically modeling these machines. The tool kits include 1), nonequilibrium statistical-physics techniques for modeling machines and machine-driven processes; and 2), statistical-inference methods for reverse engineering a functional machine from the empirical data. The cell is often likened to a microfactory in which the machineries are organized in modular fashion; each module consists of strongly coupled multiple machines, but different modules interact weakly with each other. This microfactory has its own automated supply chain and delivery system. Buoyed by the success achieved in modeling individual molecular machines, we advocate integration of these models in the near future to develop models of functional modules. A system-level description of the cell from the perspective of molecular machinery (the mechanome) is likely to emerge from further integrations that we envisage here. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.
Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen
2017-06-01
The article proposes a set of metrics for evaluating patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, according to whether the evaluation employs the raw measurements of patient-performed motions or is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, Fugl-Meyer Assessment, and similar. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessing the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity of human-performed therapy assessment, increase adherence to prescribed therapy plans, and reduce healthcare costs.
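Two of the model-less metrics can be sketched directly; the synthetic "exercise trajectory" below is a stand-in for Kinect motion data, and the histogram-based KL estimate is one simple way to apply the divergence to raw sequences:

```python
import numpy as np
from scipy.stats import entropy

# Model-less metrics between a patient motion sequence and a reference:
# root-mean-square distance on raw samples and a KL divergence on
# histogrammed values.
def rms_distance(patient, reference):
    return np.sqrt(np.mean((patient - reference) ** 2))

def kl_divergence(patient, reference, bins=20):
    lo = min(patient.min(), reference.min())
    hi = max(patient.max(), reference.max())
    p, _ = np.histogram(patient, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(reference, bins=bins, range=(lo, hi), density=True)
    return entropy(p + 1e-9, q + 1e-9)  # smooth to avoid zero bins

t = np.linspace(0, 1, 100)
reference = np.sin(2 * np.pi * t)            # ideal exercise trajectory
patient = reference + np.random.default_rng(0).normal(0, 0.1, t.size)
print(rms_distance(patient, reference), kl_divergence(patient, reference))
```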
On the Conditioning of Machine-Learning-Assisted Turbulence Modeling
NASA Astrophysics Data System (ADS)
Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng
2017-11-01
Recently, several researchers have demonstrated that machine learning techniques can be used to improve the RANS-modeled Reynolds stress by training on available databases of high-fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework for modeling Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated. The satisfactory prediction of the mean velocity field for both flows demonstrates the predictive capability of the proposed framework for machine-learning-assisted turbulence modeling. By improving the prediction of the mean flow field, the proposed stability-oriented framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the predictive capability demanded of turbulence models in real applications.
Huang, Jen-Ching; Weng, Yung-Jin
2014-01-01
This study focused on the nanomachining properties and cutting model of single-crystal sapphire during nanomachining. A coated diamond probe is used as the tool, and atomic force microscopy (AFM) serves as the experimental platform for nanomachining. To understand the effect of normal force on single-crystal sapphire machining, this study tested nano-line machining and nano-rectangular pattern machining at different normal forces. In the nano-line machining test, the experimental results showed that as the normal force increased, the groove depth produced by nano-line machining also increased, and the trend is logarithmic. In the nano-rectangular pattern machining test, it was found that as the normal force increases, the groove depth also increases, together with the accumulation of small chips. This paper combined blowing with an air blower, cleaning with an ultrasonic cleaning machine, and scanning the surface topology with a contact-mode probe after nanomachining, and proposed a "criterion of the nanomachining cutting model" to determine whether the cutting mode of single-crystal sapphire in nanomachining is a ductile-regime or a brittle-regime cutting model. After analysis, when the single-crystal sapphire substrate is machined with a small normal force during nano-line machining, its cutting mode is the ductile-regime cutting model. In nano-rectangular pattern machining, due to the overlap of machined zones, the cutting mode converts into a brittle-regime cutting model. © 2014 Wiley Periodicals, Inc.
Cang, Zixuan; Wei, Guo-Wei
2018-02-01
Protein-ligand binding is a fundamental biological process that is paramount to many other biological processes, such as signal transduction, metabolic pathways, enzyme construction, cell secretion, and gene expression. Accurate prediction of protein-ligand binding affinities is vital to rational drug design and to the understanding of protein-ligand binding and binding-induced function. Existing binding affinity prediction methods are inundated with geometric detail and involve excessively high dimensions, which undermines their predictive power for massive binding data. Topology provides the ultimate level of abstraction and thus incurs too great a reduction in geometric information. Persistent homology embeds geometric information into topological invariants and bridges the gap between complex geometry and abstract topology. However, it oversimplifies biological information. This work introduces element-specific persistent homology (ESPH), or multicomponent persistent homology, to retain crucial biological information during topological simplification. The combination of ESPH and machine learning gives rise to a powerful paradigm for macromolecular analysis. Tests on two large data sets indicate that the proposed topology-based machine-learning paradigm outperforms other existing methods in protein-ligand binding affinity prediction. ESPH reveals protein-ligand binding mechanisms that cannot be attained with other conventional techniques. The present approach reveals that protein-ligand hydrophobic interactions extend to 40 Å away from the binding site, which has significant ramifications for drug and protein design. Copyright © 2017 John Wiley & Sons, Ltd.
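A heavily simplified sketch of the element-specific idea, assuming the third-party ripser package for persistent homology: persistence is computed separately on element-filtered atom sets (coordinates invented here) and binned into fixed-length features for a learner.

```python
import numpy as np
from ripser import ripser  # third-party package; pip install ripser

# Toy element -> Nx3 atomic coordinates; a real pipeline would filter the
# atoms of a protein-ligand complex by element (or element pairs).
atoms = {
    "C": np.random.default_rng(0).normal(size=(20, 3)),
    "N": np.random.default_rng(1).normal(size=(10, 3)),
}

def esph_features(coords, bins=np.linspace(0, 2, 11)):
    # H0 persistence diagram of the element-filtered point cloud.
    dgm0 = ripser(coords, maxdim=0)["dgms"][0]
    deaths = dgm0[np.isfinite(dgm0[:, 1]), 1]   # finite H0 death times
    hist, _ = np.histogram(deaths, bins=bins)   # binned persistence
    return hist

# Concatenated per-element features, ready for a machine learning model.
features = np.concatenate([esph_features(c) for c in atoms.values()])
print(features)
```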
High fidelity 3-dimensional models of beam-electron cloud interactions in circular accelerators
NASA Astrophysics Data System (ADS)
Feiz Zarrin Ghalam, Ali
Electron cloud is a low-density electron profile created inside the vacuum chamber of circular machines with positively charged beams. The electron cloud limits the peak current of the beam and degrades beam quality through luminosity degradation, emittance growth, and head-to-tail or bunch-to-bunch instability. The adverse effects of the electron cloud on long-term beam dynamics become more and more important as beams go to higher and higher energies. This problem has become a major concern in the design of many future circular machines, such as the Large Hadron Collider (LHC) under construction at the European Center for Nuclear Research (CERN). Due to the importance of the problem, several simulation models have been developed to model the long-term beam-electron cloud interaction. These models are based on the "single kick approximation", where the electron cloud is assumed to be concentrated in one thin slab around the ring. While this model is efficient in terms of computational cost, it does not reflect the real physical situation, as the forces from the electron cloud on the beam are nonlinear, contrary to the model's assumption. To address the limitations of existing codes, in this thesis a new model is developed to continuously model the beam-electron cloud interaction. The code is derived from a 3-D parallel Particle-In-Cell (PIC) model (QuickPIC) originally used for plasma wakefield acceleration research. To adapt the original model to the circular machine environment, betatron and synchrotron equations of motion have been added to the code, and the effects of chromaticity and lattice structure have been included. QuickPIC is then benchmarked against one of the codes based on the single kick approximation (HEAD-TAIL) for the transverse spot size of the beam in the CERN LHC. The growth predicted by QuickPIC is less than that predicted by HEAD-TAIL. The code is then used to investigate the effect of electron cloud image charges on long-term beam dynamics, particularly on the transverse tune shift of the beam in the CERN Super Proton Synchrotron (SPS) ring. The force from the electron cloud image charges on the beam cancels the force due to the cloud compression formed on the beam axis, and the tune shift is therefore mainly due to the uniform electron cloud density. (Abstract shortened by UMI.)
Cylindrical Vector Beams for Rapid Polarization-Dependent Measurements in Atomic Systems
2011-12-05
... optical trapping [11], atom guiding [12], laser machining [13], charged particle acceleration [14,15], and polarimetry [16]. Yet despite numerous ...
Ontology-Based Learner Categorization through Case Based Reasoning and Fuzzy Logic
ERIC Educational Resources Information Center
Sarwar, Sohail; García-Castro, Raul; Qayyum, Zia Ul; Safyan, Muhammad; Munir, Rana Faisal
2017-01-01
Learner categorization has a pivotal role in making e-learning systems a success. However, contemporary techniques exploit learner characteristics at an abstract level of granularity and cannot categorize learners effectively. In this paper, an architecture of an e-learning framework is presented that exploits machine learning based…
Destruction of Knowledge: A Study of Journal Mutilation at a Large University Library.
ERIC Educational Resources Information Center
Constantinou, Constantia
1995-01-01
A study of 1264 incidents of journal mutilation at New York University indicates no relationship between the availability of indexing and abstracting services on CD-ROM databases and mutilation. Recommends posting warnings; raising awareness; providing adequate photocopiers, change, and vendor card machines; announcing closing time; encouraging…
Types for Correct Concurrent API Usage
2010-12-01
unique, full. Here g is the state guarantee and A is the current abstract state of the object referenced by r. The ⊗ symbol is called the “tensor” ... to discover resources on a heterogeneous network. Votebox is an open-source implementation of software for voting machines. The Blocking queue method ...
MLBCD: a machine learning tool for big clinical data.
Luo, Gang
2015-01-01
Predictive modeling is fundamental for extracting value from large clinical data sets, or "big clinical data," advancing clinical research, and improving healthcare. Machine learning is a powerful approach to predictive modeling. Two factors make machine learning challenging for healthcare researchers. First, before training a machine learning model, the values of one or more model parameters called hyper-parameters must typically be specified. Due to their inexperience with machine learning, it is hard for healthcare researchers to choose an appropriate algorithm and hyper-parameter values. Second, many clinical data are stored in a special format. These data must be iteratively transformed into the relational table format before conducting predictive modeling. This transformation is time-consuming and requires computing expertise. This paper presents our vision for and design of MLBCD (Machine Learning for Big Clinical Data), a new software system aiming to address these challenges and facilitate building machine learning predictive models using big clinical data. The paper describes MLBCD's design in detail. By making machine learning accessible to healthcare researchers, MLBCD will open the use of big clinical data and increase the ability to foster biomedical discovery and improve care.
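One of the two challenges, choosing hyper-parameter values before training, is commonly addressed with an automated search; a minimal scikit-learn sketch (not MLBCD itself) follows:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

# Stand-in data; MLBCD targets clinical datasets after transformation to
# the relational table format.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Cross-validated grid search over candidate hyper-parameter values.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 200], "max_depth": [None, 5, 10]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```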
NASA Astrophysics Data System (ADS)
Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.
2013-12-01
This presentation will focus on two barriers to progress in the hydrological modeling community, and on research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists, caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of the computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain-specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers are dependent on an entire distribution, possibly depending on multiple compilers and special instructions tailored to the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open source scientific software distribution.
Swan, Anna Louise; Mobasheri, Ali; Allaway, David; Liddell, Susan
2013-01-01
Abstract Mass spectrometry is an analytical technique for the characterization of biological samples and is increasingly used in omics studies because of its targeted, nontargeted, and high throughput abilities. However, due to the large datasets generated, it requires informatics approaches such as machine learning techniques to analyze and interpret relevant data. Machine learning can be applied to MS-derived proteomics data in two ways. First, directly to mass spectral peaks and second, to proteins identified by sequence database searching, although relative protein quantification is required for the latter. Machine learning has been applied to mass spectrometry data from different biological disciplines, particularly for various cancers. The aims of such investigations have been to identify biomarkers and to aid in diagnosis, prognosis, and treatment of specific diseases. This review describes how machine learning has been applied to proteomics tandem mass spectrometry data. This includes how it can be used to identify proteins suitable for use as biomarkers of disease and for classification of samples into disease or treatment groups, which may be applicable for diagnostics. It also includes the challenges faced by such investigations, such as prediction of proteins present, protein quantification, planning for the use of machine learning, and small sample sizes. PMID:24116388
Investigation of approximate models of experimental temperature characteristics of machines
NASA Astrophysics Data System (ADS)
Parfenov, I. V.; Polyakov, A. N.
2018-05-01
This work is devoted to the investigation of various approaches to approximating experimental data and to creating simulation mathematical models of thermal processes in machines, with the aim of shortening their field tests and reducing the thermal error of machining. The main research methods used by the authors in this work are: full-scale thermal testing of machines; approximation of the experimental temperature characteristics of machine tools by polynomial models under various approaches; and analysis and evaluation of the modelling results (model quality) for the temperature characteristics of machines and their time derivatives up to the third order. As a result of the performed research, rational methods, types, parameters and complexity of simulation mathematical models of thermal processes in machine tools are proposed.
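A sketch of the polynomial-approximation step: a polynomial model is fit to a synthetic heating curve standing in for an experimental temperature characteristic, and its time derivatives up to third order are evaluated, as the quality analysis described above requires.

```python
import numpy as np

# Synthetic stand-in for a machine tool heating curve (temperature vs time).
t = np.linspace(0, 240, 49)                      # minutes
temp = 20 + 15 * (1 - np.exp(-t / 60))           # toy heating curve
temp += np.random.default_rng(0).normal(0, 0.1, t.size)

coeffs = np.polyfit(t, temp, deg=4)              # polynomial model
model = np.poly1d(coeffs)
d1, d2, d3 = model.deriv(1), model.deriv(2), model.deriv(3)
print(model(120), d1(120), d2(120), d3(120))     # value and derivatives at t=120
```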
NASA Astrophysics Data System (ADS)
Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna
2018-03-01
The purpose of this study was to improve the accuracy of three-axis vertical CNC milling machines through a general approach based on mathematical modeling of machine tool geometric errors. The inaccuracy of CNC machines can be caused by geometric errors, which arise both during manufacturing and during assembly and are a decisive factor in building high-accuracy machines. The accuracy of a three-axis vertical milling machine can be improved by identifying the geometric errors and their positional parameters in the machine tool and arranging them in a mathematical model. The geometric error of the machine tool consists of twenty-one parameters: nine linear error parameters, nine angular error parameters and three squareness (perpendicularity) error parameters. The mathematical model relates the calculated alignment and angular errors to the components supporting the machine motion, namely the linear guideways and linear motion elements. The purpose of this modeling approach is the identification of geometric errors, which can serve as a reference during the design, assembly and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling the geometric errors of CNC machine tools illustrates the relationship between alignment error, position and angle on the linear guideways of three-axis vertical milling machines.
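For concreteness, a standard small-angle homogeneous-transform formulation of such an error model (a textbook form, not taken from this paper): each axis contributes three translational and three angular errors, and the three squareness errors complete the twenty-one parameters.

```latex
% Small-angle error transform for one axis: translational errors
% (\delta_x, \delta_y, \delta_z) and angular errors
% (\varepsilon_x, \varepsilon_y, \varepsilon_z).
\[
E =
\begin{pmatrix}
1 & -\varepsilon_z & \varepsilon_y & \delta_x \\
\varepsilon_z & 1 & -\varepsilon_x & \delta_y \\
-\varepsilon_y & \varepsilon_x & 1 & \delta_z \\
0 & 0 & 0 & 1
\end{pmatrix},
\qquad
E_{\text{total}} = E_x \, E_y \, E_z ,
\]
% with the three squareness errors entering as fixed small rotations
% between the nominal axis directions.
```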
An introduction and overview of machine learning in neurosurgical care.
Senders, Joeky T; Zaki, Mark M; Karhade, Aditya V; Chang, Bliss; Gormley, William B; Broekman, Marike L; Smith, Timothy R; Arnaout, Omar
2018-01-01
Machine learning (ML) is a branch of artificial intelligence that allows computers to learn from large complex datasets without being explicitly programmed. Although ML is already widely manifest in our daily lives in various forms, the considerable potential of ML has yet to find its way into mainstream medical research and day-to-day clinical care. The complex diagnostic and therapeutic modalities used in neurosurgery provide a vast amount of data that is ideally suited for ML models. This systematic review explores ML's potential to assist and improve neurosurgical care. A systematic literature search was performed in the PubMed and Embase databases to identify all potentially relevant studies up to January 1, 2017. All studies were included that evaluated ML models assisting neurosurgical treatment. Of the 6,402 citations identified, 221 studies were selected after subsequent title/abstract and full-text screening. In these studies, ML was used to assist surgical treatment of patients with epilepsy, brain tumors, spinal lesions, neurovascular pathology, Parkinson's disease, traumatic brain injury, and hydrocephalus. Across multiple paradigms, ML was found to be a valuable tool for presurgical planning, intraoperative guidance, neurophysiological monitoring, and neurosurgical outcome prediction. ML has started to find applications aimed at improving neurosurgical care by increasing the efficiency and precision of perioperative decision-making. A thorough validation of specific ML models is essential before implementation in clinical neurosurgical care. To bridge the gap between research and clinical care, practical and ethical issues should be considered parallel to the development of these techniques.
Niazi, Muhammad K. K.; Dhulekar, Nimit; Schmidt, Diane; Major, Samuel; Cooper, Rachel; Abeijon, Claudia; Gatti, Daniel M.; Kramnik, Igor; Yener, Bulent; Gurcan, Metin; Beamer, Gillian
2015-01-01
ABSTRACT Pulmonary tuberculosis (TB) is caused by Mycobacterium tuberculosis in susceptible humans. Here, we infected Diversity Outbred (DO) mice with ∼100 bacilli by aerosol to model responses in a highly heterogeneous population. Following infection, ‘supersusceptible’, ‘susceptible’ and ‘resistant’ phenotypes emerged. TB disease (reduced survival, weight loss, high bacterial load) correlated strongly with neutrophils, neutrophil chemokines, tumor necrosis factor (TNF) and cell death. By contrast, immune cytokines were weak correlates of disease. We next applied statistical and machine learning approaches to our dataset of cytokines and chemokines from lungs and blood. Six molecules from the lung: TNF, CXCL1, CXCL2, CXCL5, interferon-γ (IFN-γ), interleukin 12 (IL-12); and two molecules from blood – IL-2 and TNF – were identified as being important by applying both statistical and machine learning methods. Using molecular features to generate tree classifiers, CXCL1, CXCL2 and CXCL5 distinguished four classes (supersusceptible, susceptible, resistant and non-infected) from each other with approximately 77% accuracy using completely independent experimental data. By contrast, models based on other molecules were less accurate. Low to no IFN-γ, IL-12, IL-2 and IL-10 successfully discriminated non-infected mice from infected mice but failed to discriminate disease status amongst supersusceptible, susceptible and resistant M.-tuberculosis-infected DO mice. Additional analyses identified CXCL1 as a promising peripheral biomarker of disease and of CXCL1 production in the lungs. From these results, we conclude that: (1) DO mice respond variably to M. tuberculosis infection and will be useful to identify pathways involving necrosis and neutrophils; (2) data from DO mice is suited for machine learning methods to build, validate and test models with independent data based solely on molecular biomarkers; (3) low levels of immunological cytokines best indicate a lack of exposure to M. tuberculosis but cannot distinguish infection from disease. PMID:26204894
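An illustrative sketch of the tree-classifier step described above: a small decision tree over the three lung chemokines the study found discriminative (CXCL1, CXCL2, CXCL5), predicting the four named classes. The arrays are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.lognormal(mean=2.0, sigma=1.0, size=(200, 3))   # CXCL1, CXCL2, CXCL5 levels
y = rng.choice(["supersusceptible", "susceptible", "resistant", "non-infected"],
               size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
# the study reports ~77% accuracy on independent experimental data
print("held-out accuracy:", tree.score(X_te, y_te))
```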
Portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Calore, Enrico; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Fabio Schifano, Sebastiano; Silvi, Giorgio; Tripiccione, Raffaele
2018-03-01
Varying from multi-core CPU processors to many-core GPUs, the present scenario of HPC architectures is extremely heterogeneous. In this context, code portability is increasingly important for easy maintainability of applications; this is relevant in scientific computing where code changes are numerous and frequent. In this talk we present the design and optimization of a state-of-the-art production level LQCD Monte Carlo application, using the OpenACC directives model. OpenACC aims to abstract parallel programming to a descriptive level, where programmers do not need to specify the mapping of the code on the target machine. We describe the OpenACC implementation and show that the same code is able to target different architectures, including state-of-the-art CPUs and GPUs.
Machine learning for classifying tuberculosis drug-resistance from DNA sequencing data
Yang, Yang; Niehaus, Katherine E; Walker, Timothy M; Iqbal, Zamin; Walker, A Sarah; Wilson, Daniel J; Peto, Tim E A; Crook, Derrick W; Smith, E Grace; Zhu, Tingting; Clifton, David A
2018-01-01
Abstract Motivation Correct and rapid determination of Mycobacterium tuberculosis (MTB) resistance against available tuberculosis (TB) drugs is essential for the control and management of TB. Conventional molecular diagnostic tests assume that the presence of any well-studied single nucleotide polymorphism is sufficient to cause resistance, which yields low sensitivity for resistance classification. Summary Given the availability of DNA sequencing data from MTB, we developed machine learning models for a cohort of 1839 UK bacterial isolates to classify MTB resistance against eight anti-TB drugs (isoniazid, rifampicin, ethambutol, pyrazinamide, ciprofloxacin, moxifloxacin, ofloxacin, streptomycin) and to classify multi-drug resistance. Results Compared to the previous rules-based approach, the sensitivities of the best-performing models increased by 2-4% to 97% for isoniazid, rifampicin and ethambutol (P < 0.01); for ciprofloxacin and multi-drug-resistant TB, they increased to 96%. For moxifloxacin and ofloxacin, sensitivities increased by 12% and 15%, from 83% and 81% based on existing known resistance alleles to 95% and 96% (P < 0.01), respectively. In particular, our models improved sensitivities over the previous rules-based approach by 15% and 24%, to 84% and 87%, for pyrazinamide and streptomycin (P < 0.01), respectively. The best-performing models increase the area under the ROC curve by 10% for pyrazinamide and streptomycin (P < 0.01), and by 4–8% for the other drugs (P < 0.01). Availability and implementation The details of the source code are provided at http://www.robots.ox.ac.uk/~davidc/code.php. Contact david.clifton@eng.ox.ac.uk Supplementary information Supplementary data are available at Bioinformatics online. PMID:29240876
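A hedged sketch of the general setup (not the authors' code): one binary classifier per drug over presence/absence features for candidate resistance mutations. Feature counts and labels below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

n_isolates, n_mutations = 1839, 120                     # cohort size from the abstract
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(n_isolates, n_mutations))  # SNP present/absent (toy)
y = rng.integers(0, 2, size=n_isolates)                 # resistant to one drug (toy)

clf = LogisticRegression(max_iter=1000)                 # a simple stand-in model
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("per-drug cross-validated AUC:", auc.mean())
```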
Real-Time Human Detection for Aerial Captured Video Sequences via Deep Models.
AlDahoul, Nouar; Md Sabri, Aznul Qalid; Mansoor, Ali Mohammed
2018-01-01
Human detection in videos plays an important role in various real-life applications. Most traditional approaches depend on handcrafted features, which are problem-dependent and optimal only for specific tasks. Moreover, they are highly susceptible to dynamical events such as illumination changes, camera jitter, and variations in object size. Feature learning approaches, by contrast, are cheaper and easier because highly abstract and discriminative features can be produced automatically without the need for expert knowledge. In this paper, we utilize automatic feature learning methods that combine optical flow and three different deep models (i.e., a supervised convolutional neural network (S-CNN), a pretrained CNN feature extractor, and a hierarchical extreme learning machine (H-ELM)) for human detection in videos captured using a nonstatic camera on an aerial platform with varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset, and compared in terms of training accuracy, testing accuracy, and learning speed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrate that the proposed methods are successful for the human detection task. The pretrained CNN produces an average accuracy of 98.09%. S-CNN produces an average accuracy of 95.6% with soft-max and 91.7% with Support Vector Machines (SVM). H-ELM has an average accuracy of 95.9%. On a normal Central Processing Unit (CPU), H-ELM training takes 445 seconds, whereas learning in S-CNN takes 770 seconds on a high-performance Graphical Processing Unit (GPU).
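A minimal sketch of the "pretrained CNN as feature extractor plus SVM" pattern the paper evaluates; torchvision's ResNet-18 stands in for whichever backbone was actually used (the `weights=` argument assumes torchvision >= 0.13), and the frames and labels are placeholders.

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.resnet18(weights="IMAGENET1K_V1")       # ImageNet-pretrained
extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

frames = torch.rand(32, 3, 224, 224)          # stand-in aerial video frames
labels = torch.randint(0, 2, (32,)).numpy()   # 1 = human present (toy labels)

with torch.no_grad():
    feats = extractor(frames).flatten(1).numpy()  # 512-D feature per frame

svm = SVC(kernel="linear").fit(feats, labels)     # SVM on frozen CNN features
print("training accuracy:", svm.score(feats, labels))
```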
Three-dimensional eddy current solution of a polyphase machine test model (abstract)
NASA Astrophysics Data System (ADS)
Pahner, Uwe; Belmans, Ronnie; Ostovic, Vlado
1994-05-01
This abstract describes a three-dimensional (3D) finite element solution of a test model that has been reported in the literature. The model is a basis for calculating the current redistribution effects in the end windings of turbogenerators. The aim of the study is to see whether the analytical results of the test model can be found using a general purpose finite element package, thus indicating that the finite element model is accurate enough to treat real end winding problems. The real end winding problems cannot be solved analytically, as the geometry is far too complicated. The model consists of a polyphase coil set, containing 44 individual coils. This set generates a two pole mmf distribution on a cylindrical surface. The rotating field causes eddy currents to flow in the inner massive and conducting rotor. In the analytical solution a perfect sinusoidal mmf distribution is put forward. The finite element model contains 85824 tetrahedra and 16451 nodes. A complex single scalar potential representation is used in the nonconducting parts. The computation time required was 3 h and 42 min. The flux plots show that the field distribution is acceptable. Furthermore, the induced currents are calculated and compared with the values found from the analytical solution. The distribution of the eddy currents is very close to the distribution of the analytical solution. The most important results are the losses, both local and global. The value of the overall losses is less than 2% away from those of the analytical solution. Also the local distribution of the losses is at any given point less than 7% away from the analytical solution. The deviations of the results are acceptable and are partially due to the fact that the sinusoidal mmf distribution was not modeled perfectly in the finite element method.
Forecasting Significant Societal Events Using The Embers Streaming Predictive Analytics System
Katz, Graham; Summers, Kristen; Ackermann, Chris; Zavorin, Ilya; Lim, Zunsik; Muthiah, Sathappan; Butler, Patrick; Self, Nathan; Zhao, Liang; Lu, Chang-Tien; Khandpur, Rupinder Paul; Fayed, Youssef; Ramakrishnan, Naren
2014-01-01
Abstract Developed under the Intelligence Advanced Research Project Activity Open Source Indicators program, Early Model Based Event Recognition using Surrogates (EMBERS) is a large-scale big data analytics system for forecasting significant societal events, such as civil unrest events on the basis of continuous, automated analysis of large volumes of publicly available data. It has been operational since November 2012 and delivers approximately 50 predictions each day for countries of Latin America. EMBERS is built on a streaming, scalable, loosely coupled, shared-nothing architecture using ZeroMQ as its messaging backbone and JSON as its wire data format. It is deployed on Amazon Web Services using an entirely automated deployment process. We describe the architecture of the system, some of the design tradeoffs encountered during development, and specifics of the machine learning models underlying EMBERS. We also present a detailed prospective evaluation of EMBERS in forecasting significant societal events in the past 2 years. PMID:25553271
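A toy illustration of the architectural pattern described above (ZeroMQ messaging with JSON as the wire format), not EMBERS itself; the endpoint and message fields are invented, and publisher and subscriber would normally run in separate processes.

```python
import zmq

ctx = zmq.Context()

sub = ctx.socket(zmq.SUB)                  # a downstream consumer
sub.connect("tcp://localhost:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "")   # receive all messages

pub = ctx.socket(zmq.PUB)                  # an upstream forecasting component
pub.bind("tcp://*:5556")

alert = {"event": "civil_unrest", "country": "BR",
         "probability": 0.72, "forecast_date": "2014-06-01"}
pub.send_json(alert)                       # JSON-serialized on the wire
# the consumer side would call: alert = sub.recv_json()
```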
Ferrigno, C.F.
1986-01-01
Machine-readable files developed for the High Plains Regional Aquifer-System Analysis project are stored on two magnetic tapes available from the U.S. Geological Survey. The first tape contains computer programs that were used to prepare, store, retrieve, organize, and preserve the areal interpretive data collected by the project staff. The second tape contains 134 data files that can be divided into five general classes: (1) aquifer geometry data, (2) aquifer and water characteristics, (3) water levels, (4) climatological data, and (5) land use and water use data. (Author's abstract)
Wind energy utilization: A bibliography with abstracts - Cumulative volume 1944/1974
NASA Technical Reports Server (NTRS)
1975-01-01
Bibliography, up to 1974 inclusive, of articles and books on utilization of wind power in energy generation. Worldwide literature is surveyed, and short abstracts are provided in many cases. The citations are grouped by subject: (1) general; (2) utilization; (3) wind power plants; (4) wind power generators (rural, synchronous, remote station); (5) wind machines (motors, pumps, turbines, windmills, home-built); (6) wind data and properties; (7) energy storage; and (8) related topics (control and regulation devices, wind measuring devices, blade design and rotors, wind tunnel simulation, aerodynamics). Cross-referencing is aided by indexes of authors, corporate sources, titles, and keywords.
Decomposition of the compound Atwood machine
NASA Astrophysics Data System (ADS)
Lopes Coelho, R.
2017-11-01
Non-standard solving strategies for the compound Atwood machine problem have been proposed. The present strategy is based on a very simple idea. Taking an Atwood machine and replacing one of its bodies by another Atwood machine, we have a compound machine. As this operation can be repeated, we can construct any compound Atwood machine. This rule of construction is transferred to a mathematical model, whereby the equations of motion are obtained. The only difference between the machine and its model is that instead of pulleys and bodies, we have reference frames that move solidarily with these objects. This model provides us with the accelerations in the non-inertial frames of the bodies, which we will use to obtain the equations of motion. This approach to the problem will be justified by the Lagrange method and exemplified by machines with six and eight bodies.
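A worked example of the replacement rule, under the usual idealizations of massless pulleys and inextensible strings (this is the standard textbook result, not quoted from the paper):

```latex
% Hanging a second Atwood machine (masses m_2, m_3) on one side of the
% first, the sub-machine acts on the upper machine as an effective mass
\[
m_{\mathrm{eff}} \;=\; \frac{4\,m_2 m_3}{m_2 + m_3},
\]
% so the acceleration of body 1 is that of a simple Atwood machine:
\[
a_1 \;=\; \frac{m_1 - m_{\mathrm{eff}}}{m_1 + m_{\mathrm{eff}}}\, g .
\]
```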
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mou, J.I.; King, C.
The focus of this study is to develop a sensor-fused process modeling and control methodology to model, assess, and then enhance the performance of a hexapod machine for precision product realization. A deterministic modeling technique was used to derive models for machine performance assessment and enhancement. A sensor fusion methodology was adopted to identify the parameters of the derived models. Empirical models and computational algorithms were also derived and implemented to model, assess, and then enhance the machine performance. The developed sensor fusion algorithms can be implemented on a PC-based open architecture controller to receive information from various sensors, assess the status of the process, determine the proper action, and deliver the command to actuators for task execution. This will enhance a hexapod machine's capability to produce workpieces within the imposed dimensional tolerances.
Pfeiffenberger, Erik; Chaleil, Raphael A.G.; Moal, Iain H.
2017-01-01
ABSTRACT Reliable identification of near‐native poses of docked protein–protein complexes is still an unsolved problem. The intrinsic heterogeneity of protein–protein interactions is challenging for traditional biophysical or knowledge based potentials and the identification of many false positive binding sites is not unusual. Often, ranking protocols are based on initial clustering of docked poses followed by the application of an energy function to rank each cluster according to its lowest energy member. Here, we present an approach of cluster ranking based not only on one molecular descriptor (e.g., an energy function) but also employing a large number of descriptors that are integrated in a machine learning model, whereby, an extremely randomized tree classifier based on 109 molecular descriptors is trained. The protocol is based on first locally enriching clusters with additional poses, the clusters are then characterized using features describing the distribution of molecular descriptors within the cluster, which are combined into a pairwise cluster comparison model to discriminate near‐native from incorrect clusters. The results show that our approach is able to identify clusters containing near‐native protein–protein complexes. In addition, we present an analysis of the descriptors with respect to their power to discriminate near native from incorrect clusters and how data transformations and recursive feature elimination can improve the ranking performance. Proteins 2017; 85:528–543. © 2016 Wiley Periodicals, Inc. PMID:27935158
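A sketch of the core classification step only (not the authors' full 109-descriptor pipeline): an extremely randomized trees classifier over per-cluster descriptor statistics, discriminating near-native from incorrect clusters. The feature matrix here is synthetic.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 109))      # per-cluster descriptor features (stand-in)
y = rng.integers(0, 2, size=500)     # 1 = cluster contains a near-native pose

clf = ExtraTreesClassifier(n_estimators=500, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```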
Vaughan, Adam; Bohac, Stanislav V
2015-10-01
Fuel-efficient Homogeneous Charge Compression Ignition (HCCI) engine combustion timing predictions must contend with non-linear chemistry, non-linear physics, period doubling bifurcation(s), turbulent mixing, model parameters that can drift day-to-day, and air-fuel mixture state information that cannot typically be resolved on a cycle-to-cycle basis, especially during transients. In previous work, an abstract cycle-to-cycle mapping function coupled with ϵ-Support Vector Regression was shown to predict experimentally observed cycle-to-cycle combustion timing over a wide range of engine conditions, despite some of the aforementioned difficulties. The main limitation of the previous approach was that a partially acausal, randomly sampled training dataset was used to train proof-of-concept offline predictions. The objective of this paper is to address this limitation by proposing a new online adaptive Extreme Learning Machine (ELM) extension named Weighted Ring-ELM. This extension enables fully causal combustion timing predictions at randomly chosen engine set points, and is shown to achieve results that are as good as or better than those of the previous offline method. The broader objective of this approach is to enable a new class of real-time model predictive control strategies for high-variability HCCI and, ultimately, to bring HCCI's low engine-out NOx and reduced CO2 emissions to production engines. Copyright © 2015 Elsevier Ltd. All rights reserved.
ABSTRACT: There are thousands of environmental chemicals subject to regulatory decisions for endocrine disrupting potential. A promising approach to manage this large universe of untested chemicals is to use a prioritization filter that combines in vitro assays with in silico QSA...
Whet Students' Appetites with Food-Related Drafting Project
ERIC Educational Resources Information Center
Pucillo, John M.
2010-01-01
Students sometimes find introductory drafting and design a boring subject. They must learn the basic skills necessary for drafting and architecture and this may require repetition in order to reinforce those skills. One way to keep students interested is to have them draw objects they encounter in their own lives instead of abstract machine parts…
PASCAL Data Base: File Description and On Line Access on ESA/IRS.
ERIC Educational Resources Information Center
Pelissier, Denise
This report describes the PASCAL database, a machine readable version of the French abstract journal Bulletin Signaletique, which allows use of the file for (1) batch and online retrieval of information, (2) selective dissemination of information, and (3) publishing of the 50 sections of Bulletin Signaletique. The system, which covers nine…
Short guide to SDI profiling at ORNL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pomerance, H.S.
1976-06-01
ORNL has machine-searchable data bases that correspond to printed indexes and abstracts. This guide describes the peculiarities of those several data bases and the conventions of the ORNL search system so that users can write their own queries or search profiles and can interpret the part of the output that is encoded.
2017-11-01
[Report table-of-contents fragment; recoverable entries: Finite State Machine; Main Ontological Concepts for Representing Structure of a Multi-Agent ...; NetLogo Simulation of persistent surveillance of circular plume by 4 UAVs; Flocking Emergent Behaviors in Multi-UAV ...; Two UAVs Moving in (Region) - Undesirable Group Formation.]
Vieira, Sandra; Pinaya, Walter H L; Mechelli, Andrea
2017-03-01
Deep learning (DL) is a family of machine learning methods that has gained considerable attention in the scientific community, breaking benchmark records in areas such as speech and visual recognition. DL differs from conventional machine learning methods by virtue of its ability to learn the optimal representation from the raw data through consecutive nonlinear transformations, achieving increasingly higher levels of abstraction and complexity. Given its ability to detect abstract and complex patterns, DL has been applied in neuroimaging studies of psychiatric and neurological disorders, which are characterised by subtle and diffuse alterations. Here we introduce the underlying concepts of DL and review studies that have used this approach to classify brain-based disorders. The results of these studies indicate that DL could be a powerful tool in the current search for biomarkers of psychiatric and neurologic disease. We conclude our review by discussing the main promises and challenges of using DL to elucidate brain-based disorders, as well as possible directions for future research. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification
Pang, Shan; Yang, Xinyi
2016-01-01
In recent years, some deep learning methods have been developed and applied to image classification applications, such as convolutional neuron network (CNN) and deep belief network (DBN). However they are suffering from some problems like local minima, slow convergence rate, and intensive human intervention. In this paper, we propose a rapid learning method, namely, deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN and fast training of ELM. It uses multiple alternate convolution layers and pooling layers to effectively abstract high level features from input images. Then the abstracted features are fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to reduce dimensionality of features greatly, thus saving much training time and computation resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods. PMID:27610128
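To make the "fast training" idea concrete, a minimal extreme learning machine in NumPy: the hidden layer is random and fixed, and only the output weights are solved in closed form by ridge regression. This is plain ELM, a sketch only, not the paper's full DC-ELM with convolution and stochastic pooling.

```python
import numpy as np

def elm_train(X, Y, n_hidden=256, reg=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, fixed input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    # closed-form ridge solution for the output weights
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = np.random.rand(100, 64)                       # e.g. flattened digit patches
Y = np.eye(10)[np.random.randint(0, 10, 100)]     # one-hot class labels
W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)  # predicted digit classes
```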
Task-focused modeling in automated agriculture
NASA Astrophysics Data System (ADS)
Vriesenga, Mark R.; Peleg, K.; Sklansky, Jack
1993-01-01
Machine vision systems analyze image data to carry out automation tasks. Our interest is in machine vision systems that rely on models to achieve their designed task. When the model is interrogated from an a priori menu of questions, the model need not be complete. Instead, the machine vision system can use a partial model that contains a large amount of information in regions of interest and less information elsewhere. We propose an adaptive modeling scheme for machine vision, called task-focused modeling, which constructs a model having just sufficient detail to carry out the specified task. The model is detailed in regions of interest to the task and is less detailed elsewhere. This focusing effect saves time and reduces the computational effort expended by the machine vision system. We illustrate task-focused modeling by an example involving real-time micropropagation of plants in automated agriculture.
Human factors model concerning the man-machine interface of mining crewstations
NASA Technical Reports Server (NTRS)
Rider, James P.; Unger, Richard L.
1989-01-01
The U.S. Bureau of Mines is developing a computer model to analyze the human factors aspects of mining machine operator compartments. The model will be used as a research tool and as a design aid. It will have the capability to perform the following: simulated anthropometric or reach assessment, visibility analysis, illumination analysis, structural analysis of the protective canopy, operator fatigue analysis, and computation of an ingress-egress rating. The model will make extensive use of graphics to simplify data input and output. Two-dimensional orthographic projections of the machine and its operator compartment are digitized and the data rebuilt into a three-dimensional representation of the mining machine. Anthropometric data from either an individual or any size population may be used. The model is intended for use by equipment manufacturers and mining companies during initial design work on new machines. In addition to its use in machine design, the model should prove helpful as an accident investigation tool and for determining the effects of machine modifications made in the field on the critical areas of visibility and control reachability.
Improving Machining Accuracy of CNC Machines with Innovative Design Methods
NASA Astrophysics Data System (ADS)
Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.
2018-03-01
The article considers achieving the machining accuracy of CNC machines by applying innovative methods to the modelling and design of machining systems, drives and machining processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory with the efficiency of decomposition methods; it also has the visual clarity inherent in both topological models and structural matrices, as well as the resilience of linear algebra as part of the matrix-based research. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the design and exploitation stages. Having researched the impact of system dynamics on component quality, the authors have developed a range of practical recommendations that considerably reduce the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0-6000 min⁻¹ and improve machining accuracy.
Kant, Vivek
2017-03-01
Jens Rasmussen's contribution to the field of human factors and ergonomics has had a lasting impact. Six prominent interrelated themes can be extracted from his research between 1961 and 1986. These themes form the basis of an engineering epistemology which is best manifested by his abstraction hierarchy. Further, Rasmussen reformulated technical reliability using systems language to enable a proper human-machine fit. To understand the concept of human-machine fit, he included the operator as a central component in the system to enhance system safety. This change resulted in the application of a qualitative and categorical approach for human-machine interaction design. Finally, Rasmussen's insistence on a working philosophy of systems design as being a joint responsibility of operators and designers provided the basis for averting errors and ensuring safe and correct system functioning. Copyright © 2016 Elsevier Ltd. All rights reserved.
Dolev, Danny; Függer, Matthias; Posch, Markus; Schmid, Ulrich; Steininger, Andreas; Lenzen, Christoph
2014-01-01
We present the first implementation of a distributed clock generation scheme for Systems-on-Chip that recovers from an unbounded number of arbitrary transient faults despite a large number of arbitrary permanent faults. We devise self-stabilizing hardware building blocks and a hybrid synchronous/asynchronous state machine enabling metastability-free transitions of the algorithm's states. We provide a comprehensive modeling approach that permits to prove, given correctness of the constructed low-level building blocks, the high-level properties of the synchronization algorithm (which have been established in a more abstract model). We believe this approach to be of interest in its own right, since this is the first technique permitting to mathematically verify, at manageable complexity, high-level properties of a fault-prone system in terms of its very basic components. We evaluate a prototype implementation, which has been designed in VHDL, using the Petrify tool in conjunction with some extensions, and synthesized for an Altera Cyclone FPGA. PMID:26516290
Two Theories Are Better Than One
NASA Astrophysics Data System (ADS)
Jones, Robert
2008-03-01
All knowledge is of an approximate character (B. Russell, Human Knowledge, 1948, pg 497 and 507). Our formalisms abstract, idealize, and simplify (R. L. Epstein, Propositional Logics, 2001, Ch XI and E. Bender, An Intro. to Math. Modeling, 1978, pg v and 2). Each formalism is an idealization, often approximating in its own DIFFERENT ways, each offering somewhat different coverage of the domain. Having MULTIPLE overlapping theories of a knowledge domain is then better than having just one theory (R. Jones, APS general meeting, April 2004). Theories are not unique (T. M. Mitchell, Machine Learning, 1997, pg 65-66 and Cooper, Machine Learning, vol. 9, 1992, pg 319). In the future every field will possess multiple theories of its domain, and scientific work and engineering will be performed based on the ensemble predictions of ALL of these. In some cases the theories may be quite divergent, differing greatly one from the other. This idea can be considered an extension of Bohr's notion of complementarity, ``...different experimental arrangements...described by different physical concepts...together and only together exhaust the definable information we can obtain about the object.'' (H. J. Folse, The Philosophy of Niels Bohr, 1985, pg 238)
Generative Modeling for Machine Learning on the D-Wave
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thulasidasan, Sunil
These are slides on Generative Modeling for Machine Learning on the D-Wave. The following topics are detailed: generative models; Boltzmann machines: a generative model; restricted Boltzmann machines; learning parameters: RBM training; practical ways to train RBM; D-Wave as a Boltzmann sampler; mapping RBM onto the D-Wave; Chimera restricted RBM; mapping binary RBM to Ising model; experiments; data; D-Wave effective temperature, parameters noise, etc.; experiments: contrastive divergence (CD) 1 step; after 50 steps of CD; after 100 steps of CD; D-Wave (experiments 1, 2, 3); D-Wave observations.
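To ground the RBM-training topics listed above, a toy single step of contrastive divergence (CD-1) for a binary restricted Boltzmann machine in NumPy; bias terms are omitted, and the shapes and learning rate are arbitrary choices, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
n_v, n_h, lr = 16, 8, 0.05
W = 0.01 * rng.normal(size=(n_v, n_h))                  # visible-hidden weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v0 = rng.integers(0, 2, size=(32, n_v)).astype(float)   # a batch of binary data
ph0 = sigmoid(v0 @ W)                                   # P(h=1 | v0)
h0 = (rng.random(ph0.shape) < ph0).astype(float)        # sample hidden units
v1 = sigmoid(h0 @ W.T)                                  # reconstruction of v
ph1 = sigmoid(v1 @ W)                                   # P(h=1 | v1)
W += lr * (v0.T @ ph0 - v1.T @ ph1) / v0.shape[0]       # CD-1 weight update
```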
Yu, Wei; Clyne, Melinda; Dolan, Siobhan M; Yesupriya, Ajay; Wulf, Anja; Liu, Tiebin; Khoury, Muin J; Gwinn, Marta
2008-01-01
Background Synthesis of data from published human genetic association studies is a critical step in the translation of human genome discoveries into health applications. Although genetic association studies account for a substantial proportion of the abstracts in PubMed, identifying them with standard queries is not always accurate or efficient. Further automating the literature-screening process can reduce the burden of a labor-intensive and time-consuming traditional literature search. The Support Vector Machine (SVM), a well-established machine learning technique, has been successful in classifying text, including biomedical literature. The GAPscreener, a free SVM-based software tool, can be used to assist in screening PubMed abstracts for human genetic association studies. Results The data source for this research was the HuGE Navigator, formerly known as the HuGE Pub Lit database. Weighted SVM feature selection based on a keyword list obtained by the two-way z score method demonstrated the best screening performance, achieving 97.5% recall, 98.3% specificity and 31.9% precision in performance testing. Compared with the traditional screening process based on a complex PubMed query, the SVM tool reduced by about 90% the number of abstracts requiring individual review by the database curator. The tool also ascertained 47 articles that were missed by the traditional literature screening process during the 4-week test period. We examined the literature on genetic associations with preterm birth as an example. Compared with the traditional, manual process, the GAPscreener both reduced effort and improved accuracy. Conclusion GAPscreener is the first free SVM-based application available for screening the human genetic association literature in PubMed with high recall and specificity. The user-friendly graphical user interface makes this a practical, stand-alone application. The software can be downloaded at no charge. PMID:18430222
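An illustrative SVM abstract-screening pipeline in the spirit of GAPscreener, not its actual implementation: TF-IDF features over PubMed abstracts and a linear SVM. The toy abstracts and labels are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

abstracts = ["polymorphism associated with preterm birth risk ...",
             "a new surgical technique for hip replacement ..."]
labels = [1, 0]   # 1 = human genetic association study

screen = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
screen.fit(abstracts, labels)
print(screen.predict(["genotype frequencies and asthma susceptibility ..."]))
```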
Architectures for intelligent machines
NASA Technical Reports Server (NTRS)
Saridis, George N.
1991-01-01
The theory of intelligent machines has recently been reformulated to incorporate new architectures that use neural and Petri nets. The analytic functions of an intelligent machine are implemented by intelligent controls, using entropy as a measure. The resulting hierarchical control structure is based on the principle of increasing precision with decreasing intelligence. Each of the three levels of the intelligent control uses a different architecture in order to satisfy the requirements of the principle: the organization level is modeled after a Boltzmann machine for abstract reasoning, task planning and decision making; the coordination level is composed of a number of Petri net transducers supervised, for command exchange, by a dispatcher, which also serves as an interface to the organization level; the execution level includes the sensory, navigation-planning and control hardware, which interacts one-to-one with the appropriate coordinators, while a VME bus provides a channel for database exchange among the several devices. This system is currently being implemented on a robotic transporter, designed for space construction at the CIRSSE laboratories at the Rensselaer Polytechnic Institute. The progress of its development is reported.
Luo, Wei; Phung, Dinh; Tran, Truyen; Gupta, Sunil; Rana, Santu; Karmakar, Chandan; Shilton, Alistair; Yearwood, John; Dimitrova, Nevenka; Ho, Tu Bao; Venkatesh, Svetha; Berk, Michael
2016-12-16
As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. ©Wei Luo, Dinh Phung, Truyen Tran, Sunil Gupta, Santu Rana, Chandan Karmakar, Alistair Shilton, John Yearwood, Nevenka Dimitrova, Tu Bao Ho, Svetha Venkatesh, Michael Berk. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 16.12.2016. PMID:27986644
Experimental and mathematical modeling of flow in headboxes
NASA Astrophysics Data System (ADS)
Shariati, Mohammad Reza
The fluid flow patterns in a paper-machine headbox have a strong influence on the quality of the paper produced by the machine. Due to increasing demand for high quality paper there is a need to investigate the details of the fluid flow in the paper machine headbox. The objective of this thesis is to use experimental and computational methods of modeling the flow inside a typical headbox in order to evaluate and understand the mean flow patterns and turbulence created there. In particular, spatial variations of the mean flow and of the turbulence quantities and the turbulence generated secondary flows are studied. In addition to the flow inside the headbox, the flow leaving the slice is also modeled both experimentally and computationally. Comparison of the experimental and numerical results indicated that streamwise mean components of the velocities in the headbox are predicted well by all the turbulence models considered in this study. However, the standard k-epsilon model and the algebraic turbulence models fail to predict the turbulence quantities accurately. Standard k-epsilon-model also fails to predict the direction and magnitude of the secondary flows. Significant improvements in the k-epsilon model predictions were achieved when the turbulence production term was artificially set to zero. This is justified by observations of the turbulent velocities from the experiments and by a consideration of the form of the kinetic energy equation. A better estimation of the Reynolds normal stress distribution and the degree of anisotropy of turbulence was achieved using the Reynolds stress turbulence model. Careful examination of the measured turbulence velocity results shows that after the initial decay of the turbulence in the headbox, there is a short region close to the exit, but inside the headbox, where the turbulent kinetic energy actually increases as a result of the distortion imposed by the contraction. The turbulence energy quickly resumes its decay in the free jet after the headbox. The overall conclusion from this thesis, obtained by comparison of experimental and computational simulations of the flow in a headbox, is that numerical simulations show great promise for predictions of headbox flows. Mean velocities and turbulence characteristics can now be predicted with fair accuracy by careful use of specialized turbulence models. Standard engineering turbulence models, such as the k-epsilon model and its immediate relatives, should not be used to estimate the turbulence quantities essential for predicting pulp fiber dispersion within the contracting region and free jet of a headbox, particularly when the overall contraction ratio is greater than about five. (Abstract shortened by UMI.)
An, Ji‐Yong; Meng, Fan‐Rong; Chen, Xing; Yan, Gui‐Ying; Hu, Ji‐Pu
2016-01-01
Abstract Predicting protein-protein interactions (PPIs) is a challenging task and essential to constructing the protein interaction networks that are important for facilitating our understanding of the mechanisms of biological systems. Although a number of high-throughput technologies have been proposed to predict PPIs, they have unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM-BiGP that combines the relevance vector machine (RVM) model and Bi-gram Probabilities (BiGP) for PPI detection from protein sequences. The major improvements include: (1) protein sequences are represented using the Bi-gram Probabilities (BiGP) feature representation on a Position Specific Scoring Matrix (PSSM), in which the protein evolutionary information is contained; (2) to reduce the influence of noise, the Principal Component Analysis (PCA) method is used to reduce the dimension of the BiGP vector; (3) the powerful and robust Relevance Vector Machine (RVM) algorithm is used for classification. Five-fold cross-validation experiments executed on yeast and Helicobacter pylori datasets achieved very high accuracies of 94.57 and 90.57%, respectively, significantly better than previous methods. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-BiGP method is significantly better than the SVM-based method. In addition, we achieved 97.15% accuracy on the imbalanced yeast dataset, which is higher than on the balanced yeast dataset. The promising experimental results show the efficiency and robustness of the proposed method, which can be an automatic decision support tool for future proteomics research. To facilitate extensive studies for future proteomics research, we developed a freely available web server called RVM-BiGP-PPIs in Hypertext Preprocessor (PHP) for predicting PPIs. The web server, including source code and the datasets, is available at http://219.219.62.123:8888/BiGP/. PMID:27452983
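A sketch of the bi-gram probability feature idea on a PSSM as the abstract outlines it (the paper's exact normalization may differ), followed by the PCA step; an SVM stands in for the relevance vector machine, which scikit-learn does not provide. All data below are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def bigram_features(pssm):
    """pssm: (sequence_length, 20) scoring matrix -> 400-D bi-gram vector."""
    p = 1.0 / (1.0 + np.exp(-pssm))            # squash scores to (0, 1)
    return (p[:-1].T @ p[1:]).ravel()          # aggregate adjacent positions

pssms = [np.random.normal(size=(np.random.randint(50, 300), 20))
         for _ in range(40)]                   # stand-in PSSMs
X = np.array([bigram_features(m) for m in pssms])
y = np.random.randint(0, 2, size=40)           # 1 = interacting pair (toy)

X_red = PCA(n_components=20).fit_transform(X)  # noise-reduction step
clf = SVC().fit(X_red, y)                      # stand-in for the RVM classifier
```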
Olm, Matthew R.; Morowitz, Michael J.
2018-01-01
ABSTRACT Antibiotic resistance in pathogens is extensively studied, and yet little is known about how antibiotic resistance genes of typical gut bacteria influence microbiome dynamics. Here, we leveraged genomes from metagenomes to investigate how genes of the premature infant gut resistome correspond to the ability of bacteria to survive under certain environmental and clinical conditions. We found that formula feeding impacts the resistome. Random forest models corroborated by statistical tests revealed that the gut resistome of formula-fed infants is enriched in class D beta-lactamase genes. Interestingly, Clostridium difficile strains harboring this gene are at higher abundance in formula-fed infants than C. difficile strains lacking this gene. Organisms with genes for major facilitator superfamily drug efflux pumps have higher replication rates under all conditions, even in the absence of antibiotic therapy. Using a machine learning approach, we identified genes that are predictive of an organism’s direction of change in relative abundance after administration of vancomycin and cephalosporin antibiotics. The most accurate results were obtained by reducing annotated genomic data to five principal components classified by boosted decision trees. Among the genes involved in predicting whether an organism increased in relative abundance after treatment are those that encode subclass B2 beta-lactamases and transcriptional regulators of vancomycin resistance. This demonstrates that machine learning applied to genome-resolved metagenomics data can identify key genes for survival after antibiotics treatment and predict how organisms in the gut microbiome will respond to antibiotic administration. IMPORTANCE The process of reconstructing genomes from environmental sequence data (genome-resolved metagenomics) allows unique insight into microbial systems. We apply this technique to investigate how the antibiotic resistance genes of bacteria affect their ability to flourish in the gut under various conditions. Our analysis reveals that strain-level selection in formula-fed infants drives enrichment of beta-lactamase genes in the gut resistome. Using genomes from metagenomes, we built a machine learning model to predict how organisms in the gut microbial community respond to perturbation by antibiotics. This may eventually have clinical applications. PMID:29359195
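A minimal sketch of the reported best-performing setup: reduce annotated genome features to five principal components, then classify the direction of abundance change with boosted decision trees. The data here are synthetic placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.poisson(1.0, size=(150, 400)).astype(float)  # gene annotation counts (toy)
y = rng.integers(0, 2, size=150)   # 1 = increased in abundance after antibiotics

model = make_pipeline(PCA(n_components=5), GradientBoostingClassifier())
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```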
NASA Astrophysics Data System (ADS)
Bosse, Stefan
2013-05-01
Sensorial materials consisting of high-density, miniaturized, and embedded sensor networks require new robust and reliable data processing and communication approaches. Structural health monitoring is one major field of application for sensorial materials. Each sensor node provides some kind of sensor, electronics, data processing, and communication with a strong focus on microchip-level implementation to meet the goals of miniaturization and low-power energy environments, a prerequisite for autonomous behaviour and operation. Reliability requires robustness of the entire system in the presence of node, link, data processing, and communication failures. Interaction between nodes is required to manage and distribute information. One common interaction model is the mobile agent. An agent approach provides stronger autonomy than a traditional object or remote-procedure-call based approach. Agents can decide for themselves, which actions are performed, and they are capable of flexible behaviour, reacting on the environment and other agents, providing some degree of robustness. Traditionally multi-agent systems are abstract programming models which are implemented in software and executed on program controlled computer architectures. This approach does not well scale to micro-chip level and requires full equipped computers and communication structures, and the hardware architecture does not consider and reflect the requirements for agent processing and interaction. We propose and demonstrate a novel design paradigm for reliable distributed data processing systems and a synthesis methodology and framework for multi-agent systems implementable entirely on microchip-level with resource and power constrained digital logic supporting Agent-On-Chip architectures (AoC). The agent behaviour and mobility is fully integrated on the micro-chip using pipelined communicating processes implemented with finite-state machines and register-transfer logic. The agent behaviour, interaction (communication), and mobility features are modelled and specified on a machine-independent abstract programming level using a state-based agent behaviour language (APL). With this APL a high-level agent compiler is able to synthesize a hardware model (RTL, VHDL), a software model (C, ML), or a simulation model (XML) suitable to simulate a multi-agent system using the SeSAm simulator framework. Agent communication is provided by a simple tuple-space database implemented on node level providing fault tolerant access of global data. A novel synthesis development kit (SynDK) based on a graph-structured database approach is introduced to support the rapid development of compilers and synthesis tools, used for example for the design and implementation of the APL compiler.
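A conceptual Python toy of two ideas above, state-based agent behaviour plus a node-level tuple space; this is not the APL language or the RTL synthesis described in the paper, and the threshold and tuple fields are invented.

```python
class TupleSpace:
    """Minimal node-level tuple space: publish and pattern-matched withdrawal."""
    def __init__(self):
        self.tuples = []
    def out(self, t):              # publish a tuple
        self.tuples.append(t)
    def inp(self, pattern):        # withdraw first tuple matching the pattern
        for t in list(self.tuples):
            if all(p is None or p == v for p, v in zip(pattern, t)):
                self.tuples.remove(t)
                return t
        return None

class SensorAgent:
    """Two-state agent: measure, then report events through the tuple space."""
    def __init__(self, space):
        self.space, self.state = space, "MEASURE"
    def step(self, reading):
        if self.state == "MEASURE" and reading > 0.8:   # threshold is assumed
            self.space.out(("strain_event", reading))
            self.state = "REPORT"
        elif self.state == "REPORT":
            self.state = "MEASURE"                      # resume measuring

space = TupleSpace()
agent = SensorAgent(space)
for r in (0.2, 0.9, 0.1):
    agent.step(r)
print(space.inp(("strain_event", None)))                # -> ("strain_event", 0.9)
```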
The Efficacy of Machine Learning Programs for Navy Manpower Analysis
1993-03-01
This thesis investigated the efficacy of two machine learning programs for Navy manpower analysis. Two machine learning programs, AIM and IXL, were ... to generate models from the two commercial machine learning programs. Using a held-out subset of the data, the capabilities of the three models were ... partial effects. The author recommended further investigation of AIM's capabilities, and testing in an operational environment.
A comparison of machine learning and Bayesian modelling for molecular serotyping.
Newton, Richard; Wernisch, Lorenz
2017-08-11
Streptococcus pneumoniae is a human pathogen that is a major cause of infant mortality. Identifying the pneumococcal serotype is an important step in monitoring the impact of vaccines used to protect against disease. Genomic microarrays provide an effective method for molecular serotyping. Previously we developed an empirical Bayesian model for the classification of serotypes from a molecular serotyping array. With only few samples available, a model-driven approach was the only option. In the meanwhile, several thousand samples have been made available to us, providing an opportunity to investigate serotype classification by machine learning methods, which could complement the Bayesian model. We compare the performance of the original Bayesian model with two machine learning algorithms: Gradient Boosting Machines and Random Forests. We present our results as an example of a generic strategy whereby a preliminary probabilistic model is complemented or replaced by a machine learning classifier once enough data are available. Despite the availability of thousands of serotyping arrays, a problem encountered when applying machine learning methods is the lack of training data containing mixtures of serotypes, due to the large number of possible combinations. Most of the available training data comprises samples with only a single serotype. To overcome the lack of training data we implemented an iterative analysis, creating artificial training data of serotype mixtures by combining raw data from single serotype arrays. With the enhanced training set the machine learning algorithms outperform the original Bayesian model. However, for serotypes currently lacking sufficient training data the best performing implementation was a combination of the results of the Bayesian model and the Gradient Boosting Machine. As well as being an effective method for classifying biological data, machine learning can also be used as an efficient method for revealing subtle biological insights, which we illustrate with an example.
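A sketch of the training-data augmentation idea described above: synthesize mixed-serotype training examples by combining raw signals from single-serotype arrays. Combining by element-wise maximum is an assumption; the paper's exact scheme may differ, and the probe intensities are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
singles = {"19F": rng.random(500), "6B": rng.random(500)}  # probe intensities (toy)

def synth_mixture(serotypes, arrays):
    """Combine single-serotype array signals into an artificial mixture."""
    return np.maximum.reduce([arrays[s] for s in serotypes])

X_aug = synth_mixture(["19F", "6B"], singles)   # training example labeled {19F, 6B}
```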
Exploring cluster Monte Carlo updates with Boltzmann machines
NASA Astrophysics Data System (ADS)
Wang, Lei
2017-11-01
Boltzmann machines are physics informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applying the Boltzmann machines back to physics, they are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
Using machine learning for sequence-level automated MRI protocol selection in neuroradiology.
Brown, Andrew D; Marotta, Thomas R
2018-05-01
Incorrect imaging protocol selection can lead to important clinical findings being missed, contributing to both wasted health care resources and patient harm. We present a machine learning method for analyzing the unstructured text of clinical indications and patient demographics from magnetic resonance imaging (MRI) orders to automatically protocol MRI procedures at the sequence level. We compared 3 machine learning models - support vector machine, gradient boosting machine, and random forest - to a baseline model that predicted the most common protocol for all observations in our test set. The gradient boosting machine model significantly outperformed the baseline and demonstrated the best performance of the 3 models in terms of accuracy (95%), precision (86%), recall (80%), and Hamming loss (0.0487). This demonstrates the feasibility of automating sequence selection by applying machine learning to MRI orders. Automated sequence selection has important safety, quality, and financial implications and may facilitate improvements in the quality and safety of medical imaging service delivery.
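As a rough illustration of the pipeline described above (free-text order to protocol label), here is a minimal scikit-learn sketch comparing a text classifier to a most-frequent-class baseline; the orders, labels and model settings are hypothetical and do not reproduce the paper's feature engineering:

```python
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Hypothetical MRI orders: free-text clinical indication -> protocol label.
orders = [
    "new onset seizure, rule out structural lesion",
    "chronic headache, worse in morning",
    "pituitary adenoma follow up",
    "acute stroke symptoms, left sided weakness",
    "ms surveillance, optic neuritis history",
    "pituitary microadenoma surveillance",
]
protocols = ["brain", "brain", "pituitary", "stroke", "ms", "pituitary"]

# Baseline: always predict the most common protocol in the training set.
baseline = DummyClassifier(strategy="most_frequent").fit(orders, protocols)

# Candidate model: TF-IDF text features + gradient boosting classifier.
model = make_pipeline(TfidfVectorizer(), GradientBoostingClassifier())
model.fit(orders, protocols)
print(model.predict(["follow up of pituitary lesion"]))
```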
Learning About Climate and Atmospheric Models Through Machine Learning
NASA Astrophysics Data System (ADS)
Lucas, D. D.
2017-12-01
From the analysis of ensemble variability to improving simulation performance, machine learning algorithms can play a powerful role in understanding the behavior of atmospheric and climate models. To learn about model behavior, we create training and testing data sets through ensemble techniques that sample different model configurations and values of input parameters, and then use supervised machine learning to map the relationships between the inputs and outputs. Following this procedure, we have used support vector machines, random forests, gradient boosting and other methods to investigate a variety of atmospheric and climate model phenomena. We have used machine learning to predict simulation crashes, estimate the probability density function of climate sensitivity, optimize simulations of the Madden Julian oscillation, assess the impacts of weather and emissions uncertainty on atmospheric dispersion, and quantify the effects of model resolution changes on precipitation. This presentation highlights recent examples of our applications of machine learning to improve the understanding of climate and atmospheric models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
ERIC Educational Resources Information Center
International Business Machines Corp., Milford, CT. Academic Information Systems.
This agenda lists activities scheduled for the second IBM (International Business Machines) Academic Information Systems University AEP (Advanced Education Projects) Conference, which was designed to afford the universities participating in the IBM-sponsored AEPs an opportunity to demonstrate their AEP experiments in educational computing. In…
Technical Reliability Studies. EOS/ESD Technology Abstracts
1981-01-01
[Keyword-index fragment; recoverable entry titles include: automatic machine precautions for MOS/CMOS devices, instructions for installation, elimination of EOS-induced secondary failure mechanisms, and use of melamine work surfaces for ESD potential bleed-off.] The abstracts cover microwave devices, optoelectronics, and selected nonelectronic parts employed in military, space and commercial applications. In addition, a System ...
Evans, Lyn
2018-05-23
Abstract: From the civil engineering to the manufacturing of the various magnet types, each building block of this extraordinary machine required ambitious leaps in innovation. This lecture will review the history of the LHC project, focusing on the many challenges -- scientific, technological, managerial -- that had to be met during the various phases of R&D, industrialization, construction, installation and commissioning.
Implications of Gendered Technology for Art Education: The Case Study of a Male Drawing Machine.
ERIC Educational Resources Information Center
Morbey, Mary Leigh
Opening with a discussion of AARON, an artificial intelligence symbol system that is used to generate computer drawings, this document makes the argument that AARON is based upon a way of knowing that is abstract, analytical, rationalist and thus representative of the dominant, western, male philosophical tradition. Male bias permeates the field…
Using the global positioning system to map disturbance patterns of forest harvesting machinery
T.P. McDonald; E.A. Carter; S.E. Taylor
2002-01-01
Abstract: A method was presented to transform sampled machine positional data obtained from a global positioning system (GPS) receiver into a two-dimensional raster map of the number of passes as a function of location. The effects of three sources of error in the transformation process were investigated: path sampling rate (receiver sampling frequency); ...
NASA Astrophysics Data System (ADS)
Wu, Huaying; Wang, Li Zhong; Wang, Yantao; Yuan, Xiaolei
2018-05-01
The blade of a hypervelocity grinding wheel may be damaged by an excessively high spindle rotation rate and fly out, and its speed as a projectile may severely endanger field personnel. A critical-thickness model for the protective plate of a high-speed machine is studied in this paper. For ease of analysis, the shapes of the possible impact objects flying from the high-speed machine are simplified into sharp-nose, ball-nose and flat-nose models, whose front-end shapes represent point, line and surface contact, respectively. Impact analyses based on the Johnson-Cook (J-C) model are performed for low-carbon steel plates of different thicknesses. A computational model for the critical thickness of the protective plate of a high-speed machine is established from the damage characteristics of thin plates, relating plate thickness to the mass, shape, size and impact speed of the impact object. An air cannon is used for impact tests, and the model accuracy is validated. This model can guide selection of the thickness of the single-layer outer protective plate of a high-speed machine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strout, Michelle
Programming parallel machines is fraught with difficulties: the obfuscation of algorithms by implementation details such as communication and synchronization, the need for transparency between language constructs and performance, the difficulty of performing program analysis to enable automatic parallelization techniques, and the existence of important "dusty deck" codes. The SAIMI project developed abstractions that enable the orthogonal specification of algorithms and implementation details within the context of existing DOE applications. The main idea is to enable the injection of small programming models, such as expressions involving transcendental functions, polyhedral iteration spaces with sparse constraints, and task graphs, into full programs through the use of pragmas. These smaller, more restricted programming models enable orthogonal specification of many implementation details, such as how to map the computation onto parallel processors, how to schedule the computation, and how to allocate storage for the computation. At the same time, these small programming models can express the most computationally intense and communication-heavy portions of many scientific simulations. The ability to orthogonally manipulate the implementation of such computations will significantly ease performance programming efforts and expose transformation possibilities and parameters to automated approaches such as autotuning. At Colorado State University, the SAIMI project was supported by DOE grant DE-SC3956 from April 2010 through August 2015. The SAIMI project contributed a number of important results on programming abstractions that enable the orthogonal specification of implementation details in scientific codes. This final report summarizes the research funded by the SAIMI project.
Cosmic logic: a computational model
NASA Astrophysics Data System (ADS)
Vanchurin, Vitaly
2016-02-01
We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape; CM machines take a CO's Turing number (also known as its description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO Turing numbers as input, but output one if the CO machines are in the same equivalence class and zero otherwise. We argue that CS machines are more fundamental than CM machines and thus should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine that discriminates between two classes of CO machines: mortal machines, which halt in finite time, and immortal machines, which run forever. In the context of eternal inflation, this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines that halt after a finite and predetermined number of steps.
Automation of energy demand forecasting
NASA Astrophysics Data System (ADS)
Siddique, Sanzad
Automation of energy demand forecasting saves time and effort by searching automatically for an appropriate model in a candidate model space without manual intervention. This thesis introduces a search-based approach that improves the performance of the model searching process for econometrics models. Further improvements in the accuracy of the energy demand forecasting are achieved by integrating nonlinear transformations within the models. This thesis introduces machine learning techniques that are capable of modeling such nonlinearity. Algorithms for learning domain knowledge from time series data using the machine learning methods are also presented. The novel search based approach and the machine learning models are tested with synthetic data as well as with natural gas and electricity demand signals. Experimental results show that the model searching technique is capable of finding an appropriate forecasting model. Further experimental results demonstrate an improved forecasting accuracy achieved by using the novel machine learning techniques introduced in this thesis. This thesis presents an analysis of how the machine learning techniques learn domain knowledge. The learned domain knowledge is used to improve the forecast accuracy.
Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models
2015-09-12
AFRL-AFOSR-VA-TR-2015-0278. Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models; principal investigator Katya Scheinberg; grant number FA9550-11-1-0239. [Report documentation page residue omitted; the surviving abstract fragment reads: "...developed, which has been the focus of our research."] Subject terms: optimization, derivative-free optimization, statistical machine learning.
Cape Blanco wind farm feasibility study
NASA Astrophysics Data System (ADS)
1987-11-01
The Cape Blanco Wind Farm (CBWF) Feasibility Study was undertaken as a prototype for determining the feasibility of proposals for wind energy projects at Northwest sites. It was intended to test for conditions under which wind generation of electricity could be commercially feasible, not by another abstract survey of alternative technologies, but rather through a site-specific, machine-specific analysis of one proposal. Some of the study findings would be most pertinent to the Cape Blanco site - local problems require local solutions. Other findings would be readily applicable to other sites and other machines, and study methodologies would be designed to be modified for appraisal of other proposals. This volume discusses environmental, economic, and technical issues of the Wind Farm.
NASA Astrophysics Data System (ADS)
Yu, Jianbo
2015-12-01
Prognostics is highly effective in achieving zero-downtime performance, maximum productivity and proactive maintenance of machines. Prognostics aims to assess and predict the time evolution of machine health degradation so that machine failures can be predicted and prevented. A novel prognostics system is developed based on a data-model-fusion scheme using a Bayesian inference-based self-organizing map (SOM) and an integration of logistic regression (LR) and high-order particle filtering (HOPF). In this prognostics system, a baseline SOM is constructed to model the data distribution space of a healthy machine, under the assumption that predictable fault patterns are not available. A Bayesian inference-based probability (BIP) derived from the baseline SOM is developed as a quantitative indicator of machine health degradation. BIP offers a failure probability for the monitored machine, with an intuitive interpretation in terms of the health degradation state. Based on historic BIPs, the constructed LR and its modeling noise constitute a high-order Markov process (HOMP) describing machine health propagation. HOPF is used to solve the HOMP estimation and predict the evolution of machine health in the form of a probability density function (PDF). An on-line model update scheme is developed to quickly adapt the Markov process to changes in machine health dynamics. The experimental results on a bearing test-bed illustrate the potential of the proposed system as an effective and simple tool for machine health prognostics.
Li, Ping; Schloss, Benjamin; Follmer, D Jake
2017-10-01
In this article we report a computational semantic analysis of the presidential candidates' speeches in the two major political parties in the USA. In Study One, we modeled the political semantic spaces as a function of party, candidate, and time of election, and findings revealed patterns of differences in the semantic representation of key political concepts and the changing landscapes in which the presidential candidates align or misalign with their parties in terms of the representation and organization of politically central concepts. Our models further showed that the 2016 US presidential nominees had distinct conceptual representations from those of previous election years, and these patterns did not necessarily align with their respective political parties' average representation of the key political concepts. In Study Two, structural equation modeling demonstrated that reported political engagement among voters differentially predicted reported likelihoods of voting for Clinton versus Trump in the 2016 presidential election. Study Three indicated that Republicans and Democrats showed distinct, systematic word association patterns for the same concepts/terms, which could be reliably distinguished using machine learning methods. These studies suggest that given an individual's political beliefs, we can make reliable predictions about how they understand words, and given how an individual understands those same words, we can also predict an individual's political beliefs. Our study provides a bridge between semantic space models and abstract representations of political concepts on the one hand, and the representations of political concepts and citizens' voting behavior on the other.
Modeling and simulation of five-axis virtual machine based on NX
NASA Astrophysics Data System (ADS)
Li, Xiaoda; Zhan, Xianghui
2018-04-01
Virtual technology plays a growing role in the machinery manufacturing industry. In this paper, Siemens NX software is used to model a virtual CNC machine tool, and the parameters of the virtual machine are defined according to the actual parameters of the machine tool so that virtual simulation can be carried out without loss of simulation accuracy. How to use the machine builder of the CAM module to define the kinematic chain and the machine components is described. Simulation with the virtual machine can alert users to tool collisions and overcutting during the process, and can evaluate and forecast the rationality of the technological process.
NASA Astrophysics Data System (ADS)
Pervaiz, S.; Anwar, S.; Kannan, S.; Almarfadi, A.
2018-04-01
Ti6Al4V is known as a difficult-to-cut material due to inherent properties such as high hot hardness, low thermal conductivity and high chemical reactivity. Nevertheless, Ti6Al4V is utilized in industrial sectors such as aeronautics, energy generation, petrochemicals and bio-medicine. For the metal cutting community, competent and cost-effective machining of Ti6Al4V is a challenging task. To optimize cost and machining performance for the machining of Ti6Al4V, finite element based cutting simulation can be a very useful tool. The aim of this paper is to develop a finite element machining model for the simulation of the Ti6Al4V machining process. The study incorporates two material constitutive models, namely the Power Law (PL) and Johnson-Cook (JC) models, to mimic the mechanical behaviour of Ti6Al4V. The study investigates cutting temperatures, cutting forces, stresses and plastic strains for different PL and JC material models with associated parameters. In addition, the numerical study integrates different cutting tool rake angles into the machining simulations. The simulated results will be beneficial for drawing conclusions to improve the overall machining performance of Ti6Al4V.
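For reference, the Johnson-Cook flow stress model mentioned above has the standard textbook form (A, B, C, n and m are fitted material constants; the paper may use a variant):

```latex
\sigma = \left(A + B\,\varepsilon^{\,n}\right)
         \left(1 + C \ln\frac{\dot{\varepsilon}}{\dot{\varepsilon}_{0}}\right)
         \left(1 - \left(\frac{T - T_{r}}{T_{m} - T_{r}}\right)^{m}\right)
```

where epsilon is the equivalent plastic strain, the logarithmic term normalizes the strain rate by a reference rate, and T_r and T_m are the room and melting temperatures.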
Playing biology's name game: identifying protein names in scientific text.
Hanisch, Daniel; Fluck, Juliane; Mevissen, Heinz-Theodor; Zimmer, Ralf
2003-01-01
A growing body of work is devoted to the extraction of protein or gene interaction information from the scientific literature. Yet, the basis for most extraction algorithms, i.e. the specific and sensitive recognition of protein and gene names and their numerous synonyms, has not been adequately addressed. Here we describe the construction of a comprehensive general purpose name dictionary and an accompanying automatic curation procedure based on a simple token model of protein names. We designed an efficient search algorithm to analyze all abstracts in MEDLINE in a reasonable amount of time on standard computers. The parameters of our method are optimized using machine learning techniques. Used in conjunction, these ingredients lead to good search performance. A supplementary web page is available at http://cartan.gmd.de/ProMiner/.
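The flavor of a token-model dictionary search can be conveyed with a short sketch (the real ProMiner system and its curated dictionary are far more elaborate; the entries and text here are hypothetical):

```python
# Minimal token-based dictionary matcher: dictionary keys are token
# tuples (name variants/synonyms), values are canonical identifiers.
dictionary = {
    ("p53",): "TP53",
    ("tumor", "protein", "p53"): "TP53",
    ("insulin", "receptor"): "INSR",
}
max_len = max(len(key) for key in dictionary)

def find_protein_names(text):
    tokens = [t.strip(".,;()") for t in text.lower().split()]
    hits, i = [], 0
    while i < len(tokens):
        # Greedily try the longest dictionary entry starting at token i.
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            key = tuple(tokens[i:i + n])
            if key in dictionary:
                hits.append((dictionary[key], " ".join(key)))
                i += n
                break
        else:
            i += 1
    return hits

print(find_protein_names("Binding of the insulin receptor modulates p53."))
```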
Information-theoretic approach to interactive learning
NASA Astrophysics Data System (ADS)
Still, S.
2009-01-01
The principles of statistical mechanics and information theory play an important role in learning and have inspired both theory and the design of numerous machine learning algorithms. The new aspect in this paper is a focus on integrating feedback from the learner. A quantitative approach to interactive learning and adaptive behavior is proposed, integrating model- and decision-making into one theoretical framework. This paper follows simple principles by requiring that the observer's world model and action policy should result in maximal predictive power at minimal complexity. Classes of optimal action policies and of optimal models are derived from an objective function that reflects this trade-off between prediction and complexity. The resulting optimal models then summarize, at different levels of abstraction, the process's causal organization in the presence of the learner's actions. A fundamental consequence of the proposed principle is that the learner's optimal action policies balance exploration and control as an emerging property. Interestingly, the explorative component is present in the absence of policy randomness, i.e. in the optimal deterministic behavior. This is a direct result of requiring maximal predictive power in the presence of feedback.
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-09-21
To exploit the distributed nature of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. This paper describes a parameter communication optimization strategy that balances the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors.
Fuzzy support vector machine: an efficient rule-based classification technique for microarrays.
Hajiloo, Mohsen; Rabiee, Hamid R; Anooshahpour, Mahdi
2013-01-01
The abundance of gene expression microarray data has led to the development of machine learning algorithms applicable to disease diagnosis, disease prognosis, and treatment selection problems. However, these algorithms often produce classifiers with weaknesses in terms of accuracy, robustness, and interpretability. This paper introduces the fuzzy support vector machine, a learning algorithm based on a combination of fuzzy classifiers and kernel machines, for microarray classification. Experimental results on public leukemia, prostate, and colon cancer datasets show that the fuzzy support vector machine, applied in combination with filter or wrapper feature selection methods, develops a robust model with higher accuracy than conventional microarray classification models such as the support vector machine, artificial neural network, decision trees, k nearest neighbors, and diagonal linear discriminant analysis. Furthermore, the interpretable rule base inferred from the fuzzy support vector machine helps extract biological knowledge from microarray data. The fuzzy support vector machine, as a new classification model with high generalization power, robustness, and good interpretability, seems to be a promising tool for gene expression microarray classification.
Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro
2018-05-09
Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.
NASA Astrophysics Data System (ADS)
Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad
2017-11-01
Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. One application of modern technology is CNC machining, and turning is one machining process that can be performed on a CNC machine. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to select machining parameters that minimize both processing time and environmental impact. This research developed a multi-objective optimization model to minimize processing time and environmental impact in the CNC turning process, yielding optimal values of the decision variables cutting speed and feed rate. Environmental impact is converted from environmental burden through the use of eco-indicator 99. The model was solved using the OptQuest optimization software from Oracle Crystal Ball.
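A minimal sketch of such a bi-objective formulation is given below, using the classic turning-time formula t = pi*D*L/(1000*v*f) and a purely hypothetical linear proxy for the eco-indicator 99 term, scalarized as a weighted sum; the paper itself uses OptQuest rather than this approach, and all coefficients are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

D, L = 50.0, 120.0   # workpiece diameter and length in mm (hypothetical)

def machining_time(x):
    v, f = x          # cutting speed (m/min), feed rate (mm/rev)
    return np.pi * D * L / (1000.0 * v * f)   # classic turning time, min

def eco_impact(x):
    v, f = x
    # Hypothetical eco-indicator-99 proxy: energy use grows with speed,
    # tool wear grows with feed; coefficients are illustrative only.
    return 0.02 * v + 5.0 * f

def objective(x, w=0.5):
    # Weighted-sum scalarization of the two objectives.
    return w * machining_time(x) + (1 - w) * eco_impact(x)

res = minimize(objective, x0=[100.0, 0.2],
               bounds=[(60.0, 250.0), (0.05, 0.5)])
print("optimal cutting speed and feed rate:", res.x)
```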
Corredor, Germán; Whitney, Jon; Arias, Viviana; Madabhushi, Anant; Romero, Eduardo
2017-01-01
Abstract. Computational histomorphometric approaches typically use low-level image features for building machine learning classifiers. However, these approaches usually ignore high-level expert knowledge. A computational model (M_im) combines low-, mid-, and high-level image information to predict the likelihood of cancer in whole slide images. Handcrafted low- and mid-level features are computed from area, color, and spatial nuclei distributions. High-level information is implicitly captured from the recorded navigations of pathologists while exploring whole slide images during diagnostic tasks. This model was validated by predicting the presence of cancer in a set of unseen fields of view. The available database was composed of 24 cases of basal-cell carcinoma, from which 17 served to estimate the model parameters and the remaining 7 comprised the evaluation set. A total of 274 fields of view of size 1024×1024 pixels were extracted from the evaluation set. Then 176 patches from this set were used to train a support vector machine classifier to predict the presence of cancer on a patch-by-patch basis while the remaining 98 image patches were used for independent testing, ensuring that the training and test sets do not comprise patches from the same patient. A baseline model (M_ex) estimated the cancer likelihood for each of the image patches. M_ex uses the same visual features as M_im, but its weights are estimated from nuclei manually labeled as cancerous or noncancerous by a pathologist. M_im achieved an accuracy of 74.49% and an F-measure of 80.31%, while M_ex yielded corresponding accuracy and F-measures of 73.47% and 77.97%, respectively. PMID:28382314
Alternative Models of Service, Centralized Machine Operations. Phase II Report. Volume II.
ERIC Educational Resources Information Center
Technology Management Corp., Alexandria, VA.
A study was conducted to determine if the centralization of playback machine operations for the national free library program would be feasible, economical, and desirable. An alternative model of playback machine services was constructed and compared with existing network operations considering both cost and service. The alternative model was…
Confabulation Based Sentence Completion for Machine Reading
2010-11-01
... making sentence completion an indispensable component of machine reading. Cogent confabulation is a bio-inspired computational model that mimics the ... [Duplicated snippet text and reference-list fragments omitted.]
Equivalent model of a dually-fed machine for electric drive control systems
NASA Astrophysics Data System (ADS)
Ostrovlyanchik, I. Yu; Popolzin, I. Yu
2018-05-01
The article shows that the mathematical model of a dually-fed machine is complicated by the presence of a controlled voltage source in the rotor circuit. To obtain a mathematical model, the method of a generalized two-phase electric machine is applied, and a rotating orthogonal coordinate system associated with the representative vector of the stator current is chosen. In the chosen coordinate system, the differential equations of electric equilibrium for the windings of the generalized machine (the Kirchhoff equations) are written in operator form, together with the expression for the torque, which determines the electromechanical energy conversion in the machine. The equations are transformed so that they connect the winding currents, which determine the torque of the machine, with the voltages on these windings. A structural diagram of the machine is derived from these equations. Based on the written equations and the accepted assumptions, expressions are obtained for balancing the EMF of the windings, and on the basis of these expressions an equivalent mathematical model of a dually-fed machine is proposed, convenient for use in electric drive control systems.
Job shop scheduling model for non-identical machines with fixed delivery times to minimize tardiness
NASA Astrophysics Data System (ADS)
Kusuma, K. K.; Maruf, A.
2016-02-01
Scheduling problems with non-identical machines, low-utilization characteristics and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is formulated as an integer linear program and solved with a branch-and-bound algorithm. Fixed delivery times are used as the main constraint, and jobs have different processing times. The results of the proposed model show that the utilization of production machines can be increased with minimal tardiness when fixed delivery times are used as a constraint.
Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things
NASA Astrophysics Data System (ADS)
Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik
2017-09-01
This paper proposes an association rule-based predictive model for machine failure in the industrial Internet of things (IIoT), which can accurately predict machine failure in a real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, and 3) visualization. The binarization step translates item values in a dataset into one or zero; the rule creation step then creates association rules as IF-THEN structures using the Lattice model and the Apriori algorithm. Finally, the created rules are visualized in various ways to aid users' understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
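Although the paper's implementation uses R, the binarization and rule-creation steps can be sketched in Python with the mlxtend library's Apriori implementation; the failure records, column names and thresholds below are hypothetical:

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical binarized machine-failure records (step 1, binarization):
# each row is an event, each column a binary condition or failure type.
df = pd.DataFrame(
    [[1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 0, 0], [1, 0, 1, 1], [0, 0, 1, 0]],
    columns=["high_vibration", "overheat", "spindle_load", "bearing_failure"],
).astype(bool)

# Step 2, rule creation: frequent itemsets, then IF-THEN association rules.
itemsets = apriori(df, min_support=0.4, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)

# Step 3, visualization (here: just print the rule table).
print(rules[["antecedents", "consequents", "support", "confidence"]])
```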
Numerical Simulation of Earth Pressure on Head Chamber of Shield Machine with FEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Shouju; Kang Chengang; Sun, Wei
2010-05-21
Model parameters of conditioned soils in the head chamber of a shield machine are determined based on triaxial compression tests in the laboratory. The loads acting on the tunneling face are estimated according to the static earth pressure principle. Based on the Duncan-Chang nonlinear elastic constitutive model, the earth pressures on the head chamber of the shield machine are simulated for different aperture ratios of the rotating cutterhead. A relationship between the pressure transportation factor and the aperture ratio of the shield machine is proposed using regression analysis.
On-Line Scheduling of Parallel Machines
1990-11-01
[Snippet fragments:] ... a machine without losing any work; this is referred to as the preemptive model, in contrast to the nonpreemptive model considered in this paper ... that there exists no schedule of length d. The 2-relaxed decision procedure is as follows: put each job into the queue of the slowest machine Mk such ... in their queues. If a machine's queue is empty, it takes jobs to process from the queue of the first machine that is slower than it and that has a ...
Bishop, Christopher M
2013-02-13
Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.
Allyn, Jérôme; Allou, Nicolas; Augustin, Pascal; Philip, Ivan; Martinet, Olivier; Belghiti, Myriem; Provenchere, Sophie; Montravers, Philippe; Ferdynus, Cyril
2017-01-01
The benefits of cardiac surgery are sometimes difficult to predict, and the decision to operate on a given individual is complex. Machine learning and Decision Curve Analysis (DCA) are recent methods developed to create and evaluate prediction models. We conducted a retrospective cohort study using a prospectively collected database (December 2005 to December 2012) from a cardiac surgical center at a University Hospital. Different models for predicting in-hospital mortality after elective cardiac surgery, including EuroSCORE II, a logistic regression model and a machine learning model, were compared by ROC analysis and DCA. Of the 6,520 patients having elective cardiac surgery with cardiopulmonary bypass, 6.3% died. Mean age was 63.4 years (standard deviation 14.4), and mean EuroSCORE II was 3.7% (4.8). The area under the ROC curve (95% CI) for the machine learning model (0.795 (0.755-0.834)) was significantly higher than for EuroSCORE II or the logistic regression model (0.737 (0.691-0.783) and 0.742 (0.698-0.785), respectively; p < 0.0001). Decision Curve Analysis showed that the machine learning model, in this monocentric study, has a greater net benefit at every probability threshold. According to ROC analysis and DCA, the machine learning model is more accurate in predicting mortality after elective cardiac surgery than EuroSCORE II. These results confirm the value of machine learning methods in the field of medical prediction.
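DCA rests on a simple quantity, the net benefit NB(pt) = TP/N - (FP/N) * pt/(1 - pt) at a threshold probability pt (the standard Vickers-Elkin formula). A minimal sketch computing it for two hypothetical risk models, not the study's data:

```python
import numpy as np

def net_benefit(y_true, y_prob, thresholds):
    """Decision-curve net benefit: NB(pt) = TP/N - (FP/N) * pt/(1-pt)."""
    y_true = np.asarray(y_true)
    n = len(y_true)
    out = []
    for pt in thresholds:
        pred = np.asarray(y_prob) >= pt   # treat if predicted risk >= pt
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        out.append(tp / n - fp / n * pt / (1 - pt))
    return np.array(out)

# Hypothetical predicted mortality risks from two models:
y = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
p_ml = np.array([.1, .2, .8, .1, .7, .2, .1, .9, .3, .2])
p_es2 = np.array([.2, .3, .5, .2, .4, .3, .2, .6, .4, .3])
ts = np.linspace(0.05, 0.5, 10)
print(net_benefit(y, p_ml, ts) - net_benefit(y, p_es2, ts))
```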
Risk estimation using probability machines.
Dasgupta, Abhijit; Szymczak, Silke; Moore, Jason H; Bailey-Wilson, Joan E; Malley, James D
2014-03-01
Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a "risk machine", will share properties from the statistical machine that it is derived from.
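The core idea of a probability machine, consistent conditional probability estimation plus counterfactual effect sizes, can be sketched with a random forest; the data below are synthetic and illustrative, not from the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Synthetic data: binary outcome driven by a binary exposure x0
# plus two continuous covariates, via a logistic mechanism.
n = 2000
X = np.column_stack([rng.integers(0, 2, n),
                     rng.normal(size=n), rng.normal(size=n)])
logit = -1.0 + 1.2 * X[:, 0] + 0.5 * X[:, 1]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Random forest as a probability machine (calibrated leaves via
# min_samples_leaf, a common choice for probability estimation).
rf = RandomForestClassifier(n_estimators=300, min_samples_leaf=20,
                            random_state=0).fit(X, y)

# Counterfactual (marginal) effect of the exposure: predict everyone
# as exposed, then as unexposed, and average the risk difference.
X1, X0 = X.copy(), X.copy()
X1[:, 0], X0[:, 0] = 1, 0
risk_diff = (rf.predict_proba(X1)[:, 1].mean()
             - rf.predict_proba(X0)[:, 1].mean())
print("estimated marginal risk difference:", round(risk_diff, 3))
```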
Predictive Modeling and Optimization of Vibration-assisted AFM Tip-based Nanomachining
NASA Astrophysics Data System (ADS)
Kong, Xiangcheng
The tip-based vibration-assisted nanomachining process offers a low-cost, low-effort technique for fabricating nanometer-scale 2D/3D structures in the sub-100 nm regime. To understand its mechanism and to provide guidelines for process planning and optimization, we systematically studied this nanomachining technique in this work. To understand the mechanism, we first analyzed the interaction between the AFM tip and the workpiece surface during the machining process. A 3D voxel-based numerical algorithm was developed to calculate the material removal rate as well as the contact area between the AFM tip and the workpiece surface. As a critical factor in understanding the mechanism of this nanomachining process, the cutting force was analyzed and modeled. A semi-empirical model was proposed by correlating the cutting force with the material removal rate, and was validated using experimental data from different machining conditions. With this understanding of the mechanism, we developed guidelines for process planning of this nanomachining technique. To guide parameter selection, the effect of machining parameters on the feature dimensions (depth and width) was analyzed. Based on ANOVA test results, the feature width is controlled only by the XY vibration amplitude, while the feature depth is affected by several machining parameters such as setpoint force and feed rate. A semi-empirical model was first proposed to predict the machined feature depth under a given machining condition. Then, to reduce the computational intensity, linear and nonlinear regression models were also proposed and validated using experimental data. Given the desired feature dimensions, feasible machining parameters can be provided by these predictive feature-dimension models. As tip wear is unavoidable during the machining process, the machining precision gradually decreases. To maintain machining quality, a guideline for when to change the tip should be provided. In this study, we developed several metrics to detect tip wear, such as the tip radius and the pull-off force. The effect of machining parameters on the tip wear rate was studied using these metrics, and the machining distance before a tip must be changed was modeled as a function of the machining parameters. Finally, optimization functions were built for unit production time and unit production cost subject to realistic constraints, and the optimal machining parameters can be found by solving these functions.
A Deep Neural Network Model for Rainfall Estimation Using Polarimetric WSR-88DP Radar Observations
NASA Astrophysics Data System (ADS)
Tan, H.; Chandra, C. V.; Chen, H.
2016-12-01
Rainfall estimation based on radar measurements has been an important topic for decades. Generally, radar rainfall estimation is conducted through parametric algorithms such as reflectivity-rainfall relations (i.e., Z-R relations). On the other hand, neural networks have been developed for ground rainfall estimation based on radar measurements. This nonparametric approach, which takes into account both radar observations and rainfall measurements from ground rain gauges, has been demonstrated successfully for rainfall rate estimation. However, neural network-based rainfall estimation has been limited in practice due to model complexity and structure, data quality, and differing rainfall microphysics. Recently, deep learning approaches have been introduced in pattern recognition and machine learning. Compared to traditional neural networks, deep learning methodologies have a larger number of hidden layers and more complex structures for data representation. Through a hierarchical learning process, high-level structured information and knowledge can be extracted automatically from low-level features of the data. In this paper, we introduce a novel deep neural network model for rainfall estimation based on ground polarimetric radar measurements. The model is designed to capture the complex abstractions of radar measurements at different levels using multiple layers of feature identification and extraction. The abstractions at different levels can be used independently or fused with other data sources, such as satellite-based rainfall products and/or topographic data, to represent the rain characteristics at a given location. In particular, WSR-88DP radar and rain gauge data collected in the Dallas-Fort Worth Metroplex and Florida are used extensively to train the model and for demonstration purposes. A quantitative evaluation of the deep neural network-based rainfall products, based on an independent rain gauge network, will also be presented.
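For contrast with the deep model, the parametric baseline mentioned above is typically a power law Z = a * R**b. A minimal sketch of inverting it from reflectivity in dBZ, using the classic Marshall-Palmer coefficients (which the study does not necessarily use):

```python
import numpy as np

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert the parametric Z-R power law Z = a * R**b.
    Z is linear reflectivity (mm^6/m^3) and dBZ = 10*log10(Z);
    a=200, b=1.6 are the classic Marshall-Palmer coefficients."""
    z = 10.0 ** (np.asarray(dbz) / 10.0)
    return (z / a) ** (1.0 / b)

print(rain_rate_from_dbz([20, 35, 50]))  # mm/h, light to heavy rain
```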
A Review on High-Speed Machining of Titanium Alloys
NASA Astrophysics Data System (ADS)
Rahman, Mustafizur; Wang, Zhi-Gang; Wong, Yoke-San
Titanium alloys have been widely used in the aerospace, biomedical and automotive industries because of their good strength-to-weight ratio and superior corrosion resistance. However, they are very difficult to machine due to their poor machinability. When machining titanium alloys with conventional tools, tool wear progresses rapidly, and it is generally difficult to achieve cutting speeds above 60 m/min. Other tool materials, including ceramic, diamond and cubic boron nitride (CBN), are highly reactive with titanium alloys at higher temperatures. However, binder-less CBN (BCBN) tools, which do not have any binder, sintering agent or catalyst, have a remarkably longer tool life than conventional CBN inserts even at high cutting speeds. To gain a deeper understanding of high-speed machining (HSM) of titanium alloys, mathematical models are essential; such models are also needed to predict the machining parameters for HSM. This paper gives an overview of recent developments in machining and HSM of titanium alloys, geometrical modeling of HSM, and cutting force models for HSM of titanium alloys.
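A typical starting point for the cutting-speed/tool-life models such a review surveys is the classical Taylor tool life equation, shown here for reference rather than taken from the paper:

```latex
V\,T^{\,n} = C
```

where V is the cutting speed, T the tool life, and n and C empirical constants fitted for a given tool-workpiece pair; the rapid wear at speeds above 60 m/min corresponds to a small exponent n for conventional tools on titanium.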
[Anesthesia simulators and training devices].
Hartmannsgruber, M; Good, M; Carovano, R; Lampotang, S; Gravenstein, J S
1993-07-01
Simulators and training devices are used extensively by educators in 'high-tech' occupations, especially those requiring an understanding of complex systems and co-ordinated psychomotor skills. Because of advances in computer technology, anaesthetised patients can now be realistically simulated. This paper describes several training devices and a simulator currently being employed in the training of anaesthesia personnel at the University of Florida. This Gainesville Anesthesia Simulator (GAS) comprises a patient mannequin, anaesthesia gas machine, and a full set of normally operating monitoring instruments. The patient can spontaneously breathe, has audible heart and breath sounds, and palpable pulses. The mannequin contains a sophisticated lung model that consumes and eliminates gas according to physiological principles. Interconnected computers controlling the physical signs of the mannequin enable the presentation of a multitude of clinical signs. In addition, the anaesthesia machine, which is functionally intact, has hidden fault activators to challenge the user to correct equipment malfunctions. Concealed sensors monitor the users' actions and responses. A robust data acquisition and control system and a user-friendly scripting language for programming simulation scenarios are key features of GAS and make this system applicable for the training of both the beginning resident and the experienced practitioner. GAS enhances clinical education in anaesthesia by providing a non-threatening environment that fosters learning by doing. Exercises with the simulator are supported by sessions on a number of training devices. These present theoretical and practical interactive courses on the anaesthesia machine and on monitors. An extensive system, for example, introduces the student to the physics and clinical application of transoesophageal echocardiography.
Cloud Fingerprinting: Using Clock Skews To Determine Co Location Of Virtual Machines
2016-09-01
Cloud computing has quickly revolutionized the computing practices of organizations, to include the Department of Defense. However, security concerns ... [Report documentation page and table-of-contents residue omitted.]
Speech Processing and Recognition (SPaRe)
2011-01-01
... results in the areas of automatic speech recognition (ASR), speech processing, machine translation (MT), natural language processing (NLP), and information retrieval (IR). ... the IOC was only expected to provide document submission and search, and automatic speech recognition (ASR) for English, Spanish, Arabic, and ... [Report documentation page residue omitted.]
Fighting Through a Logistics Cyber Attack
2015-06-19
[Timeline-figure residue omitted: chariot, gunpowder, machine gun, tanks, aircraft, radar, nuclear weapons, satellites, GPS, cyber weapon.] ... primarily remained in the scientific and academic communities for the next 22 years (Griffiths, 2002). The Internet as we recognize it today ... Griffiths (2002) defines the Web as an abstract space of information containing hyperlinked documents and other resources, identified by their Uniform ...
Literature classification for semi-automated updating of biological knowledgebases
2013-01-01
Background As the output of biological assays increase in resolution and volume, the body of specialized biological data, such as functional annotations of gene and protein sequences, enables extraction of higher-level knowledge needed for practical application in bioinformatics. Whereas common types of biological data, such as sequence data, are extensively stored in biological databases, functional annotations, such as immunological epitopes, are found primarily in semi-structured formats or free text embedded in primary scientific literature. Results We defined and applied a machine learning approach for literature classification to support updating of TANTIGEN, a knowledgebase of tumor T-cell antigens. Abstracts from PubMed were downloaded and classified as either "relevant" or "irrelevant" for database update. Training and five-fold cross-validation of a k-NN classifier on 310 abstracts yielded classification accuracy of 0.95, thus showing significant value in support of data extraction from the literature. Conclusion We here propose a conceptual framework for semi-automated extraction of epitope data embedded in scientific literature using principles from text mining and machine learning. The addition of such data will aid in the transition of biological databases to knowledgebases. PMID:24564403
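The classification step described above can be sketched in a few lines with scikit-learn; the abstracts and labels below are hypothetical stand-ins for the TANTIGEN training set, and the replication factor only exists to give 5-fold cross-validation enough samples:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical labeled PubMed abstracts:
# 1 = relevant for database update, 0 = irrelevant.
abstracts = [
    "We map a novel HLA-A2 restricted T-cell epitope from a tumor antigen.",
    "Review of surgical outcomes in hip replacement.",
    "CTL epitopes of NY-ESO-1 recognized in melanoma patients.",
    "Hospital staffing levels and patient satisfaction survey.",
] * 10
labels = [1, 0, 1, 0] * 10

# TF-IDF features + k-NN classifier, evaluated by five-fold CV,
# mirroring the evaluation protocol described in the abstract.
pipe = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=3))
scores = cross_val_score(pipe, abstracts, labels, cv=5)
print("5-fold accuracy:", scores.mean())
```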
Automated delineation of radiotherapy volumes: are we going in the right direction?
Whitfield, G A; Price, P; Price, G J; Moore, C J
2013-01-01
ABSTRACT. Rapid and accurate delineation of target volumes and multiple organs at risk, within the enduring International Commission on Radiation Units and Measurement framework, is now hugely important in radiotherapy, owing to the rapid proliferation of intensity-modulated radiotherapy and the advent of four-dimensional image-guided adaption. Nevertheless, delineation is still generally clinically performed with little if any machine assistance, even though it is both time-consuming and prone to interobserver variation. Currently available segmentation tools include those based on image greyscale interrogation, statistical shape modelling and body atlas-based methods. However, all too often these are not able to match the accuracy of the expert clinician, which remains the universally acknowledged gold standard. In this article we suggest that current methods are fundamentally limited by their lack of ability to incorporate essential human clinical decision-making into the underlying models. Hybrid techniques that utilise prior knowledge, make sophisticated use of greyscale information and allow clinical expertise to be integrated are needed. This may require a change in focus from automated segmentation to machine-assisted delineation. Similarly, new metrics of image quality reflecting fitness for purpose would be extremely valuable. We conclude that methods need to be developed to take account of the clinician's expertise and honed visual processing capabilities as much as the underlying, clinically meaningful information content of the image data being interrogated. We illustrate our observations and suggestions through our own experiences with two software tools developed as part of research council-funded projects. PMID:23239689
Wang, Zhi-Long; Zhou, Zhi-Guo; Chen, Ying; Li, Xiao-Ting; Sun, Ying-Shi
The aim of this study was to diagnose lymph node metastasis of esophageal cancer with a support vector machine model based on computed tomography. A total of 131 esophageal cancer patients with preoperative chemotherapy and radical surgery were included. Various indicators (tumor thickness, tumor length, tumor CT value, total number of lymph nodes, and long-axis and short-axis sizes of the largest lymph node) on CT images before and after neoadjuvant chemotherapy were recorded. A support vector machine model based on these CT indicators was built to predict lymph node metastasis. The support vector machine model diagnosed lymph node metastasis better than the preoperative short-axis size of the largest lymph node on CT; the areas under the receiver operating characteristic curves were 0.887 and 0.705, respectively. The support vector machine model of CT images can help diagnose lymph node metastasis in esophageal cancer with preoperative chemotherapy.
Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer
2017-04-01
Recently, Artificial Intelligence (AI) has been used widely in the medicine and health care sector. Within machine learning, classification and prediction constitute a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most prominent machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of the related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.
Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool
NASA Astrophysics Data System (ADS)
Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo
2017-05-01
Aiming at the problems of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two-turntable five-axis machine tool are researched. Measurement experiments on heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced for the selection of the temperature variables used in thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; a new ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used to predict spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experimental system is developed, and the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measured spindle thermal errors. Experimental results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with residual errors smaller than 3 μm, so the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.
Modelling of internal architecture of kinesin nanomotor as a machine language.
Khataee, H R; Ibrahim, M Y
2012-09-01
Kinesin is a protein-based natural nanomotor that transports molecular cargoes within cells by walking along microtubules. The kinesin nanomotor is considered a bio-nanoagent that senses the cell through its sensors (its heads and tail), makes decisions internally and performs actions on the cell through its actuator (its motor domain). This study maps the agent-based architectural model of the internal decision-making process of the kinesin nanomotor to a machine language using an automata algorithm. The applied automata algorithm receives the internal agent-based architectural model of the kinesin nanomotor as a deterministic finite automaton (DFA) and generates a regular machine language. The generated regular machine language is accepted by the architectural DFA model of the nanomotor and is in good agreement with its natural behaviour. The internal agent-based architectural model of the kinesin nanomotor indicates the degree of autonomy and intelligence of the nanomotor's interactions with its cell. Thus, the developed regular machine language can model, as a language, the degree of autonomy and intelligence of the kinesin nanomotor's interactions with its cell. Modelling the internal architectures of autonomous and intelligent bio-nanosystems as machine languages can lay the foundation for the concept of bio-nanoswarms and the next phases of bio-nanorobotic systems development.
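A DFA of the kind used here is easy to make concrete. The sketch below implements a generic DFA acceptor with a toy two-state machine standing in for the kinesin architectural model; the states and input symbols are hypothetical, not taken from the paper:

```python
# Minimal deterministic finite automaton (DFA) acceptor. The toy machine
# is a stand-in for the kinesin architectural model: hypothetically,
# 'a' = ATP binding triggers a step, 'd' = trailing head detachment
# completes the step and returns the motor to the bound state.
dfa = {
    "start": "bound",
    "accept": {"bound"},
    "delta": {
        ("bound", "a"): "stepping",
        ("stepping", "d"): "bound",
    },
}

def accepts(dfa, word):
    """Run the DFA; a word is in the regular language iff it ends in an
    accepting state without hitting an undefined transition."""
    state = dfa["start"]
    for symbol in word:
        state = dfa["delta"].get((state, symbol))
        if state is None:
            return False
    return state in dfa["accept"]

print(accepts(dfa, "adad"))  # True: two complete mechanochemical cycles
print(accepts(dfa, "aa"))    # False: undefined transition
```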
Exploiting the Dynamics of Soft Materials for Machine Learning
Hauser, Helmut; Li, Tao; Pfeifer, Rolf
2018-01-01
Abstract Soft materials are increasingly utilized for various purposes in many engineering applications. These materials have been shown to perform a number of functions that were previously difficult to implement using rigid materials. Here, we argue that the diverse dynamics generated by actuating soft materials can be effectively used for machine learning purposes. This is demonstrated using a soft silicone arm through a technique of multiplexing, which enables the rich transient dynamics of the soft materials to be fully exploited as a computational resource. The computational performance of the soft silicone arm is examined through two standard benchmark tasks. Results show that the soft arm compares well to or even outperforms conventional machine learning techniques under multiple conditions. We then demonstrate that this system can be used for the sensory time series prediction problem for the soft arm itself, which suggests its immediate applicability to a real-world machine learning problem. Our approach, on the one hand, represents a radical departure from traditional computational methods, whereas on the other hand, it fits nicely into a more general perspective of computation by way of exploiting the properties of physical materials in the real world. PMID:29708857
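The readout-only training that underlies such physical reservoir computing can be sketched in a few lines; in the sketch below a simulated nonlinear dynamical system stands in for the soft arm, and every parameter is an illustrative assumption rather than the authors' experimental setup.

```python
import numpy as np

# Minimal "physical reservoir" sketch: only a linear readout is trained,
# while the (here simulated) body dynamics do the nonlinear work.
rng = np.random.default_rng(1)
n_nodes, T = 50, 2000
W_in = rng.normal(scale=0.5, size=n_nodes)
W = rng.normal(scale=1.0 / np.sqrt(n_nodes), size=(n_nodes, n_nodes))

u = rng.uniform(-1, 1, size=T)                 # input stream
x = np.zeros(n_nodes)
states = np.zeros((T, n_nodes))
for t in range(T):
    x = 0.7 * x + 0.3 * np.tanh(W @ x + W_in * u[t])   # leaky dynamics
    states[t] = x

# Benchmark-style target: a delayed nonlinear function of the input.
y = np.roll(u, 2) ** 3
# Train the readout by ridge regression on the first half, test on the rest.
split = T // 2
A = states[:split]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_nodes), A.T @ y[:split])
pred = states[split:] @ w_out
print("test MSE:", np.mean((pred - y[split:]) ** 2))
```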
Wood, Lisa A
2016-06-01
Attending to the material-discursive constructions of the patient body within cone beam computed tomography (CBCT) imaging in radiotherapy treatments, in this paper I describe how bodies and machines co-create images. Using an analytical framework inspired by Science and Technology Studies and Feminist Technoscience, I describe the interplay between machines and bodies and the implications of materialities and agency. I argue that patients' bodies play a part in producing scans within the acceptable limits of machines, as set out through organisational arrangements. In doing so I argue that bodies are fabricated into the order of work prescribed and embedded within and around the CBCT system, becoming not only the subject of the resulting images but part of those images. The scan is therefore not a representation of a passive subject (a body) but is co-produced by the work of practitioners and patients, who actively control (and contort) and discipline their bodies according to protocols, instructions and the CBCT system. In this way I suggest they are 'con-forming' the CBCT image. A Virtual Abstract of this paper can be found at: https://youtu.be/qysCcBGuNSM. © 2015 Foundation for the Sociology of Health & Illness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.
The absence of a robust and unified theory of cyber dynamics presents challenges and opportunities for using machine learning based data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. In order to be operationally beneficial, cybersecurity machine learning based models need to have the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and probabilistic graphical models provide these necessary properties and are further explored in this chapter. Bayesian Networks and Hidden Markov Models are introduced as examples of a widely used data-driven classification/modeling strategy.
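As a minimal illustration of one of the probabilistic graphical models named above, the following sketch evaluates the likelihood of an observation sequence under a two-state hidden Markov model via the forward algorithm; the states, symbols and probabilities are invented, not taken from the chapter.

```python
import numpy as np

# Toy HMM: latent "benign"/"compromised" states emitting coarse alert levels.
A = np.array([[0.95, 0.05],          # state transition matrix
              [0.10, 0.90]])
B = np.array([[0.80, 0.15, 0.05],    # emission probs: low/medium/high alerts
              [0.20, 0.30, 0.50]])
pi = np.array([0.99, 0.01])          # initial state distribution

def forward_likelihood(obs):
    """P(observations) under the HMM, by the forward recursion."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

print(forward_likelihood([0, 0, 2, 2, 1]))   # likelihood of an alert trace
```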
Predictive Surface Roughness Model for End Milling of Machinable Glass Ceramic
NASA Astrophysics Data System (ADS)
Mohan Reddy, M.; Gorin, Alexander; Abou-El-Hossein, K. A.
2011-02-01
Machinable glass ceramic is an attractive advanced ceramic for producing high-accuracy miniaturized components for many applications in industries such as aerospace, electronics, biomedical, automotive and environmental communications, owing to its wear resistance, high hardness, high compressive strength, good corrosion resistance and excellent high-temperature properties. Many research works have been conducted in the last few years to investigate the performance of different machining operations when processing various advanced ceramics. Micro end-milling is one of the machining methods able to meet the demand for micro parts. Selecting proper machining parameters is important to obtain a good surface finish when machining machinable glass ceramic. Therefore, this paper describes the development of a predictive model for the surface roughness of machinable glass ceramic in terms of speed and feed rate in micro end-milling operations.
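A predictive surface-roughness model in terms of speed and feed rate is often a simple response surface; the sketch below fits a power-law form Ra = C·v^a·f^b by least squares in log space, with the toy data and the functional form as illustrative assumptions rather than the paper's fitted model.

```python
import numpy as np

# Toy measurements: spindle speed v (rpm), feed rate f (mm/min), roughness Ra (um).
v = np.array([10000, 10000, 20000, 20000, 30000, 30000], dtype=float)
f = np.array([1.0,   2.0,   1.0,   2.0,   1.0,   2.0])
Ra = np.array([0.42, 0.61, 0.30, 0.44, 0.25, 0.37])

# Power-law response surface Ra = C * v**a * f**b is linear in log space:
# log Ra = log C + a*log v + b*log f.
X = np.column_stack([np.ones_like(v), np.log(v), np.log(f)])
coef, *_ = np.linalg.lstsq(X, np.log(Ra), rcond=None)
logC, a, b = coef
print(f"Ra ≈ {np.exp(logC):.3g} * v^{a:.2f} * f^{b:.2f}")

# Predict roughness for an untested condition.
v_new, f_new = 25000.0, 1.5
print("predicted Ra:", np.exp(logC) * v_new**a * f_new**b)
```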
Boosting compound-protein interaction prediction by deep learning.
Tian, Kai; Shao, Mingyu; Wang, Yang; Guan, Jihong; Zhou, Shuigeng
2016-11-01
The identification of interactions between compounds and proteins plays an important role in network pharmacology and drug discovery. However, experimentally identifying compound-protein interactions (CPIs) is generally expensive and time-consuming, so computational approaches have been introduced. Among these, machine-learning based methods have achieved considerable success. However, due to the nonlinear and imbalanced nature of biological data, many machine learning approaches have their own limitations. Recently, deep learning techniques have shown advantages over many state-of-the-art machine learning methods in some applications. In this study, we aim at improving the performance of CPI prediction based on deep learning, and propose a method called DL-CPI (the abbreviation of Deep Learning for Compound-Protein Interactions prediction), which employs a deep neural network (DNN) to effectively learn the representations of compound-protein pairs. Extensive experiments show that DL-CPI can learn useful features of compound-protein pairs by layerwise abstraction, and thus achieves better prediction performance than existing methods on both balanced and imbalanced datasets. Copyright © 2016 Elsevier Inc. All rights reserved.
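A minimal sketch of the general idea, a DNN over concatenated compound and protein feature vectors, follows; the random data, feature sizes and network shape are illustrative assumptions, not the DL-CPI architecture itself.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Toy setup: a compound is a 64-bit fingerprint, a protein a 32-dim descriptor;
# a pair is their concatenation. All data are random stand-ins.
rng = np.random.default_rng(7)
n = 1000
compounds = rng.integers(0, 2, size=(n, 64)).astype(float)
proteins = rng.normal(size=(n, 32))
X = np.hstack([compounds, proteins])
# Synthetic labels with some signal so the example is non-trivial.
y = ((compounds[:, 0] + proteins[:, 0]) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
dnn = MLPClassifier(hidden_layer_sizes=(128, 64, 32), max_iter=500,
                    random_state=0)
dnn.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, dnn.predict_proba(X_te)[:, 1]))
```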
Nofre, David; Priestley, Mark; Alberts, Gerard
2014-01-01
Language is one of the central metaphors around which the discipline of computer science has been built. The language metaphor entered modern computing as part of a cybernetic discourse, but during the second half of the 1950s acquired a more abstract meaning, closely related to the formal languages of logic and linguistics. The article argues that this transformation was related to the appearance of the commercial computer in the mid-1950s. Managers of computing installations and specialists on computer programming in academic computer centers, confronted with an increasing variety of machines, called for the creation of "common" or "universal languages" to enable the migration of computer code from machine to machine. Finally, the article shows how the idea of a universal language was a decisive step in the emergence of programming languages, in the recognition of computer programming as a proper field of knowledge, and eventually in the way we think of the computer.
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-01-01
In order to utilize the distributed characteristics of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and avoids the situation in which model training is disturbed by tasks unrelated to the sensors. PMID:28934163
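The fixed-bound core of such synchronization strategies can be simulated compactly; the sketch below enforces a bounded clock gap between heterogeneous workers, with the worker speeds and bound as illustrative assumptions (DSP itself adjusts the strategy dynamically, which this sketch does not attempt).

```python
import numpy as np

# Bounded-staleness synchronization sketch: workers push updates to a
# parameter server (PS), but no worker may run more than `staleness_bound`
# iterations ahead of the slowest one.
rng = np.random.default_rng(3)
n_workers, staleness_bound, n_updates = 4, 2, 40
clock = np.zeros(n_workers, dtype=int)          # per-worker iteration counters
speed = rng.uniform(0.5, 2.0, size=n_workers)   # heterogeneous sensor speeds

for step in range(n_updates):
    # The fastest-available worker produces the next update...
    w = int(np.argmax(speed * rng.random(n_workers)))
    # ...unless it is already at the staleness bound; then the slowest goes.
    if clock[w] - clock.min() >= staleness_bound:
        w = int(np.argmin(clock))
    clock[w] += 1                               # worker pushes its update to the PS

print("per-worker update counts:", clock)
print("max clock gap:", clock.max() - clock.min())  # never exceeds the bound
```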
DeepSynergy: predicting anti-cancer drug synergy with Deep Learning
Preuer, Kristina; Lewis, Richard P I; Hochreiter, Sepp; Bender, Andreas; Bulusu, Krishna C; Klambauer, Günter
2018-01-01
Abstract: Motivation: While drug combination therapies are a well-established concept in cancer treatment, identifying novel synergistic combinations is challenging due to the size of the combinatorial space. However, computational approaches have emerged as a time- and cost-efficient way to prioritize combinations to test, based on recently available large-scale combination screening data. Recently, Deep Learning has had an impact in many research areas by achieving new state-of-the-art model performance. However, Deep Learning has not yet been applied to drug synergy prediction, which is the approach we present here, termed DeepSynergy. DeepSynergy uses chemical and genomic information as input, a normalization strategy to account for input data heterogeneity, and conical layers to model drug synergies. Results: DeepSynergy was compared to other machine learning methods such as Gradient Boosting Machines, Random Forests, Support Vector Machines and Elastic Nets on the largest publicly available synergy dataset with respect to mean squared error. DeepSynergy significantly outperformed the other methods, with an improvement of 7.2% over the second-best method at the prediction of novel drug combinations within the space of explored drugs and cell lines. At this task, the mean Pearson correlation coefficient between the measured and the predicted values of DeepSynergy was 0.73. Applying DeepSynergy to the classification of these novel drug combinations resulted in a high predictive performance, with an AUC of 0.90. Furthermore, we found that all compared methods exhibit low predictive performance when extrapolating to unexplored drugs or cell lines, which we suggest is due to limitations in the size and diversity of the dataset. We envision that DeepSynergy could be a valuable tool for selecting novel synergistic drug combinations. Availability and implementation: DeepSynergy is available via www.bioinf.jku.at/software/DeepSynergy. Contact: klambauer@bioinf.jku.at. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29253077
Allyn, Jérôme; Allou, Nicolas; Augustin, Pascal; Philip, Ivan; Martinet, Olivier; Belghiti, Myriem; Provenchere, Sophie; Montravers, Philippe; Ferdynus, Cyril
2017-01-01
Background: The benefits of cardiac surgery are sometimes difficult to predict and the decision to operate on a given individual is complex. Machine learning and Decision Curve Analysis (DCA) are recent methods developed to create and evaluate prediction models. Methods and findings: We conducted a retrospective cohort study using a prospectively collected database covering December 2005 to December 2012, from the cardiac surgical center of a university hospital. Models for predicting in-hospital mortality after elective cardiac surgery, including EuroSCORE II, a logistic regression model and a machine learning model, were compared by ROC analysis and DCA. Of the 6,520 patients undergoing elective cardiac surgery with cardiopulmonary bypass, 6.3% died. Mean age was 63.4 years (standard deviation 14.4), and mean EuroSCORE II was 3.7 (4.8)%. The area under the ROC curve (95% CI) for the machine learning model (0.795 (0.755–0.834)) was significantly higher than for EuroSCORE II or the logistic regression model (respectively, 0.737 (0.691–0.783) and 0.742 (0.698–0.785), p < 0.0001). Decision Curve Analysis showed that the machine learning model, in this monocentric study, has a greater net benefit whatever the probability threshold. Conclusions: According to ROC analysis and DCA, the machine learning model is more accurate in predicting mortality after elective cardiac surgery than EuroSCORE II. These results support the use of machine learning methods in the field of medical prediction. PMID:28060903
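Decision curve analysis reduces to the net benefit formula NB = TP/n − (FP/n)·pt/(1−pt) evaluated over threshold probabilities pt; the sketch below applies it to simulated predictions (the data and model are stand-ins, with only the 6.3% event rate borrowed from the study).

```python
import numpy as np

# Net benefit of a prediction model across threshold probabilities,
# versus the treat-all strategy. All data are simulated stand-ins.
rng = np.random.default_rng(5)
n = 5000
y = rng.random(n) < 0.063                 # ~6.3% event rate, as in the study
p = np.clip(0.063 + 0.15 * (y - 0.063) + 0.05 * rng.normal(size=n),
            0.001, 0.999)                 # crude "model" probabilities

def net_benefit(y, p, pt):
    treat = p >= pt
    tp = np.sum(treat & y) / len(y)
    fp = np.sum(treat & ~y) / len(y)
    return tp - fp * pt / (1 - pt)

for pt in [0.02, 0.05, 0.10, 0.20]:
    nb_model = net_benefit(y, p, pt)
    nb_all = net_benefit(y, np.ones(n), pt)   # treat everyone
    print(f"pt={pt:.2f}  model NB={nb_model:.4f}  treat-all NB={nb_all:.4f}")
```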
NASA Astrophysics Data System (ADS)
Pathak, Maharshi
City administrators and real-estate developers have been setting rather aggressive energy efficiency targets. This, in turn, has led building science research groups across the globe to focus on urban-scale building performance studies and on the level of abstraction associated with such simulations. The increasing maturity of stakeholders regarding energy efficiency and comfortable working environments has led researchers to develop methodologies and tools for addressing policy-driven interventions, whether urban-level energy systems, buildings' operational optimization or retrofit guidelines. Typically, these large-scale simulations are carried out by grouping buildings based on their design similarities, i.e. standardization of the buildings. Such an approach does not necessarily yield working inputs that make decision-making effective. To address this, a novel approach is proposed in the present study. The principal objective of this study is to propose, define and evaluate a methodology that uses machine learning algorithms to define representative building archetypes for Stock-level Building Energy Modeling (SBEM) based on an operational parameter database. The study uses Phoenix-climate CBECS-2012 survey microdata for analysis and validation. Using the database, parameter correlations are studied to understand the relation between input parameters and energy performance. Contrary to precedent, the study establishes that energy performance is better explained by non-linear models, and this non-linear behavior is captured by advanced learning algorithms. Based on these algorithms, the buildings under study are grouped into meaningful clusters. The cluster medoids (statistically, the building that can be taken as the centre of its cluster) are established to identify the level of abstraction that is acceptable for whole-building energy simulation and, subsequently, for retrofit decision-making. Further, the methodology is validated by conducting Monte Carlo simulations on 13 key input simulation parameters. The sensitivity analysis of these 13 parameters is used to identify optimum retrofits. From the sample analysis, the envelope parameters are found to be the most sensitive with respect to the EUI of the building, and thus retrofit packages should be directed to maximize the reduction in energy usage.
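A k-medoids clustering of the kind described can be sketched with a basic alternating update; in the example below the data, number of clusters and features are illustrative assumptions, not the CBECS-based analysis.

```python
import numpy as np

# Minimal k-medoids (alternating assignment / medoid update) on toy
# building-operation feature vectors.
rng = np.random.default_rng(11)
X = np.vstack([rng.normal(loc, 0.5, size=(40, 3))       # 3 synthetic clusters
               for loc in ([0, 0, 0], [3, 3, 0], [0, 3, 3])])
k = 3
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances

medoids = rng.choice(len(X), size=k, replace=False)
for _ in range(20):
    labels = np.argmin(D[:, medoids], axis=1)            # nearest medoid
    new_medoids = medoids.copy()
    for j in range(k):
        members = np.where(labels == j)[0]
        # New medoid: the member minimizing total distance within its cluster.
        within = D[np.ix_(members, members)].sum(axis=1)
        new_medoids[j] = members[np.argmin(within)]
    if np.array_equal(new_medoids, medoids):
        break
    medoids = new_medoids

print("medoid indices (archetype buildings):", medoids)
print("cluster sizes:", np.bincount(labels, minlength=k))
```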
New numerical approach for the modelling of machining applied to aeronautical structural parts
NASA Astrophysics Data System (ADS)
Rambaud, Pierrick; Mocellin, Katia
2018-05-01
The manufacturing of aluminium alloy structural aerospace parts involves several steps: forming (rolling, forging, etc.), heat treatments and machining. Before machining, the preceding manufacturing processes have embedded residual stresses into the workpiece. The final geometry is obtained during this last step, when up to 90% of the raw material volume is removed by machining. During this operation, the mechanical equilibrium of the part is in constant evolution due to the redistribution of the initial stresses. This redistribution is the main cause of workpiece deflections during machining and of distortions after unclamping. Both may lead to non-conformity of the part with respect to its geometrical and dimensional specifications and therefore to rejection of the part or to additional conforming steps. In order to improve machining accuracy and the robustness of the process, the effect of the residual stresses has to be considered in the definition of the machining process plan and even in the geometrical definition of the part. In this paper, the authors present two new numerical approaches to the modelling of the machining of aeronautical structural parts. The first deals with the use of an immersed volume framework to model the cutting step, improving the robustness and the quality of the resulting mesh compared to the previous version. The second concerns the mechanical modelling of the machining problem. The authors show that, in the framework of rolled aluminium parts, the use of a linear elasticity model is functional in the finite element formulation and promising with regard to the reduction of computation times.
NASA Astrophysics Data System (ADS)
Hong, Haibo; Yin, Yuehong; Chen, Xing
2016-11-01
Despite the rapid development of computer science and information technology, an efficient human-machine integrated enterprise information system for designing complex mechatronic products has still not been fully achieved, partly because of inharmonious communication among collaborators. Therefore, one challenge in human-machine integration is how to establish an appropriate knowledge management (KM) model to support the integration and sharing of heterogeneous product knowledge. Addressing the diversity of design knowledge, this article proposes an ontology-based model to reach an unambiguous and normative representation of knowledge. First, an ontology-based human-machine integrated design framework is described, and corresponding ontologies and sub-ontologies are established according to different purposes and scopes. Second, a similarity-calculation-based ontology integration method composed of ontology mapping and ontology merging is introduced. An ontology-searching-based knowledge sharing method is then developed. Finally, a case of human-machine integrated design of a large ultra-precision grinding machine is used to demonstrate the effectiveness of the method.
Machine-checked proofs of the design and implementation of a fault-tolerant circuit
NASA Technical Reports Server (NTRS)
Bevier, William R.; Young, William D.
1990-01-01
A formally verified implementation of the 'oral messages' algorithm of Pease, Shostak, and Lamport is described. An abstract implementation of the algorithm is verified to achieve interactive consistency in the presence of faults. This abstract characterization is then mapped down to a hardware-level implementation which inherits the fault-tolerant characteristics of the abstract version. All steps in the proof were checked with the Boyer-Moore theorem prover. A significant result is the demonstration of a fault-tolerant device that is formally specified and whose implementation is proved correct with respect to this specification. A significant simplifying assumption is that the redundant processors behave synchronously. A mechanically checked proof that the oral messages algorithm is 'optimal', in the sense that no algorithm which achieves agreement via similar message passing can tolerate a larger proportion of faulty processors, is also described.
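The oral messages algorithm OM(m) itself is short enough to simulate; the sketch below implements the standard recursive majority-voting form for four processors and one traitor, with the traitor's behaviour as an illustrative assumption (the abstract's verified hardware mapping is, of course, far beyond this).

```python
from collections import Counter

# 4 processors (commander 0 plus lieutenants 1-3) tolerate 1 traitor (n > 3m).
M = 1
traitors = {2}

def send(src, dst, value):
    # A traitorous sender garbles every value it relays; loyal ones are honest.
    return 1 - value if src in traitors else value

def om(m, commander, lieutenants, value):
    """OM(m): return the value each lieutenant decides on, as a dict."""
    received = {p: send(commander, p, value) for p in lieutenants}
    if m == 0:
        return received
    # Each lieutenant re-broadcasts its received value via OM(m-1).
    sub = {j: om(m - 1, j, [q for q in lieutenants if q != j], received[j])
           for j in lieutenants}
    decided = {}
    for p in lieutenants:
        votes = [received[p]] + [sub[j][p] for j in lieutenants if j != p]
        decided[p] = Counter(votes).most_common(1)[0][0]
    return decided

# Loyal commander 0 sends 1; loyal lieutenants 1 and 3 must agree on 1.
print(om(M, 0, [1, 2, 3], value=1))
```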
Neural networks with fuzzy Petri nets for modeling a machining process
NASA Astrophysics Data System (ADS)
Hanna, Moheb M.
1998-03-01
The paper presents an intelligent architecture based on a feedforward neural network with fuzzy Petri nets for modeling product quality in a CNC machining center. It discusses how the proposed architecture can be used for modeling, monitoring and controlling a product quality specification such as surface roughness. The surface roughness represents the output quality specification of parts manufactured by a CNC machining center as a result of a milling process. The neural network approach employs selected input parameters defined by the machine operator via the CNC code. The fuzzy Petri nets approach utilizes the exact input milling parameters, such as spindle speed, feed rate, tool diameter and coolant (off/on), which can be obtained via the machine or a sensor system. The aim of the proposed architecture is to model the demanded quality of surface roughness as high, medium or low.
Spatially Compact Neural Clusters in the Dorsal Striatum Encode Locomotion Relevant Information.
Barbera, Giovanni; Liang, Bo; Zhang, Lifeng; Gerfen, Charles R; Culurciello, Eugenio; Chen, Rong; Li, Yun; Lin, Da-Ting
2016-10-05
An influential striatal model postulates that neural activities in the striatal direct and indirect pathways promote and inhibit movement, respectively. Normal behavior requires coordinated activity in the direct pathway to facilitate intended locomotion and indirect pathway to inhibit unwanted locomotion. In this striatal model, neuronal population activity is assumed to encode locomotion relevant information. Here, we propose a novel encoding mechanism for the dorsal striatum. We identified spatially compact neural clusters in both the direct and indirect pathways. Detailed characterization revealed similar cluster organization between the direct and indirect pathways, and cluster activities from both pathways were correlated with mouse locomotion velocities. Using machine-learning algorithms, cluster activities could be used to decode locomotion relevant behavioral states and locomotion velocity. We propose that neural clusters in the dorsal striatum encode locomotion relevant information and that coordinated activities of direct and indirect pathway neural clusters are required for normal striatal controlled behavior. VIDEO ABSTRACT. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)
1998-01-01
This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).
Modeling Patterns of Activities using Activity Curves
Dawadi, Prafulla N.; Cook, Diane J.; Schmitter-Edgecombe, Maureen
2016-01-01
Pervasive computing offers an unprecedented opportunity to unobtrusively monitor behavior and use the large amount of collected data to perform analysis of activity-based behavioral patterns. In this paper, we introduce the notion of an activity curve, which represents an abstraction of an individual’s normal daily routine based on automatically-recognized activities. We propose methods to detect changes in behavioral routines by comparing activity curves and use these changes to analyze the possibility of changes in cognitive or physical health. We demonstrate our model and evaluate our change detection approach using a longitudinal smart home sensor dataset collected from 18 smart homes with older adult residents. Finally, we demonstrate how big data-based pervasive analytics such as activity curve-based change detection can be used to perform functional health assessment. Our evaluation indicates that correlations do exist between behavior and health changes and that these changes can be automatically detected using smart homes, machine learning, and big data-based pervasive analytics. PMID:27346990
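One way to compare activity curves, sketched below, is to represent each period as per-slot activity distributions and score their divergence; the activities, time slots and the symmetric KL score are illustrative assumptions, not the paper's change detection method.

```python
import numpy as np

# Activity-curve comparison sketch: each day is a distribution over
# activities in fixed time slots; change between two periods is scored with
# a symmetric KL divergence. Activities and data are invented.
rng = np.random.default_rng(12)
activities, slots = 4, 6            # e.g. sleep/cook/relax/other, 4-hour slots

def daily_curves(bias, days=60):
    # Each slot holds a probability distribution over activities.
    alpha = np.ones(activities) + bias
    return rng.dirichlet(alpha, size=(days, slots))

baseline = daily_curves(bias=np.array([2.0, 1.0, 0.5, 0.0])).mean(axis=0)
recent = daily_curves(bias=np.array([0.5, 1.0, 2.0, 0.0])).mean(axis=0)

def sym_kl(p, q, eps=1e-9):
    p, q = p + eps, q + eps
    return np.sum(p * np.log(p / q) + q * np.log(q / p))

change = np.mean([sym_kl(baseline[s], recent[s]) for s in range(slots)])
print("activity change score:", round(change, 3))   # higher = larger change
```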
10 CFR 431.292 - Definitions concerning refrigerated bottled or canned beverage vending machines.
Code of Federal Regulations, 2010 CFR
2010-01-01
§ 431.292 Definitions concerning refrigerated bottled or canned beverage vending machines. Basic model means, with respect to refrigerated bottled or canned beverage vending machines, all units...
Yubo Wang; Tatinati, Sivanagaraja; Liyu Huang; Kim Jeong Hong; Shafiq, Ghufran; Veluvolu, Kalyana C; Khong, Andy W H
2017-07-01
Extracranial robotic radiotherapy employs external markers and a correlation model to trace tumor motion caused by respiration. Real-time tracking of tumor motion, however, requires a prediction model to compensate for the latencies induced by the software (image data acquisition and processing) and hardware (mechanical and kinematic) limitations of the treatment system. A new prediction algorithm based on local receptive fields extreme learning machines (pLRF-ELM) is proposed for respiratory motion prediction. Existing respiratory motion prediction methods model the non-stationary respiratory motion traces directly to predict future values. Unlike these methods, pLRF-ELM performs prediction by modeling higher-level features obtained by mapping the raw respiratory motion into the random feature space of the ELM, instead of modeling the raw respiratory motion directly. The developed method is evaluated using a dataset acquired from 31 patients, for two horizons in line with the latencies of treatment systems like CyberKnife. Results showed that pLRF-ELM is superior to existing prediction methods. Results further highlight that the abstracted higher-level features are suitable for approximating the nonlinear and non-stationary characteristics of respiratory motion for accurate prediction.
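The random-feature core of an ELM is easy to sketch: a fixed random hidden layer maps lagged samples into a feature space, and only the linear readout is solved for. In the example below the signal, layer sizes and ridge term are illustrative assumptions, not the pLRF-ELM configuration.

```python
import numpy as np

# Minimal extreme learning machine (ELM) for one-step time-series prediction.
rng = np.random.default_rng(2)
t = np.arange(3000) * 0.02
signal = np.sin(2 * np.pi * 0.25 * t) + 0.05 * rng.normal(size=t.size)

lags, hidden = 20, 200
X = np.stack([signal[i:i + lags] for i in range(len(signal) - lags)])
y = signal[lags:]                       # predict the next sample

W = rng.normal(size=(lags, hidden))     # random input weights (never trained)
b = rng.normal(size=hidden)
H = np.tanh(X @ W + b)                  # random feature space

split = 2000                            # train on the first part only
H_tr, y_tr = H[:split], y[:split]
beta = np.linalg.solve(H_tr.T @ H_tr + 1e-4 * np.eye(hidden), H_tr.T @ y_tr)
pred = H[split:] @ beta
print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```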
Kim, Dong Wook; Kim, Hwiyoung; Nam, Woong; Kim, Hyung Jun; Cha, In-Ho
2018-04-23
The aim of this study was to build and validate five types of machine learning models that can predict the occurrence of BRONJ associated with dental extraction in patients taking bisphosphonates for the management of osteoporosis. A retrospective review of medical records was conducted to obtain cases and controls for the study. A total of 125 patients, consisting of 41 cases and 84 controls, were selected. Five machine learning prediction algorithms, namely a multivariable logistic regression model, decision tree, support vector machine, artificial neural network, and random forest, were implemented. The outputs of these models were compared with each other and also with conventional methods, such as serum CTX level. The area under the receiver operating characteristic (ROC) curve (AUC) was used to compare the results. The performance of the machine learning models was significantly superior to conventional statistical methods and single predictors. The random forest model yielded the best performance (AUC = 0.973), followed by the artificial neural network (AUC = 0.915), support vector machine (AUC = 0.882), logistic regression (AUC = 0.844), decision tree (AUC = 0.821), drug holiday alone (AUC = 0.810), and CTX level alone (AUC = 0.630). Machine learning methods showed superior performance in predicting BRONJ associated with dental extraction compared to conventional statistical methods using drug holiday and serum CTX level. Machine learning can thus be applied in a wide range of clinical studies. Copyright © 2017. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Boussetoua, Mohammed
During winter, the climate in northern regions is characterized by icing and freezing conditions. Emergency services often use helicopters to reach isolated locations, but the difficult conditions generally experienced in the North, particularly in Quebec, may prevent rescuers from intervening. The main obstacle to such operations is the lack of a de-icing system for small helicopter blades. The overall objective of the project is the research, development, design and manufacture of a system composed of an on-board low-speed rotating generator and heating elements. It draws part of the power supplied by the turbine through the axis of the main rotor of the small aircraft and converts it into electrical power for the heating elements. This innovation will allow flying safely anywhere throughout the year and will protect the lives of users even in the worst weather conditions. Firstly, the research focuses on identifying problems related to the use of in-flight protection systems against hoarfrost on the main rotor blades of different aircraft. In this phase, we specifically focused on the difficulties encountered by aircraft companies using the existing operational systems for protection against hoarfrost. Main rotor blades are difficult to protect on helicopters. Several systems have been considered by helicopter manufacturers, such as electrothermal systems, pneumatic systems or anti-icing fluids. In the current state of technological knowledge, all helicopters certified to fly in icing conditions use electrothermal systems to protect their main rotor against hoarfrost. The small helicopters addressed by this work are forbidden to fly in icing conditions due to the lack of an energy source to operate such systems. The electrothermal approach was therefore chosen in this thesis to protect the main rotor blades of small aircraft in flight. The second part of this thesis concerns the power source feeding the heating system. In recent years, numerous research studies have addressed the development of electromechanical energy converters for various applications, such as transport by road, rail or air. The development of new low-speed, low-weight electric machines with a very high degree of compactness has become a very promising alternative, and this project strongly interests many industries in the field of air transport. The transverse flux machine is a compact structure with a better power-to-mass ratio than other electrical machines. The design of the transverse flux machine was the subject of an electromagnetic study, and an analytical study helped to determine the overall dimensions of the machine. The study was followed by a validation phase of the analytical model using numerical simulations. These two studies were intended to determine how the characteristics of the transverse flux machine change with the different geometric dimensions of its active parts. From the calculations made using the analytical and numerical models, a prototype of the transverse flux machine (600 W, 320 RPM) was designed and manufactured in the AMIL laboratory at the Universite du Quebec a Chicoutimi (UQAC). A test bench was used to compare the theoretical and experimental results, and the measurements obtained on this prototype were compared with the theoretical predictions. This phase of the study satisfactorily demonstrates the reliability of the theoretical models developed.
Finally, a new configuration of this machine has been proposed. The numerical simulation results for this structure are particularly encouraging and call for further investigation. For logistical and financial reasons, a prototype of this configuration has not been manufactured. (Abstract shortened by UMI.)
A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia
NASA Astrophysics Data System (ADS)
Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.
2017-08-01
In this study, a wavelet support vector machine model (WSVM) is proposed and applied to the prediction of monthly Singapore tourist arrivals. The WSVM model is a combination of wavelet analysis and the support vector machine (SVM). The study has two parts: in the first, we compare kernel functions, and in the second, we compare the developed model with the single SVM model. The results showed that the linear kernel function performs better than the RBF kernel, while the WSVM outperforms the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.
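A wavelet-SVM hybrid of this general shape can be sketched with PyWavelets and scikit-learn (both assumed available); the series, wavelet choice and lag count below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import pywt                              # PyWavelets, assumed available
from sklearn.svm import SVR

# Decompose the series with a discrete wavelet transform, fit one SVR per
# reconstructed subseries on lagged values, and sum the component forecasts.
rng = np.random.default_rng(4)
t = np.arange(240)                                        # 20 years, monthly
series = (100 + 0.2 * t + 10 * np.sin(2 * np.pi * t / 12)
          + rng.normal(0, 2, t.size))

def lagged(x, lags=12):
    X = np.stack([x[i:i + lags] for i in range(len(x) - lags)])
    return X, x[lags:]

coeffs = pywt.wavedec(series, "db4", level=2)             # [cA2, cD2, cD1]
components = []
for i in range(len(coeffs)):
    keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(keep, "db4")[:len(series)])

forecast = 0.0
for comp in components:
    X, y = lagged(comp)
    model = SVR(kernel="linear", C=10.0).fit(X, y)
    # One-step-ahead: predict this band's next value from its last window.
    forecast += model.predict(comp[-12:].reshape(1, -1))[0]
print("next-month forecast:", round(forecast, 1))
```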
Index to NASA Tech Briefs, 1974
NASA Technical Reports Server (NTRS)
1975-01-01
The following information was given for 1974: (1) abstracts of reports dealing with new technology derived from the research and development activities of NASA or the U.S. Atomic Energy Commission, arranged by subjects: electronics/electrical, electronics/electrical systems, physical sciences, materials/chemistry, life sciences, mechanics, machines, equipment and tools, fabrication technology, and computer programs, (2) indexes for the above documents: subject, personal author, originating center.
Aerospace Medicine and Biology: A continuing bibliography with indexes (supplement 259)
NASA Technical Reports Server (NTRS)
1984-01-01
A bibliography containing 476 documents introduced into the NASA scientific and technical information system in May 1984 is presented. The primary subject categories included are: life sciences, aerospace medicine, behavioral sciences, man/system technology, life support, and planetary biology. Topics extensively represented were space flight stress, man machine systems, weightlessness, human performance, mental performance, and spacecraft environments. Abstracts for each citation are given.
Gobeill, Julien; Pasche, Emilie; Vishnyakova, Dina; Ruch, Patrick
2013-01-01
The available curated data lag behind the current biological knowledge contained in the literature. Text mining can assist biologists and curators to locate and access this knowledge, for instance by characterizing the functional profile of publications. Gene Ontology (GO) category assignment in free text already supports various applications, such as powering ontology-based search engines, finding curation-relevant articles (triage) or helping the curator to identify and encode functions. Popular text mining tools for GO classification are based on so-called thesaurus-based (or dictionary-based) approaches, which exploit similarities between the input text and GO terms themselves. But their effectiveness remains limited owing to the complex nature of GO terms, which rarely occur in text. In contrast, machine learning approaches exploit similarities between the input text and already curated instances contained in a knowledge base to infer a functional profile. GO Annotations (GOA) and MEDLINE make it possible to exploit a growing amount of curated abstracts (97,000 in November 2012) for populating this knowledge base. Our study compares a state-of-the-art thesaurus-based system with a machine learning system (based on a k-Nearest Neighbours algorithm) for the task of proposing a functional profile for unseen MEDLINE abstracts, and shows how resources and performances have evolved. Systems are evaluated on their ability to propose, for a given abstract, the GO terms (2.8 on average) used for curation in GOA. We show that since 2006, although a massive effort was put into adding synonyms to GO (+300%), the effectiveness of our thesaurus-based system has remained rather constant, with Recall at 20 (R20) rising only from 0.28 to 0.31. In contrast, thanks to the growth of its knowledge base, our machine learning system has steadily improved, with R20 rising from 0.38 in 2006 to 0.56 in 2012. Integrated into semi-automatic workflows or fully automatic pipelines, such systems are increasingly efficient at providing assistance to biologists. Database URL: http://eagl.unige.ch/GOCat/
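The k-NN strategy can be sketched directly: propose GO terms for a new abstract by pooling the terms attached to its nearest curated neighbours. The tiny corpus and GO labels below are invented stand-ins, not GOA/MEDLINE data.

```python
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Toy curated knowledge base: abstracts with their GO term annotations.
curated_abstracts = [
    "kinase phosphorylates substrate in signal transduction cascade",
    "transcription factor binds promoter and activates gene expression",
    "receptor mediates signal transduction at the plasma membrane",
    "dna repair protein recognizes double strand breaks",
]
curated_terms = [
    ["GO:0016301", "GO:0007165"],      # kinase activity, signal transduction
    ["GO:0003700", "GO:0006355"],
    ["GO:0007165", "GO:0004888"],
    ["GO:0006281"],
]

vec = TfidfVectorizer()
X = vec.fit_transform(curated_abstracts)
knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)

query = "membrane receptor triggers a signal transduction pathway"
_, idx = knn.kneighbors(vec.transform([query]))
votes = Counter(term for i in idx[0] for term in curated_terms[i])
print(votes.most_common())             # ranked GO term proposals
```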
Probabilistic machine learning and artificial intelligence.
Ghahramani, Zoubin
2015-05-28
How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.
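As a minimal example of representing and manipulating uncertainty about models and predictions, the sketch below performs conjugate Bayesian updating of a Beta-Bernoulli model; the prior and data are illustrative.

```python
import numpy as np
from scipy import stats

# Beta-Bernoulli learning from experience: the posterior over an unknown
# success probability narrows as observations arrive.
prior_a, prior_b = 1.0, 1.0                  # uniform Beta(1, 1) prior
data = np.array([1, 0, 1, 1, 0, 1, 1, 1])    # observed successes/failures

post_a = prior_a + data.sum()
post_b = prior_b + len(data) - data.sum()
posterior = stats.beta(post_a, post_b)
print("posterior mean:", round(posterior.mean(), 3))
print("95% credible interval:", np.round(posterior.interval(0.95), 3))
```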
Predicting Market Impact Costs Using Nonparametric Machine Learning Models.
Park, Saerom; Lee, Jaewook; Son, Youngdoo
2016-01-01
Market impact cost is the most significant portion of the implicit transaction costs, and managing it can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from a Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
Interpreting linear support vector machine models with heat map molecule coloring
2011-01-01
Background: Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides strong performance, the interpretability of a machine learning model is a desired property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines have been shown to have convincing performance on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity. Results: We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach assists in determining the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor. Conclusions: In combination with heat map coloring, linear support vector machine models can help to guide the modification of a compound in later stages of drug discovery. In particular, substructures identified as important by our method might be a starting point for the optimization of a lead compound. The heat map coloring should be considered complementary to structure-based modeling approaches. As such, it helps to get a better understanding of the binding mode of an inhibitor. PMID:21439031
Machine learning modelling for predicting soil liquefaction susceptibility
NASA Astrophysics Data System (ADS)
Samui, P.; Sitharam, T. G.
2011-01-01
This study describes two machine learning techniques applied to predict the liquefaction susceptibility of soil based on standard penetration test (SPT) data from the 1999 Chi-Chi, Taiwan earthquake. The first technique uses an Artificial Neural Network (ANN) based on multi-layer perceptrons (MLP) trained with the Levenberg-Marquardt backpropagation algorithm. The second uses the Support Vector Machine (SVM), a classification technique firmly grounded in statistical learning theory. The ANN and SVM have been developed to predict liquefaction susceptibility using the corrected SPT blow count [(N1)60] and the cyclic stress ratio (CSR). Further, an attempt has been made to simplify the models so that they require only two parameters [(N1)60 and peak ground acceleration (amax/g)] for the prediction of liquefaction susceptibility. The developed ANN and SVM models have also been applied to different case histories available globally. The paper also highlights the capability of the SVM over the ANN models.
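An SVM classifier over the two simplified parameters can be sketched as follows; the tiny training set is invented for illustration, standing in for field case histories such as the Chi-Chi SPT records.

```python
import numpy as np
from sklearn.svm import SVC

# Toy case histories: [(N1)60, CSR] with 1 = liquefied, 0 = not liquefied.
X = np.array([[5, 0.30], [8, 0.25], [10, 0.35], [12, 0.28],
              [25, 0.10], [30, 0.15], [28, 0.08], [35, 0.12]], dtype=float)
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
site = np.array([[15.0, 0.22]])          # a new site to classify
print("liquefaction predicted:", bool(clf.predict(site)[0]))
print("decision value (>0 leans liquefied):", clf.decision_function(site)[0])
```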
NASA Astrophysics Data System (ADS)
Aalaei, Amin; Davoudpour, Hamid
2012-11-01
This article presents a new mathematical model for integrating dynamic cellular manufacturing into a supply chain system, with extensive coverage of important manufacturing features: multiple plant locations, multi-market allocation, and multi-period planning horizons with demand and part-mix variation and machine capacity. The main constraints are satisfaction of market demand in each period, machine availability, machine time capacity, worker assignment, available worker time, the production volume of each plant and the amounts allocated to each market. The aim of the proposed model is to minimize holding and outsourcing costs, inter-cell material handling cost, external transportation cost, procurement, maintenance and overhead costs of machines, setup cost, the reconfiguration costs of machine installation and removal, and hiring, firing and salary costs of workers. To demonstrate the potential benefits of such a design, an example is presented using the proposed model.
NASA Astrophysics Data System (ADS)
Zhang, Chupeng; Zhao, Huiying; Zhu, Xueliang; Zhao, Shijie; Jiang, Chunye
2018-01-01
Chemical mechanical polishing (CMP) is a key process in the machining route of plane optics. To improve polishing efficiency and accuracy, a CMP model and machine tool were developed. Based on the Preston equation and the axial run-out error measurements of the m circles on the tin plate, a CMP model was presented that can simulate the material removal at any point on the workpiece. An analysis of the model indicated that a lower axial run-out error leads to lower material removal but better polishing efficiency and accuracy. Based on this conclusion, the CMP machine was designed, incorporating an ultraprecision gas hydrostatic guideway and rotary table as well as the Siemens 840Dsl numerical control system. To verify the design principles of the machine, a series of measurement and machining experiments were conducted. The LK-G5000 laser sensor was employed to measure the straightness error of the gas hydrostatic guideway and the axial run-out error of the gas hydrostatic rotary table. A 300-mm-diameter optic was chosen for the surface profile machining experiments performed to determine the CMP efficiency and accuracy.
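The Preston equation at the heart of such CMP models states that the removal rate is proportional to pressure times relative velocity, dh/dt = Kp·P·V; the sketch below evaluates a removal-depth profile under invented values of Kp, P and the velocity field, not the paper's calibrated model.

```python
import numpy as np

# Preston-equation sketch: removal depth = Kp * P * V * dwell time.
Kp = 1e-13          # Preston coefficient, m^2/N (illustrative)
P = 5e3             # polishing pressure, Pa
omega = 2 * np.pi   # tool angular speed, rad/s

r = np.linspace(0.01, 0.15, 50)   # radial positions on a 300 mm optic, m
V = omega * r                     # relative velocity grows with radius
dwell = 60.0                      # dwell time, s

removal = Kp * P * V * dwell      # removal depth profile, m
print("removal at centre vs edge (nm):",
      removal[0] * 1e9, removal[-1] * 1e9)
```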
NASA Astrophysics Data System (ADS)
Rhee, Jinyoung; Kim, Gayoung; Im, Jungho
2017-04-01
Three regions of Indonesia with different rainfall characteristics were chosen for developing drought forecast models based on machine learning. The 6-month Standardized Precipitation Index (SPI6) was selected as the target variable. The models' forecast skill was compared to the skill of long-range climate forecast models in terms of drought accuracy and regression mean absolute error (MAE). Indonesian droughts are known to be related to El Nino Southern Oscillation (ENSO) variability, despite regional differences, as well as to the monsoon, local sea surface temperature (SST), other large-scale atmosphere-ocean interactions such as the Indian Ocean Dipole (IOD) and the South Pacific Convergence Zone (SPCZ), and local factors including topography and elevation. The machine learning models are thus designed to enhance drought forecast skill by combining local and remote SST and remote sensing information, which reflect initial drought conditions, with the long-range climate forecast model results. A total of 126 machine learning models were developed, covering the three regions of West Java (JB), West Sumatra (SB) and Gorontalo (GO); six long-range climate forecast models (MSC_CanCM3, MSC_CanCM4, NCEP, NASA, PNU, POAMA) plus one climatology model based on remote sensing precipitation data; and lead times of 1 to 6 months. When the machine learning models were compared with the long-range climate forecast models, West Java and Gorontalo showed similar characteristics in terms of drought accuracy: the drought accuracy of the long-range climate forecast models was generally higher than that of the machine learning models at short lead times, but the opposite held at longer lead times. For West Sumatra, however, the machine learning models and the long-range climate forecast models showed similar drought accuracy. The machine learning models showed smaller regression errors for all three regions, especially at longer lead times. Among the three regions, the machine learning models developed for Gorontalo showed the highest drought accuracy and the lowest regression error. West Java showed higher drought accuracy than West Sumatra, while West Sumatra showed lower regression error than West Java; the lower error in West Sumatra may reflect the smaller sample size used for training and evaluation in that region. Regional differences in forecast skill are determined by the effect of ENSO and the resulting skill of the long-range climate forecast models. Although somewhat high in West Sumatra, the relative importance of remote sensing variables was low in most cases. The high importance of the variables derived from the long-range climate forecast models indicates that the forecast skill of the machine learning models is mostly determined by the forecast skill of the climate models.
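The SPI6 target variable can be computed with a standard recipe, sketched below with synthetic rainfall standing in for station data: accumulate 6-month totals, fit a gamma distribution, and map to standard normal quantiles.

```python
import numpy as np
from scipy import stats

# 6-month Standardized Precipitation Index (SPI6) sketch.
rng = np.random.default_rng(8)
monthly = rng.gamma(shape=2.0, scale=50.0, size=360)     # 30 years of rainfall

win = np.convolve(monthly, np.ones(6), mode="valid")     # 6-month totals
shape, loc, scale = stats.gamma.fit(win, floc=0)         # fit gamma (loc fixed)
spi6 = stats.norm.ppf(stats.gamma.cdf(win, shape, loc=loc, scale=scale))
print("drought months (SPI6 < -1):", int(np.sum(spi6 < -1)), "of", len(spi6))
```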
Modelling of human-machine interaction in equipment design of manufacturing cells
NASA Astrophysics Data System (ADS)
Cochran, David S.; Arinez, Jorge F.; Collins, Micah T.; Bi, Zhuming
2017-08-01
This paper proposes a systematic approach to model human-machine interactions (HMIs) in supervisory control of machining operations; it characterises the coexistence of machines and humans for an enterprise to balance the goals of automation/productivity and flexibility/agility. In the proposed HMI model, an operator is associated with a set of behavioural roles as a supervisor for multiple, semi-automated manufacturing processes. The model is innovative in the sense that (1) it represents an HMI based on its functions for process control but provides the flexibility for ongoing improvements in the execution of manufacturing processes; (2) it provides a computational tool to define functional requirements for an operator in HMIs. The proposed model can be used to design production systems at different levels of an enterprise architecture, particularly at the machine level in a production system where operators interact with semi-automation to accomplish the goal of 'autonomation' - automation that augments the capabilities of human beings.
Non-symmetric approach to single-screw expander and compressor modeling
NASA Astrophysics Data System (ADS)
Ziviani, Davide; Groll, Eckhard A.; Braun, James E.; Horton, W. Travis; De Paepe, M.; van den Broek, M.
2017-08-01
Single-screw volumetric machines are employed both as compressors in refrigeration systems and, more recently, as expanders in organic Rankine cycle (ORC) applications. The single-screw machine is characterized by a central grooved rotor and two mating toothed starwheels that isolate the working chambers. One of the main features of such a machine is the simultaneous occurrence of the compression or expansion processes on both sides of the main rotor, which results in a more balanced loading on the main shaft bearings compared with twin-screw machines. However, the meshing between the starwheels and the main rotor is a critical aspect, as it heavily affects the volumetric performance of the machine. To allow flow interactions between the two sides of the rotor, a non-symmetric modelling approach has been established to obtain a more comprehensive model of the single-screw machine. The resulting mechanistic model includes in-chamber governing equations, leakage flow models, heat transfer mechanisms, and viscous and mechanical losses. Force and moment balances are used to estimate the loads on the main shaft bearings as well as on the starwheel bearings. An 11 kWe single-screw expander (SSE) adapted from an air compressor and operating with R245fa as the working fluid is used to validate the model. A total of 60 steady-state points at four different rotational speeds were collected to characterize the performance of the machine. The maximum electrical power output and overall isentropic efficiency measured were 7.31 kW and 51.91%, respectively.
NASA Astrophysics Data System (ADS)
Pathak, Jaideep; Wikner, Alexander; Fussell, Rebeckah; Chandra, Sarthak; Hunt, Brian R.; Girvan, Michelle; Ott, Edward
2018-04-01
A model-based approach to forecasting chaotic dynamical systems utilizes knowledge of the mechanistic processes governing the dynamics to build an approximate mathematical model of the system. In contrast, machine learning techniques have demonstrated promising results for forecasting chaotic systems purely from past time series measurements of system state variables (training data), without prior knowledge of the system dynamics. The motivation for this paper is the potential of machine learning for filling in the gaps in our underlying mechanistic knowledge that cause widely used knowledge-based models to be inaccurate. Thus, we here propose a general method that leverages the advantages of these two approaches by combining a knowledge-based model and a machine learning technique to build a hybrid forecasting scheme. Potential applications for such an approach are numerous (e.g., improving weather forecasting). We demonstrate and test the utility of this approach using a particular illustrative version of machine learning known as reservoir computing, and we apply the resulting hybrid forecaster to a low-dimensional chaotic system, as well as to a high-dimensional spatiotemporal chaotic system. These tests yield extremely promising results, in that our hybrid technique is able to accurately predict for a much longer period of time than either its machine-learning component or its model-based component alone.
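A stripped-down version of such a hybrid can be sketched with a linear correction standing in for the reservoir readout: the knowledge-based prediction enters as a feature alongside recent observations. The logistic-map dynamics and the form of the model error below are illustrative assumptions, not the paper's systems.

```python
import numpy as np

# Hybrid forecaster sketch: an imperfect knowledge-based model supplies a
# first guess; a trained linear readout learns to correct it from past data.
def true_step(x):  return 3.9 * x * (1 - x)          # true dynamics
def model_step(x): return 3.6 * x * (1 - x)          # imperfect knowledge model

x = 0.4
traj = []
for _ in range(1200):
    x = true_step(x)
    traj.append(x)
traj = np.array(traj)

# Features: the knowledge-based prediction plus recent observations.
F = np.column_stack([model_step(traj[1:-1]), traj[1:-1], traj[:-2]])
target = traj[2:]
split = 1000
w, *_ = np.linalg.lstsq(F[:split], target[:split], rcond=None)

hybrid = F[split:] @ w
knowledge_only = model_step(traj[1:-1])[split:]
print("hybrid RMSE:   ", np.sqrt(np.mean((hybrid - target[split:]) ** 2)))
print("knowledge RMSE:", np.sqrt(np.mean((knowledge_only - target[split:]) ** 2)))
```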
Thermal Error Test and Intelligent Modeling Research on the Spindle of High Speed CNC Machine Tools
NASA Astrophysics Data System (ADS)
Luo, Zhonghui; Peng, Bin; Xiao, Qijun; Bai, Lu
2018-03-01
Thermal error is the main factor affecting the accuracy of precision machining. Through experiments, this paper studies thermal error testing and intelligent modeling for the spindle of vertical high-speed CNC machine tools, a current focus of machine tool thermal error research. Several thermal error testing devices are designed, in which 7 temperature sensors measure the temperature of the machine tool spindle system and 2 displacement sensors detect the thermal error displacement. A thermal error compensation model with good inversion prediction ability is established by applying principal component analysis, optimizing the temperature measuring points, extracting the characteristic values closely associated with the thermal error displacement, and using artificial neural network technology.
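The PCA step for optimizing temperature measuring points can be sketched as follows: reduce the correlated sensor channels to a few principal components and keep the sensors with the largest loadings. The synthetic data and selection rule are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

# PCA-based selection of temperature measuring points on synthetic data:
# 7 correlated sensor channels following a shared warm-up trend.
rng = np.random.default_rng(9)
t = np.linspace(0, 1, 200)
base = 1 - np.exp(-4 * t)                     # shared warm-up trend
temps = np.stack([a * base + 0.05 * rng.normal(size=t.size)
                  for a in [1.0, 0.9, 0.8, 0.5, 0.4, 0.1, 0.05]], axis=1)

pca = PCA(n_components=2).fit(temps)
print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
# Keep, for each retained component, the sensor with the largest |loading|.
picks = np.argmax(np.abs(pca.components_), axis=1)
print("representative sensors:", sorted(set(picks)))
```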
Singal, Amit G.; Mukherjee, Ashin; Elmunzer, B. Joseph; Higgins, Peter DR; Lok, Anna S.; Zhu, Ji; Marrero, Jorge A; Waljee, Akbar K
2015-01-01
Background: Predictive models for hepatocellular carcinoma (HCC) have been limited by modest accuracy and lack of validation. Machine learning algorithms offer a novel methodology, which may improve HCC risk prognostication among patients with cirrhosis. Our study's aim was to develop and compare predictive models for HCC development among cirrhotic patients, using conventional regression analysis and machine learning algorithms. Methods: We enrolled 442 patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006 (UM cohort) and prospectively followed them until HCC development, liver transplantation, death, or study termination. Regression analysis and machine learning algorithms were used to construct predictive models for HCC development, which were tested on an independent validation cohort from the Hepatitis C Antiviral Long-term Treatment against Cirrhosis (HALT-C) Trial. Both models were also compared to the previously published HALT-C model. Discrimination was assessed using receiver operating characteristic curve analysis and diagnostic accuracy was assessed with net reclassification improvement and integrated discrimination improvement statistics. Results: After a median follow-up of 3.5 years, 41 patients developed HCC. The UM regression model had a c-statistic of 0.61 (95%CI 0.56-0.67), whereas the machine learning algorithm had a c-statistic of 0.64 (95%CI 0.60–0.69) in the validation cohort. The machine learning algorithm had significantly better diagnostic accuracy as assessed by net reclassification improvement (p<0.001) and integrated discrimination improvement (p=0.04). The HALT-C model had a c-statistic of 0.60 (95%CI 0.50-0.70) in the validation cohort and was outperformed by the machine learning algorithm (p=0.047). Conclusion: Machine learning algorithms improve the accuracy of risk stratifying patients with cirrhosis and can be used to accurately identify patients at high-risk for developing HCC. PMID:24169273
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
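The paper's method couples progressive sampling with Bayesian optimization; the sketch below keeps only the progressive-sampling half, scoring a fixed grid of candidate configurations on growing subsets and discarding the weaker half each round (a successive-halving stand-in, not the authors' algorithm).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=20000, n_features=20, random_state=0)

# Candidate (algorithm, hyper-parameter) configurations; a real system would
# mix in other algorithms and let Bayesian optimization propose new ones
candidates = [RandomForestClassifier(n_estimators=n, max_depth=d, random_state=0)
              for n in (10, 50, 100) for d in (3, 10, None)]

# Progressive sampling: score on growing subsets, keep the top half each round
for size in (500, 2000, 8000):
    scores = [cross_val_score(c, X[:size], y[:size], cv=3).mean()
              for c in candidates]
    order = np.argsort(scores)[::-1]
    candidates = [candidates[i] for i in order[:max(1, len(candidates) // 2)]]
    print(f"n={size}: best CV accuracy {max(scores):.3f}, "
          f"{len(candidates)} configs kept")

print("selected configuration:", candidates[0])
```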
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna
2017-08-01
Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's law, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of the multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, lower computational resource usage and shorter computation time, while achieving approximately the same accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines with any size and pole/slot number combination.
Risk estimation using probability machines
2014-01-01
Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
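A minimal sketch of the probability-machine idea on data generated from a logistic model: a random forest estimates conditional probabilities, and a counterfactual effect size is read off by toggling the exposure for every subject. Sample sizes and coefficients are arbitrary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Data generated from a logistic model with one binary exposure
n = 5000
exposure = rng.integers(0, 2, n)
covar = rng.normal(size=n)
logit = -1.0 + 0.8 * exposure + 0.5 * covar
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([exposure, covar])
rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=25,
                            random_state=0).fit(X, y)

# Counterfactual effect size: predicted risk with exposure set to 1 vs 0
X1, X0 = X.copy(), X.copy()
X1[:, 0], X0[:, 0] = 1, 0
p1 = rf.predict_proba(X1)[:, 1]
p0 = rf.predict_proba(X0)[:, 1]
print("average risk difference:", np.mean(p1 - p0))
```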
The dynamic analysis of drum roll lathe for machining of rollers
NASA Astrophysics Data System (ADS)
Qiao, Zheng; Wu, Dongxu; Wang, Bo; Li, Guo; Wang, Huiming; Ding, Fei
2014-08-01
An ultra-precision machine tool for roller machining has been designed and assembled. Because the machine tool's dynamic characteristics strongly affect the quality of microstructures on the roller surface, this paper analyzes the dynamic characteristics of the existing machine tool, as well as the influence of fixing a large-scale, slender roller in the machine on those characteristics. First, a finite element model of the machine tool is built and simplified; based on it, finite element modal analysis is performed to obtain the natural frequencies and mode shapes of the first four modes of the machine tool. According to these modal analysis results, the weak-stiffness subsystems of the machine tool can be further improved and a reasonable bandwidth for the machine tool's control system can be designed. Finally, considering the shock imposed on the feeding system and cutting tool by frequent fast positioning of the Z axis, transient analysis is conducted in ANSYS. Based on the results of the transient analysis, the vibration behavior of key machine tool components and its impact on the cutting process are explored.
ASCR Cybersecurity for Scientific Computing Integrity - Research Pathways and Ideas Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peisert, Sean; Potok, Thomas E.; Jones, Todd
At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long term (10 to 20+ year) cybersecurity fundamental basic research and development challenges, strategies and roadmap facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that takes place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research Facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.
Marafino, Ben J; Davies, Jason M; Bardach, Naomi S; Dean, Mitzi L; Dudley, R Adams
2014-01-01
Existing risk adjustment models for intensive care unit (ICU) outcomes rely on manual abstraction of patient-level predictors from medical charts. Developing an automated method for abstracting these data from free text might reduce cost and data collection times. To develop a support vector machine (SVM) classifier capable of identifying a range of procedures and diagnoses in ICU clinical notes for use in risk adjustment. We selected notes from 2001-2008 for 4191 neonatal ICU (NICU) and 2198 adult ICU patients from the MIMIC-II database from the Beth Israel Deaconess Medical Center. Using these notes, we developed an implementation of the SVM classifier to identify procedures (mechanical ventilation and phototherapy in NICU notes) and diagnoses (jaundice in NICU and intracranial hemorrhage (ICH) in adult ICU). On the jaundice classification task, we also compared classifier performance using n-gram features to unigrams with application of a negation algorithm (NegEx). Our classifier accurately identified mechanical ventilation (accuracy=0.982, F1=0.954) and phototherapy use (accuracy=0.940, F1=0.912), as well as jaundice (accuracy=0.898, F1=0.884) and ICH diagnoses (accuracy=0.938, F1=0.943). Including bigram features improved performance on the jaundice (accuracy=0.898 vs 0.865) and ICH (0.938 vs 0.927) tasks, and outperformed NegEx-derived unigram features (accuracy=0.898 vs 0.863) on the jaundice task. Overall, a classifier using n-gram support vectors displayed excellent performance characteristics. The classifier generalizes to diverse patient populations, diagnoses, and procedures. SVM-based classifiers can accurately identify procedure status and diagnoses among ICU patients, and including n-gram features improves performance, compared to existing methods. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
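A hedged sketch of the classifier design described here, using scikit-learn rather than the authors' implementation: unigram-plus-bigram counts feed a linear SVM. The toy note snippets and labels are invented stand-ins for MIMIC-II text.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for ICU note snippets (the paper used MIMIC-II notes)
notes = [
    "infant remains on mechanical ventilation, settings weaned overnight",
    "phototherapy started for rising bilirubin, jaundice noted on exam",
    "no evidence of intracranial hemorrhage on repeat head CT",
    "extubated this morning, breathing comfortably on room air",
]
labels = [1, 0, 0, 0]  # e.g. mechanical ventilation: yes/no

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # unigram + bigram features
    LinearSVC(),
)
clf.fit(notes, labels)
print(clf.predict(["patient ventilated overnight on mechanical ventilation"]))
```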
Creating an Electronic Reference and Information Database for Computer-aided ECM Design
NASA Astrophysics Data System (ADS)
Nekhoroshev, M. V.; Pronichev, N. D.; Smirnov, G. V.
2018-01-01
The paper presents a review on electrochemical shaping. An algorithm has been developed to implement a computer shaping model applicable to pulse electrochemical machining. For that purpose, the characteristics of pulse current occurring in electrochemical machining of aviation materials have been studied. Based on integrating the experimental results and comprehensive electrochemical machining process data modeling, a subsystem for computer-aided design of electrochemical machining for gas turbine engine blades has been developed; the subsystem was implemented in the Teamcenter PLM system.
NASA Astrophysics Data System (ADS)
Wang, Li-Chih; Chen, Yin-Yann; Chen, Tzu-Li; Cheng, Chen-Yang; Chang, Chin-Wei
2014-10-01
This paper studies a solar cell industry scheduling problem, which is similar to traditional hybrid flowshop scheduling (HFS). In a typical HFS problem, the allocation of machine resources for each order should be scheduled in advance. However, the challenge in solar cell manufacturing is that the number of machines can be adjusted dynamically to complete the job. An optimal production scheduling model is developed to explore these issues, considering practical characteristics such as hybrid flowshop, parallel machine system, dedicated machines, sequence independent job setup times and sequence dependent job setup times. The objective of this model is to minimise the makespan and to decide the processing sequence of the orders/lots in each stage, lot-splitting decisions for the orders and the number of machines used to satisfy the demands in each stage. From the experimental results, lot-splitting has a significant effect on shortening the makespan, and the improvement is influenced by the processing time and the setup time of orders. Therefore, the threshold point to improve the makespan can be identified. In addition, the model also indicates that more lot-splitting approaches, that is, greater flexibility in allocating orders/lots to machines, will result in better scheduling performance.
NASA Astrophysics Data System (ADS)
Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.
2018-01-01
This paper dwells upon a variance parameterization method. Variance or dimensional parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated in a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the built tree. In this research the authors consider a parameterization method for machine tooling used to manufacture parts on multiaxial CNC machining centers in a real manufacturing process. The developed method significantly reduces tooling design time when a part's geometric parameters change. The method can also reduce the time needed for design and engineering preproduction, in particular for developing control programs for CNC equipment and control and measuring machines, and for automating the release of design and engineering documentation. Variance parameterization helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.
Improving Energy Efficiency in CNC Machining
NASA Astrophysics Data System (ADS)
Pavanaskar, Sushrut S.
We present our work on analyzing and improving the energy efficiency of multi-axis CNC milling process. Due to the differences in energy consumption behavior, we treat 3- and 5-axis CNC machines separately in our work. For 3-axis CNC machines, we first propose an energy model that estimates the energy requirement for machining a component on a specified 3-axis CNC milling machine. Our model makes machine-specific predictions of energy requirements while also considering the geometric aspects of the machining toolpath. Our model - and the associated software tool - facilitate direct comparison of various alternative toolpath strategies based on their energy-consumption performance. Further, we identify key factors in toolpath planning that affect energy consumption in CNC machining. We then use this knowledge to propose and demonstrate a novel toolpath planning strategy that may be used to generate new toolpaths that are inherently energy-efficient, inspired by research on digital micrography -- a form of computational art. For 5-axis CNC machines, the process planning problem consists of several sub-problems that researchers have traditionally solved separately to obtain an approximate solution. After illustrating the need to solve all sub-problems simultaneously for a truly optimal solution, we propose a unified formulation based on configuration space theory. We apply our formulation to solve a problem variant that retains key characteristics of the full problem but has lower dimensionality, allowing visualization in 2D. Given the complexity of the full 5-axis toolpath planning problem, our unified formulation represents an important step towards obtaining a truly optimal solution. With this work on the two types of CNC machines, we demonstrate that without changing the current infrastructure or business practices, machine-specific, geometry-based, customized toolpath planning can save energy in CNC machining.
Rapid Prototyping in Technology Education.
ERIC Educational Resources Information Center
Flowers, Jim; Moniz, Matt
2002-01-01
Describes how technology education majors are using a high-tech model builder, called a fused deposition modeling machine, to develop their models directly from computer-based designs without any machining. Gives examples of applications in technology education. (JOW)
Modelling machine ensembles with discrete event dynamical system theory
NASA Technical Reports Server (NTRS)
Hunter, Dan
1990-01-01
Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks for a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states, an event alphabet that portrays actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or implementing a feedback DEDS controller (closed-loop control).
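The local-model definition above translates almost directly into code. A minimal sketch follows; the pick/place submachine, event names, and durations are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class LocalModel:
    """A DEDS local model: states, an event alphabet, an initial state,
    a partial transition function, and per-event durations."""
    states: set
    events: set
    init: str
    delta: dict = field(default_factory=dict)     # (state, event) -> next state
    duration: dict = field(default_factory=dict)  # event -> time required

    def step(self, state, event):
        next_state = self.delta.get((state, event))  # partial function
        if next_state is None:
            raise ValueError(f"event {event!r} not enabled in state {state!r}")
        return next_state, self.duration[event]

# A toy submachine: a robot arm that picks and places
arm = LocalModel(
    states={"idle", "holding"},
    events={"pick", "place"},
    init="idle",
    delta={("idle", "pick"): "holding", ("holding", "place"): "idle"},
    duration={"pick": 2.0, "place": 1.5},
)

state, t = arm.init, 0.0
for e in ["pick", "place", "pick"]:
    state, dt = arm.step(state, e)
    t += dt
print(state, t)  # holding 5.5
```

A global model would compose several such local models and restrict their parallel operation with interaction constraints.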
Next-Generation Machine Learning for Biological Networks.
Camacho, Diogo M; Collins, Katherine M; Powers, Rani K; Costello, James C; Collins, James J
2018-06-14
Machine learning, a collection of data-analytical techniques aimed at building predictive models from multi-dimensional datasets, is becoming integral to modern biological research. By enabling one to generate models that learn from large datasets and make predictions on likely outcomes, machine learning can be used to study complex cellular systems such as biological networks. Here, we provide a primer on machine learning for life scientists, including an introduction to deep learning. We discuss opportunities and challenges at the intersection of machine learning and network biology, which could impact disease biology, drug discovery, microbiome research, and synthetic biology. Copyright © 2018 Elsevier Inc. All rights reserved.
Modelling daily water temperature from air temperature for the Missouri River.
Zhu, Senlin; Nyarko, Emmanuel Karlo; Hadzima-Nyarko, Marijana
2018-01-01
The bio-chemical and physical characteristics of a river are directly affected by water temperature, which thereby affects the overall health of aquatic ecosystems. It is a complex problem to accurately estimate water temperature. Modelling of river water temperature is usually based on a suitable mathematical model and field measurements of various atmospheric factors. In this article, the air-water temperature relationship of the Missouri River is investigated by developing three different machine learning models (Artificial Neural Network (ANN), Gaussian Process Regression (GPR), and Bootstrap Aggregated Decision Trees (BA-DT)). Standard models (linear regression, non-linear regression, and stochastic models) are also developed and compared to the machine learning models. Among the three standard models, the stochastic model clearly outperforms the standard linear and nonlinear models. All three machine learning models have comparable results and outperform the stochastic model, with GPR having slightly better results for stations No. 2 and 3, while BA-DT has slightly better results for station No. 1. The machine learning models are very effective tools which can be used for the prediction of daily river temperature.
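A minimal sketch of the air-to-water comparison on synthetic daily series (not the Missouri River data), contrasting a standard linear regression with one of the machine learning models used here, Gaussian process regression.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic daily series: water temperature lags and damps air temperature
days = np.arange(730)
air = 12 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, days.size)
water = 10 + 7 * np.sin(2 * np.pi * (days - 15) / 365) + rng.normal(0, 1, days.size)

X_train, y_train = air[:365].reshape(-1, 1), water[:365]
X_test, y_test = air[365:].reshape(-1, 1), water[365:]

for name, model in [("linear regression", LinearRegression()),
                    ("GPR", GaussianProcessRegressor(alpha=1.0, normalize_y=True))]:
    model.fit(X_train, y_train)
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(f"{name}: RMSE = {rmse:.2f} degC")
```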
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vanchurin, Vitaly, E-mail: vvanchur@d.umn.edu
We initiate a formal study of logical inferences in context of the measure problem in cosmology or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape and CM machines take CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO's Turing number as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal that halts in finite time and immortal that runs forever. In context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.
Reversibility and measurement in quantum computing
NASA Astrophysics Data System (ADS)
Leão, J. P.
1998-03-01
The relation between computation and measurement at a fundamental physical level is yet to be understood. Rolf Landauer was perhaps the first to stress the strong analogy between these two concepts. His early queries have regained pertinence with the recent efforts to develop realizable models of quantum computers. In this context the irreversibility of quantum measurement appears in conflict with the requirement of reversibility of the overall computation associated with the unitary dynamics of quantum evolution. The latter in turn is responsible for the features of superposition and entanglement which make some quantum algorithms superior to classical ones for the same task in speed and resource demand. In this article we advocate an approach to this question which relies on a model of computation designed to enforce the analogy between the two concepts, instead of demarcating them as has been the case so far. The model is introduced as a symmetrization of the classical Turing machine model and is then carried over to quantum mechanics, first as an abstract local interaction scheme (symbolic measurement) and finally in a nonlocal noninteractive implementation based on Aharonov-Bohm potentials and modular variables. It is suggested that this implementation leads to the most ubiquitous of quantum algorithms: the Discrete Fourier Transform.
Automated Design of Complex Dynamic Systems
Hermans, Michiel; Schrauwen, Benjamin; Bienstman, Peter; Dambre, Joni
2014-01-01
Several fields of study are concerned with uniting the concept of computation with that of the design of physical systems. For example, a recent trend in robotics is to design robots in such a way that they require a minimal control effort. Another example is found in the domain of photonics, where recent efforts try to benefit directly from the complex nonlinear dynamics to achieve more efficient signal processing. The underlying goal of these and similar research efforts is to internalize a large part of the necessary computations within the physical system itself by exploiting its inherent non-linear dynamics. This, however, often requires the optimization of large numbers of system parameters, related to both the system's structure as well as its material properties. In addition, many of these parameters are subject to fabrication variability or to variations through time. In this paper we apply a machine learning algorithm to optimize physical dynamic systems. We show that such algorithms, which are normally applied on abstract computational entities, can be extended to the field of differential equations and used to optimize an associated set of parameters which determine their behavior. We show that machine learning training methodologies are highly useful in designing robust systems, and we provide a set of both simple and complex examples using models of physical dynamical systems. Interestingly, the derived optimization method is intimately related to direct collocation, a method known in the field of optimal control. Our work suggests that the application domains of both machine learning and optimal control have a largely unexplored overlapping area which envelopes a novel design methodology of smart and highly complex physical systems. PMID:24497969
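A toy version of the idea, under stated assumptions: treat the parameters of a simulated damped oscillator as trainable, and minimize a trajectory-matching loss with finite-difference gradient descent. The dynamics, loss, learning rate, and target values are all arbitrary stand-ins for the paper's collocation-related machinery.

```python
import numpy as np

def simulate(k, c, steps=600, dt=0.01):
    """Euler integration of a damped oscillator x'' = -k*x - c*x'."""
    x, v = 1.0, 0.0
    traj = np.empty(steps)
    for t in range(steps):
        v += (-k * x - c * v) * dt
        x += v * dt
        traj[t] = x
    return traj

target = simulate(k=4.0, c=0.5)     # desired physical behavior
params = np.array([2.0, 0.1])       # initial guess for (k, c)

def loss(p):
    return np.mean((simulate(*p) - target) ** 2)

# "Training" the physical system: finite-difference gradient descent
# over the differential equation's parameters
lr, eps = 0.2, 1e-4
for _ in range(500):
    grad = np.array([(loss(params + eps * e) - loss(params - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
    params = np.maximum(params - lr * grad, 0.01)  # keep parameters physical

print("recovered (k, c):", params.round(2))  # should move toward (4.0, 0.5)
```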
Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P
2017-08-14
The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies, and different metrics. In this study, different methods were compared using one single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective execution. Deep Neural Networks are the top performing classifiers, highlighting the added value of Deep Neural Networks over other more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better, at almost one standard deviation above the mean performance. Furthermore, Multi-task and PCM implementations were shown to improve performance over single task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations below the mean performance. Random Forests, Support Vector Machines, and Logistic Regression performed around mean performance. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with unoptimized 'DNN_PCM'). Here, a standardized set to test and evaluate different machine learning algorithms in the context of multi-task learning is offered by providing the data and the protocols.
Jaspers, Arne; De Beéck, Tim Op; Brink, Michel S; Frencken, Wouter G P; Staes, Filip; Davis, Jesse J; Helsen, Werner F
2018-05-01
Machine learning may contribute to understanding the relationship between the external load and internal load in professional soccer. Therefore, the relationship between external load indicators (ELIs) and the rating of perceived exertion (RPE) was examined using machine learning techniques on a group and individual level. Training data were collected from 38 professional soccer players over 2 seasons. The external load was measured using global positioning system technology and accelerometry. The internal load was obtained using the RPE. Predictive models were constructed using 2 machine learning techniques, artificial neural networks and least absolute shrinkage and selection operator (LASSO) models, and 1 naive baseline method. The predictions were based on a large set of ELIs. Using each technique, 1 group model involving all players and 1 individual model for each player were constructed. These models' performance on predicting the reported RPE values for future training sessions was compared with the naive baseline's performance. Both the artificial neural network and LASSO models outperformed the baseline. In addition, the LASSO model made more accurate predictions for the RPE than did the artificial neural network model. Furthermore, decelerations were identified as important ELIs. Regardless of the applied machine learning technique, the group models resulted in equivalent or better predictions for the reported RPE values than the individual models. Machine learning techniques may have added value in predicting RPE for future sessions to optimize training design and evaluation. These techniques may also be used in conjunction with expert knowledge to select key ELIs for load monitoring.
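A hedged sketch of the LASSO half of this setup on synthetic data: external load indicators predict session RPE, and the zeroed coefficients double as an indicator-selection step. The feature count, coefficients, and noise level are invented.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic external-load indicators (e.g. distance, sprints, decelerations)
n, p = 400, 12
X = rng.normal(size=(n, p))
rpe = 5 + 1.2 * X[:, 0] + 0.8 * X[:, 3] + rng.normal(0, 0.7, n)  # 2 relevant ELIs

X_tr, X_te, y_tr, y_te = train_test_split(X, rpe, random_state=0)
model = make_pipeline(StandardScaler(), Lasso(alpha=0.1)).fit(X_tr, y_tr)

print("test R^2:", round(model.score(X_te, y_te), 3))
# LASSO zeroes out irrelevant ELIs, which doubles as importance selection
print("nonzero coefficients:", np.flatnonzero(model[-1].coef_))
```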
ERIC Educational Resources Information Center
Georgiopoulos, M.; DeMara, R. F.; Gonzalez, A. J.; Wu, A. S.; Mollaghasemi, M.; Gelenbe, E.; Kysilka, M.; Secretan, J.; Sharma, C. A.; Alnsour, A. J.
2009-01-01
This paper presents an integrated research and teaching model that has resulted from an NSF-funded effort to introduce results of current Machine Learning research into the engineering and computer science curriculum at the University of Central Florida (UCF). While in-depth exposure to current topics in Machine Learning has traditionally occurred…
Held, Elizabeth; Cape, Joshua; Tintle, Nathan
2016-01-01
Machine learning methods continue to show promise in the analysis of data from genetic association studies because of the high number of variables relative to the number of observations. However, few best practices exist for the application of these methods. We extend a recently proposed supervised machine learning approach for predicting disease risk by genotypes to be able to incorporate gene expression data and rare variants. We then apply 2 different versions of the approach (radial and linear support vector machines) to simulated data from Genetic Analysis Workshop 19 and compare performance to logistic regression. Method performance was not radically different across the 3 methods, although the linear support vector machine tended to show small gains in predictive ability relative to a radial support vector machine and logistic regression. Importantly, as the number of genes in the models was increased, even when those genes contained causal rare variants, model predictive ability showed a statistically significant decrease in performance for both the radial support vector machine and logistic regression. The linear support vector machine showed more robust performance to the inclusion of additional genes. Further work is needed to evaluate machine learning approaches on larger samples and to evaluate the relative improvement in model prediction from the incorporation of gene expression data.
9. VIEW, LOOKING SOUTH, OF INTERLOCKING MACHINE, WITH ORIGINAL MODEL ...
9. VIEW, LOOKING SOUTH, OF INTERLOCKING MACHINE, WITH ORIGINAL MODEL BOARD IN CENTER, NEW MODEL BOARD AT LEFT AND MODEL SEMAPHORES AT TOP OF PHOTOGRAPH, THIRD FLOOR - South Station Tower No. 1 & Interlocking System, Dewey Square, Boston, Suffolk County, MA
Deng, Li; Wang, Guohua; Yu, Suihuai
2016-01-01
In order to consider the psychological cognitive characteristics affecting operating comfort and realize the automatic layout design, cognitive ergonomics and GA-ACA (genetic algorithm and ant colony algorithm) were introduced into the layout design of human-machine interaction interface. First, from the perspective of cognitive psychology, according to the information processing process, the cognitive model of human-machine interaction interface was established. Then, the human cognitive characteristics were analyzed, and the layout principles of human-machine interaction interface were summarized as the constraints in layout design. Again, the expression form of fitness function, pheromone, and heuristic information for the layout optimization of cabin was studied. The layout design model of human-machine interaction interface was established based on GA-ACA. At last, a layout design system was developed based on this model. For validation, the human-machine interaction interface layout design of drilling rig control room was taken as an example, and the optimization result showed the feasibility and effectiveness of the proposed method. PMID:26884745
A system framework of inter-enterprise machining quality control based on fractal theory
NASA Astrophysics Data System (ADS)
Zhao, Liping; Qin, Yongtao; Yao, Yiyong; Yan, Peng
2014-03-01
In order to meet the quality control requirements of dynamic and complicated product machining processes among enterprises, a system framework of inter-enterprise machining quality control based on fractal theory was proposed. In this system framework, the fractal characteristics of the inter-enterprise machining quality control function were analysed, and the model of inter-enterprise machining quality control was constructed from the nature of fractal structures. Furthermore, the goal-driven strategy of inter-enterprise quality control and the dynamic organisation strategy of inter-enterprise quality improvement were developed through characteristic analysis of this model. In addition, the architecture of inter-enterprise machining quality control based on fractal theory was established by means of Web services. Finally, a case study was presented. The results showed that the proposed method is feasible and can provide guidance for quality control and support for product reliability in inter-enterprise machining processes.
NASA Astrophysics Data System (ADS)
Liu, Shuang; Liu, Fei; Hu, Shaohua; Yin, Zhenbiao
The major power information of the main transmission system in machine tools (MTSMT) during the machining process includes the effective output power (i.e. cutting power), the input power and power loss of the mechanical transmission system, and the main motor power loss. This information is easy to obtain in the lab but difficult to evaluate during a manufacturing process. To solve this problem, a separation method is proposed here to extract the MTSMT power information during the machining process. In this method, the energy flow and the mathematical models of the major power information of MTSMT during the machining process are set up first. Based on these mathematical models and the basic data tables obtained from experiments, the above-mentioned power information during the machining process can be separated just by measuring the real-time total input power of the spindle motor. The operation program of this method is also given.
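A minimal sketch of the separation step under an assumed additive power model; the loss coefficients stand in for the experimentally obtained basic data tables, and all numbers are hypothetical.

```python
def separate_power(p_input, idle_power, motor_loss_coeff, trans_loss_coeff):
    """Separate spindle power measured during machining (all in watts).

    A simplified additive model (hypothetical coefficients, standing in
    for the paper's experimentally obtained basic data tables):
      input = idle (no-load) power
            + cutting power
            + additional motor loss        ~ motor_loss_coeff * cutting
            + additional transmission loss ~ trans_loss_coeff * cutting
    """
    extra = p_input - idle_power
    cutting = extra / (1 + motor_loss_coeff + trans_loss_coeff)
    return {
        "cutting": cutting,
        "extra_motor_loss": motor_loss_coeff * cutting,
        "extra_transmission_loss": trans_loss_coeff * cutting,
        "idle": idle_power,
    }

# Example: 4.2 kW measured at the spindle motor input while cutting
print(separate_power(p_input=4200, idle_power=900,
                     motor_loss_coeff=0.12, trans_loss_coeff=0.08))
```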
Analyzing Array Manipulating Programs by Program Transformation
NASA Technical Reports Server (NTRS)
Cornish, J. Robert M.; Gange, Graeme; Navas, Jorge A.; Schachte, Peter; Sondergaard, Harald; Stuckey, Peter J.
2014-01-01
We explore a transformational approach to the problem of verifying simple array-manipulating programs. Traditionally, verification of such programs requires intricate analysis machinery to reason with universally quantified statements about symbolic array segments, such as "every data item stored in the segment A[i] to A[j] is equal to the corresponding item stored in the segment B[i] to B[j]." We define a simple abstract machine which allows for set-valued variables and we show how to translate programs with array operations to array-free code for this machine. For the purpose of program analysis, the translated program remains faithful to the semantics of array manipulation. Based on our implementation in LLVM, we evaluate the approach with respect to its ability to extract useful invariants and the cost in terms of code size.
A data-driven multi-model methodology with deep feature selection for short-term wind forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias
With the growing wind penetration into the power system worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by the first layer models and generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. This developed multi-model wind forecasting methodology is compared to several benchmarks. The effectiveness of the proposed methodology is evaluated to provide 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that compared to the single-algorithm models, the developed multi-model framework with deep feature selection procedure has improved the forecasting accuracy by up to 30%.
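A sketch of the two-layer idea using scikit-learn's stacking (a stand-in for the paper's blending algorithm): several statistically different first-layer learners feed a second-layer combiner. The synthetic features and target are placeholders for lagged wind-speed inputs.

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-in for wind-speed data: lagged speeds -> 1-hour-ahead speed
n = 1000
X = rng.normal(size=(n, 6))  # e.g. recent lags and meteorological features
y = X @ np.array([0.5, 0.3, 0.1, 0, 0, 0]) + 0.2 * np.sin(X[:, 0]) \
    + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Layer 1: statistically different learners; layer 2: a blending model
ensemble = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)),
                ("gbm", GradientBoostingRegressor(random_state=0)),
                ("svr", SVR())],
    final_estimator=Ridge(),
)
ensemble.fit(X_tr, y_tr)
print("ensemble R^2:", round(ensemble.score(X_te, y_te), 3))
```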
Improving the Automated Detection and Analysis of Secure Coding Violations
2014-06-01
eliminating software vulnerabilities and other flaws. The CERT Division produces books and courses that foster a security mindset in developers, and...website also provides a virtual machine containing a complete build of the Rosecheckers project on Linux. The Rosecheckers project leverages the...Compass/ROSE6 project developed at Lawrence Livermore National Laboratory. This project provides a high-level API for accessing the abstract syntax tree
Dome: Distributed Object Migration Environment
1994-05-01
CMU-CS-94-153, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213. Abstract: Dome... Linda [4], Isis [2], and Express [6] allow a programmer to treat a heterogeneous network of computers as a parallel machine. These tools allow the
Learning by Reading for Robust Reasoning in Intelligent Agents
2018-04-24
Our hypotheses are that analogical processing plays multiple roles in enabling machines to learn by reading, and that...systems). Our overall hypotheses are that analogical processing plays multiple roles in learning by reading, and that qualitative representations provide...from reading this text? Narrative function can be seen as a kind of communication act, but the idea goes a bit beyond that. Communication acts are
Literature Mining of Pathogenesis-Related Proteins in Human Pathogens for Database Annotation
2009-10-01
...submission and for literature mining result display with automatically tagged abstracts. I. Literature data sets for machine learning algorithm training...mass spectrometry) proteomics data from Burkholderia strains. • Task1 (M13-15): Preliminary analysis of the Burkholderia proteomic space
New Abstractions for Mobile Connectivity and Resource Management
2016-05-01
networked systems, consisting of replicated backend services and mobile, multi-homed clients. We derive a state machine for ECCP supporting migration...makes ECCP useful not only for mobility of client devices, but also for backend services which are increasingly run in VMs or containers on platforms...layers of the network stack, instead of the traditional IP/port, improve mobility for clients and backend services and reduce unnecessary coupling of
Experimental Evaluation of Cold-Sprayed Copper Rotating Bands for Large-Caliber Projectiles
2015-05-01
A copper rotating band is the munition component responsible for both obturation and transfer of torque from the gun barrel's rifling to the...munition, thereby causing the projectile to spin. Pure copper, copper alloy, and brass rotating bands are typically fabricated to steel munitions using...Machine Shop for fabrication; and the Transonic Experimental Facility for facilitating the gun-launch experiments.
USSR and Eastern Europe Scientific Abstracts, Biomedical and Behavioral Sciences, Number 81.
1977-11-28
Hydrobiology; Industrial Microbiology; Industrial Toxicology; Marine Mammals; Microbiology; Molecular Biology; Neurosciences...in progress. Factors involved in increasing productivity were calculated and presented in 4 tables: duration of use of equipment in 1 day (hours...machines no longer in production but omits materials on some new equipment and some new forms of organization of the work of the agrochemical
Computer Generation of Natural Language from a Deep Conceptual Base
1974-01-01
It would be useful to have machines which could read scientific documents, newspaper articles, novels, etc., and translate them into other...preparing abstracts for articles and in headline writing (at least in those cases in which headlines are used as an indication of article content...above), a definite or indefinite article is attached to the noun phrase. The selection of color and size adjectives is made in ... fashion
Building a protein name dictionary from full text: a machine learning term extraction approach
Shi, Lei; Campagne, Fabien
2005-01-01
Background The majority of information in the biological literature resides in full text articles, instead of abstracts. Yet, abstracts remain the focus of many publicly available literature data mining tools. Most literature mining tools rely on pre-existing lexicons of biological names, often extracted from curated gene or protein databases. This is a limitation, because such databases have low coverage of the many name variants which are used to refer to biological entities in the literature. Results We present an approach to recognize named entities in full text. The approach collects high frequency terms in an article, and uses support vector machines (SVM) to identify biological entity names. It is also computationally efficient and robust to noise commonly found in full text material. We use the method to create a protein name dictionary from a set of 80,528 full text articles. Only 8.3% of the names in this dictionary match SwissProt description lines. We assess the quality of the dictionary by studying its protein name recognition performance in full text. Conclusion This dictionary term lookup method compares favourably to other published methods, supporting the significance of our direct extraction approach. The method is strong in recognizing name variants not found in SwissProt. PMID:15817129
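A toy sketch of the two-step recipe described here, shrunk to a few sentences of text: collect high-frequency candidate terms, featurize them with simple surface cues, and let an SVM decide which are entity names. The feature set and labeled seed are invented; the paper's real features and training data differ.

```python
import re
from collections import Counter
from sklearn.svm import SVC

# Toy full-text stand-in; the paper used 80,528 full-text articles
text = ("The p53 protein binds MDM2 in vivo. MDM2 degrades p53. "
        "The cell was lysed and p53 was measured. BRCA1 interacts with p53.")

# Step 1: collect high-frequency candidate terms from the article
tokens = re.findall(r"[A-Za-z0-9]+", text)
candidates = [t for t, c in Counter(tokens).items() if c >= 2 and len(t) > 1]

# Step 2: simple surface features per term (hypothetical feature set)
def features(term):
    return [any(ch.isdigit() for ch in term),  # contains digits
            term.isupper(),                    # all caps
            term[0].isupper(),                 # capitalized
            len(term)]                         # term length

# Tiny labeled seed (1 = protein name) to train the SVM
seed = {"p53": 1, "MDM2": 1, "BRCA1": 1, "cell": 0, "The": 0, "was": 0}
clf = SVC().fit([features(t) for t in seed], list(seed.values()))

dictionary = [t for t in candidates if clf.predict([features(t)])[0] == 1]
print(dictionary)  # expected: the frequent protein-like terms, e.g. p53, MDM2
```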
Athanasopoulos, Panagiotis G.; Hadjittofi, Christopher; Dharmapala, Arinda Dinesh; Orti-Rodriguez, Rafael Jose; Ferro, Alessandra; Nasralla, David; Konstantinidou, Sofia K.; Malagó, Massimo
2016-01-01
Abstract Donor organ shortage continues to limit the availability of liver transplantation, a successful and established therapy for end-stage liver diseases. Strategies to mitigate graft shortage include the utilization of marginal livers and, recently, ex-situ normothermic machine perfusion devices. A 59-year-old woman with cirrhosis due to primary sclerosing cholangitis was offered an ex-situ machine perfused graft with unnoticed severe injury of the suprahepatic vasculature due to a road traffic accident. Following a complex avulsion, repair and reconstruction of all donor hepatic veins as well as the suprahepatic inferior vena cava, the patient underwent a face-to-face piggy-back orthotopic liver transplantation and was discharged on the 11th postoperative day after an uncomplicated recovery. This report illustrates the operative technique used to make an otherwise unusable organ transplantable, in the current environment of donor shortage and declining graft quality. Normothermic machine perfusion can definitely play a role in increasing the graft pool, without compromising the quality of livers that had vascular or other damage before being ex-situ perfused. Furthermore, it emphasizes the importance of promptly and thoroughly communicating organ injuries, as well as considering all reconstructive options within the level of expertise at the recipient center. PMID:27082550
Clark, Alex M; Williams, Antony J; Ekins, Sean
2015-01-01
The current rise in the use of open lab notebook techniques means that there are an increasing number of scientists who make chemical information freely and openly available to the entire community as a series of micropublications that are released shortly after the conclusion of each experiment. We propose that this trend be accompanied by a thorough examination of data sharing priorities. We argue that the most significant immediate benefactor of open data is in fact chemical algorithms, which are capable of absorbing vast quantities of data, and using it to present concise insights to working chemists, on a scale that could not be achieved by traditional publication methods. Making this goal practically achievable will require a paradigm shift in the way individual scientists translate their data into digital form, since most contemporary methods of data entry are designed for presentation to humans rather than consumption by machine learning algorithms. We discuss some of the complex issues involved in fixing current methods, as well as some of the immediate benefits that can be gained when open data is published correctly using unambiguous machine readable formats. Graphical Abstract: Lab notebook entries must target both visualisation by scientists and use by machine learning algorithms.
Efficient Checkpointing of Virtual Machines using Virtual Machine Introspection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aderholdt, Ferrol; Han, Fang; Scott, Stephen L
Cloud Computing environments rely heavily on system-level virtualization. This is due to the inherent benefits of virtualization including fault tolerance through checkpoint/restart (C/R) mechanisms. Because clouds are the abstraction of large data centers and large data centers have a higher potential for failure, it is imperative that a C/R mechanism for such an environment provide minimal latency as well as a small checkpoint file size. Recently, there has been much research into C/R with respect to virtual machines (VM) providing excellent solutions to reduce either checkpoint latency or checkpoint file size. However, these approaches do not provide both. This paper presents a method of checkpointing VMs by utilizing virtual machine introspection (VMI). Through the usage of VMI, we are able to determine which pages of memory within the guest are used or free and are better able to reduce the amount of pages written to disk during a checkpoint. We have validated this work by using various benchmarks to measure the latency along with the checkpoint size. With respect to checkpoint file size, our approach results in file sizes within 24% or less of the actual used memory within the guest. Additionally, the checkpoint latency of our approach is up to 52% faster than KVM's default method.
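A minimal sketch of the core saving: write only the pages that introspection reports as used, rather than the whole guest address space. The page-record format and compression choice are arbitrary; a real implementation would obtain the used-page set from a VMI library rather than a hand-written set.

```python
import zlib

PAGE_SIZE = 4096

def checkpoint(guest_memory: bytes, used_page_numbers: set) -> bytes:
    """Write only pages the guest actually uses (as reported by VM
    introspection) instead of the whole address space."""
    chunks = []
    for pfn in sorted(used_page_numbers):
        page = guest_memory[pfn * PAGE_SIZE:(pfn + 1) * PAGE_SIZE]
        chunks.append(pfn.to_bytes(8, "little") + page)  # page number + data
    return zlib.compress(b"".join(chunks))

# Toy guest: 256 pages, of which introspection says only 3 are in use
memory = bytes(256 * PAGE_SIZE)
used = {0, 17, 42}
blob = checkpoint(memory, used)
print(f"checkpoint: {len(blob)} bytes vs {len(memory)} bytes of guest RAM")
```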
Machine learning in updating predictive models of planning and scheduling transportation projects
DOT National Transportation Integrated Search
1997-01-01
A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportation's (KDOT's) internal management system is presented. The predictive models used...
ChargeOut! : discounted cash flow compared with traditional machine-rate analysis
Ted Bilek
2008-01-01
ChargeOut!, a discounted cash-flow methodology in spreadsheet format for analyzing machine costs, is compared with traditional machine-rate methodologies. Four machine-rate models are compared and a common data set representative of logging skidders' costs is used to illustrate the differences between ChargeOut! and the machine-rate methods. The study found that the...
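A small sketch of the contrast, under simplifying assumptions: discounting end-of-year cash flows versus simply summing them. The skidder figures are hypothetical, and real machine-rate analysis amortizes costs into an hourly rate rather than taking a raw sum.

```python
def npv(cash_flows, rate):
    """Net present value of end-of-year cash flows at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical skidder: purchase now, annual operating cost, salvage in year 5
purchase = -180_000
annual = [-35_000] * 5
annual[-1] += 40_000            # salvage value recovered at end of year 5
rate = 0.08

dcf_cost = purchase + npv(annual, rate)
undiscounted_cost = purchase + sum(annual)  # ignores the timing of cash flows
print(f"DCF cost: {dcf_cost:,.0f}  vs undiscounted: {undiscounted_cost:,.0f}")
```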
The research on construction and application of machining process knowledge base
NASA Astrophysics Data System (ADS)
Zhao, Tan; Qiao, Lihong; Qie, Yifan; Guo, Kai
2018-03-01
In order to realize the application of knowledge in machining process design, from the perspective of knowledge application in computer aided process planning (CAPP), a hierarchical knowledge classification structure is established according to the characteristics of the mechanical engineering field. Machining process knowledge is expressed in structured form by means of production rules and object-oriented methods. Three kinds of knowledge base models are constructed according to this representation of machining process knowledge. This paper gives the definition and classification of machining process knowledge, the knowledge model, and the application flow of knowledge-based process design, and demonstrates the main steps of machine tool selection decisions as an application of the knowledge base.
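A minimal sketch of production-rule knowledge representation of the kind described here; the features, thresholds, and process decisions are invented examples, not the paper's rule base.

```python
import operator

# Production rules: IF feature conditions THEN process decision
rules = [
    {"if": {"feature": "hole", "diameter_mm": ("<", 12)},  "then": "drilling"},
    {"if": {"feature": "hole", "diameter_mm": (">=", 12)}, "then": "boring"},
    {"if": {"feature": "plane"},                           "then": "face milling"},
]

ops = {"<": operator.lt, ">=": operator.ge}

def infer(part):
    """Return the first process decision whose conditions the part satisfies."""
    for rule in rules:
        cond = rule["if"]
        if cond.get("feature") != part["feature"]:
            continue
        ok = all(ops[op](part[key], val)
                 for key, (op, val) in cond.items() if key != "feature")
        if ok:
            return rule["then"]
    return None

print(infer({"feature": "hole", "diameter_mm": 8}))  # -> drilling
```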
Programming languages and compiler design for realistic quantum hardware.
Chong, Frederic T; Franklin, Diana; Martonosi, Margaret
2017-09-13
Quantum computing sits at an important inflection point. For years, high-level algorithms for quantum computers have shown considerable promise, and recent advances in quantum device fabrication offer hope of utility. A gap still exists, however, between the hardware size and reliability requirements of quantum computing algorithms and the physical machines foreseen within the next ten years. To bridge this gap, quantum computers require appropriate software to translate and optimize applications (toolflows) and abstraction layers. Given the stringent resource constraints in quantum computing, information passed between layers of software and implementations will differ markedly from in classical computing. Quantum toolflows must expose more physical details between layers, so the challenge is to find abstractions that expose key details while hiding enough complexity.
Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun
2017-02-06
In this paper, we propose a three-dimensional design and evaluation framework and process based on a probabilistic-based motion synthesis algorithm and biomechanical analysis system for the design of the Smith machine and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human-machine-environment model as well as a squat motion synthesis system and biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between a human body and machine or the ground are modeled as joints with constraints at contact points. Next, we generated Smith squat motion using the motion synthesis program based on a Gaussian process regression algorithm with a set of given values for independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and squat motion. We validated the model and algorithm through physical experiments measuring the electromyography (EMG) signals, ground forces, and squat motions as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines.
Zhang, A; Critchley, S; Monsour, P A
2016-12-01
The aim of the present study was to assess the current adoption of cone beam computed tomography (CBCT) and panoramic radiography (PR) machines across Australia. Information regarding registered CBCT and PR machines was obtained from radiation regulators across Australia. The number of X-ray machines was correlated with the population size, the number of dentists, and the gross state product (GSP) per capita, to determine the best fitting regression model(s). In 2014, there were 232 CBCT and 1681 PR machines registered in Australia. Based on absolute counts, Queensland had the largest number of CBCT and PR machines whereas the Northern Territory had the smallest number. However, when based on accessibility in terms of the population size and the number of dentists, the Australian Capital Territory had the most CBCT machines and Western Australia had the most PR machines. The number of X-ray machines correlated strongly with both the population size and the number of dentists, but not with the GSP per capita. In 2014, the ratio of PR to CBCT machines was approximately 7:1. Projected increases in either the population size or the number of dentists could positively impact the adoption of PR and CBCT machines in Australia. © 2016 Australian Dental Association.
Design and implementation of a system for laser assisted milling of advanced materials
NASA Astrophysics Data System (ADS)
Wu, Xuefeng; Feng, Gaocheng; Liu, Xianli
2016-09-01
Laser assisted machining is an effective method for machining advanced materials, with the added benefits of longer tool life and increased material removal rates. While extensive studies have investigated the machining properties of laser assisted milling (LAML), few attempts have been made to extend LAML to machining parts with complex geometric features. A methodology for continuous-path machining for LAML is developed by integrating a rotary and movable table into an ordinary milling machine with a laser beam system. The machining strategy and processing path are investigated to keep the machining path aligned with the laser spot. To keep the material removal temperature above the softening temperature of silicon nitride, a coordinate transformation is applied and the temperature interpolated, establishing a transient thermal model. The temperatures of the laser center and cutting zone are also carefully controlled to achieve optimal machining results and avoid thermal damage. The experiments indicate that the system produces no surface damage as well as good surface roughness, validating the application of this machining strategy and thermal model in the development of a new LAML system for continuous-path processing of silicon nitride. The proposed approach can be easily applied in a LAML system to achieve continuous processing and improve the efficiency of laser assisted machining.
Carnahan, Brian; Meyer, Gérard; Kuntz, Lois-Ann
2003-01-01
Multivariate classification models play an increasingly important role in human factors research. In the past, these models have been based primarily on discriminant analysis and logistic regression. Models developed from machine learning research offer the human factors professional a viable alternative to these traditional statistical classification methods. To illustrate this point, two machine learning approaches--genetic programming and decision tree induction--were used to construct classification models designed to predict whether or not a student truck driver would pass his or her commercial driver license (CDL) examination. The models were developed and validated using the curriculum scores and CDL exam performances of 37 student truck drivers who had completed a 320-hr driver training course. Results indicated that the machine learning classification models were superior to discriminant analysis and logistic regression in terms of predictive accuracy. Actual or potential applications of this research include the creation of models that more accurately predict human performance outcomes.
Prediction of drug synergy in cancer using ensemble-based machine learning techniques
NASA Astrophysics Data System (ADS)
Singh, Harpreet; Rana, Prashant Singh; Singh, Urvinder
2018-04-01
Drug synergy prediction plays a significant role in the medical field for inhibiting specific cancer agents. It can be developed as a pre-processing tool for therapeutic success. Different drug-drug interactions can be examined through their drug synergy scores, which calls for efficient regression-based machine learning approaches that minimize the prediction error. Numerous machine learning techniques, such as neural networks, support vector machines, random forests, LASSO, and Elastic Nets, have been used in the past to meet this requirement. However, individually these techniques do not provide significant accuracy in predicting drug synergy scores. Therefore, the primary objective of this paper is to design a neuro-fuzzy-based ensembling approach. To achieve this, nine well-known machine learning techniques were implemented on the drug synergy data. Based on the accuracy of each model, the four techniques with the highest accuracy were selected to develop an ensemble-based machine learning model: random forest, Fuzzy Rules Using Genetic Cooperative-Competitive Learning (GFS.GCCL), the Adaptive-Network-Based Fuzzy Inference System (ANFIS), and the Dynamic Evolving Neural-Fuzzy Inference System (DENFIS). Ensembling is achieved by a biased weighted aggregation (i.e., adding more weight to models with higher prediction scores) of the data predicted by the selected models. The proposed and existing machine learning techniques were evaluated on drug synergy score data. The comparative analysis reveals that the proposed method outperforms the others in terms of accuracy, root mean square error, and coefficient of correlation.
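The biased weighted aggregation step can be sketched in a few lines: each base model's prediction is weighted in proportion to its validation score. The scores and predictions below are placeholders, not values from the paper.

```python
# Minimal sketch of biased weighted aggregation: more accurate models
# contribute more to the ensemble output.
import numpy as np

def ensemble_predict(predictions, scores):
    """predictions: (n_models, n_samples); scores: per-model accuracy."""
    w = np.asarray(scores, dtype=float)
    w /= w.sum()                        # normalize weights to sum to 1
    return w @ np.asarray(predictions)  # weighted average per sample

# Hypothetical outputs of RF, GFS.GCCL, ANFIS, DENFIS on two samples.
preds = [[0.61, 0.20], [0.55, 0.25], [0.70, 0.18], [0.66, 0.22]]
scores = [0.92, 0.88, 0.90, 0.86]       # hypothetical validation accuracies
print(ensemble_predict(preds, scores))
```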
Zhao, Di; Weng, Chunhua
2011-10-01
In this paper, we propose a novel method that combines PubMed knowledge and Electronic Health Records to develop a weighted Bayesian Network Inference (BNI) model for pancreatic cancer prediction. We selected 20 common risk factors associated with pancreatic cancer and used PubMed knowledge to weigh the risk factors. A keyword-based algorithm was developed to extract and classify PubMed abstracts into three categories that represented positive, negative, or neutral associations between each risk factor and pancreatic cancer. Then we designed a weighted BNI model by adding the normalized weights into a conventional BNI model. We used this model to extract the EHR values for patients with or without pancreatic cancer, which then enabled us to calculate the prior probabilities for the 20 risk factors in the BNI. The software iDiagnosis was designed to use this weighted BNI model for predicting pancreatic cancer. In an evaluation using a case-control dataset, the weighted BNI model significantly outperformed the conventional BNI and two other classifiers (k-Nearest Neighbor and Support Vector Machine). We conclude that the weighted BNI using PubMed knowledge and EHR data shows remarkable accuracy improvement over existing representative methods for pancreatic cancer prediction. Copyright © 2011 Elsevier Inc. All rights reserved.
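A toy sketch of the keyword-based weighting idea, assuming a simple positive/negative keyword rule and a normalized positive fraction as the weight; the keywords, corpus, and neutral fallback are illustrative inventions, not the authors' algorithm:

```python
# Hedged sketch: bin abstracts as positive/negative per risk factor and turn
# the counts into a normalized weight for the Bayesian network.
def risk_factor_weight(abstracts, factor):
    pos = sum(1 for a in abstracts if factor in a and "increased risk" in a)
    neg = sum(1 for a in abstracts if factor in a and "no association" in a)
    total = pos + neg
    return pos / total if total else 0.5  # neutral prior when no evidence

abstracts = [
    "smoking associated with increased risk of pancreatic cancer",
    "coffee intake showed no association with pancreatic cancer",
    "smoking: increased risk confirmed in cohort study",
]
print(risk_factor_weight(abstracts, "smoking"))  # -> 1.0 on this toy corpus
```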
Performance evaluation of the croissant production line with reparable machines
NASA Astrophysics Data System (ADS)
Tsarouhas, Panagiotis H.
2015-03-01
In this study, analytical probability models were developed for a bufferless automated serial production system consisting of n machines in series with a common transfer mechanism and control system. Both the time to failure and the time to repair a failure are assumed to follow exponential distributions. Applying these models, the effect of system parameters on system performance was studied in an actual croissant production line. The production line consists of six workstations with different numbers of reparable machines in series. Mathematical models of the croissant production line were developed using a Markov process. The strength of this study lies in the classification of the whole system into states representing failures of the different machines. Failure and repair data from the actual production environment were used to estimate the reliability and maintainability of each machine, each workstation, and the entire line based on the analytical models. The analysis provides useful insight into the system's behaviour, helps to find inherent design faults, and suggests optimal modifications to upgrade the system and improve its performance.
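The Markov modelling idea can be illustrated on a single machine with exponential failure and repair: the steady-state probabilities of the generator matrix give the availability. The rates below are arbitrary.

```python
# Sketch: exponential failure/repair rates define a continuous-time Markov
# chain; steady-state probabilities come from the generator matrix Q.
import numpy as np

lam, mu = 0.05, 2.0          # failure and repair rates of one machine (1/h)
Q = np.array([[-lam,  lam],  # state 0: machine up
              [  mu,  -mu]]) # state 1: machine down (under repair)

# Solve pi @ Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"availability = {pi[0]:.4f}")   # analytically mu/(lam+mu) ~ 0.9756
```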
NASA Astrophysics Data System (ADS)
Chetan; Narasimhulu, A.; Ghosh, S.; Rao, P. V.
2015-07-01
The machinability of titanium is poor due to its low thermal conductivity and high chemical affinity. The low thermal conductivity of titanium alloys is undesirable for the cutting tool, causing extensive tool wear. The main task of this work is to identify the various wear mechanisms involved during machining of the Ti alloy Ti6Al4V and to formulate an analytical mathematical tool wear model for it. Experiments show that adhesive and diffusion wear are the dominant wear mechanisms when machining this Ti alloy with a PVD-coated tungsten carbide tool. It is also clear from the experiments that tool wear increases with cutting parameters such as speed, feed and depth of cut. The wear model was validated by dry machining of the Ti alloy at suitable cutting conditions. The model is able to predict the flank wear suitably under gentle cutting conditions.
8. VIEW, LOOKING NORTH, OF INTERLOCKING MACHINE WITH ORIGINAL MODEL BOARD IN CENTER AND MODEL SEMAPHORE SIGNALS (AT TOP OF PHOTOGRAPH), THIRD FLOOR - South Station Tower No. 1 & Interlocking System, Dewey Square, Boston, Suffolk County, MA
A strategy to apply machine learning to small datasets in materials science
NASA Astrophysics Data System (ADS)
Zhang, Ying; Ling, Chen
2018-12-01
There is growing interest in applying machine learning techniques in materials science research. However, although it is recognized that materials datasets are typically smaller and sometimes more diverse than those in other fields, the influence of the availability of materials data on the training of machine learning models has not yet been studied, which hinders the establishment of accurate predictive rules from small materials datasets. Here we analyzed the fundamental interplay between the availability of materials data and the predictive capability of machine learning models. Instead of affecting model precision directly, the effect of data size is mediated by the degrees of freedom (DoF) of the model, resulting in an association between precision and DoF. The appearance of this precision-DoF association signals underfitting and is characterized by a large prediction bias, which consequently restricts accurate prediction in unknown domains. We propose incorporating a crude estimation of the property into the feature space when establishing ML models from small materials datasets, which increases the accuracy of prediction without the cost of higher DoF. In three case studies, predicting the band gap of binary semiconductors, lattice thermal conductivity, and the elastic properties of zeolites, integrating the crude estimation effectively boosted the predictive capability of the machine learning models to state-of-the-art levels, demonstrating the generality of the proposed strategy for constructing accurate machine learning models from small materials datasets.
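A minimal sketch of the proposed strategy, under the assumption that some cheap physics-based estimator of the target exists: the crude estimate is appended as an extra feature so the model only has to learn the correction. The data here are synthetic.

```python
# Sketch: augment a small dataset with a crude estimate of the target so the
# learner models only the residual correction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(40, 3))            # small "materials" dataset
crude = X[:, 0] * 2.0 + X[:, 1]                # stand-in crude estimator
y = crude + 0.3 * np.sin(5 * X[:, 2])          # true property = crude + correction

X_aug = np.column_stack([X, crude])            # crude estimate as a feature
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_aug, y)
print(model.score(X_aug, y))
```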
NASA Astrophysics Data System (ADS)
Hadi Sutrisno, Himawan; Kiswanto, Gandjar; Istiyanto, Jos
2017-06-01
Rough machining aims to shape a workpiece towards its final form. This process takes up a large proportion of the machining time, since the bulk of the material is removed, and thus strongly affects the total machining time. For certain geometries, rough machining has limitations, especially on surfaces such as turbine blades and impellers. CBV evaluation is one concept used to detect areas admissible to the machining process. While previous research detected the CBV area using pairs of normal vectors, in this research we simplify the process by detecting the CBV area with a slicing line for each point cloud formed. The simulation comprises three steps: (1) triangulation of the CAD design model, (2) generation of CC points from the point cloud, and (3) application of the slicing-line method to evaluate each point-cloud position (under the CBV or outside the CBV). The result of this evaluation method can be used as a tool for orientation set-up at each CC point position of feasible areas in rough machining.
Stability Assessment of a System Comprising a Single Machine and Inverter with Scalable Ratings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Lin, Yashen; Gevorgian, Vahan
Synchronous machines have traditionally acted as the foundation of large-scale electrical infrastructures and their physical properties have formed the cornerstone of system operations. However, with the increased integration of distributed renewable resources and energy-storage technologies, there is a need to systematically acknowledge the dynamics of power-electronics inverters - the primary energy-conversion interface in such systems - in all aspects of modeling, analysis, and control of the bulk power network. In this paper, we assess the properties of coupled machine-inverter systems by studying an elementary system comprised of a synchronous generator, three-phase inverter, and a load. The inverter model is formulated such that its power rating can be scaled continuously across power levels while preserving its closed-loop response. Accordingly, the properties of the machine-inverter system can be assessed for varying ratios of machine-to-inverter power ratings. After linearizing the model and assessing its eigenvalues, we show that system stability is highly dependent on the inverter current controller and machine exciter, thus uncovering a key concern with mixed machine-inverter systems and motivating the need for next-generation grid-stabilizing inverter controls.
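The eigenvalue assessment step reduces to checking the spectrum of the linearized state matrix. A minimal sketch with an arbitrary stand-in matrix (not the paper's machine-inverter model):

```python
# Sketch of small-signal assessment: linearize to x' = A x, then the system
# is stable iff every eigenvalue has a negative real part.
import numpy as np

A = np.array([[-1.0,  50.0,   0.0],
              [-50.0, -2.0,  10.0],
              [  0.0, -10.0, -0.5]])   # arbitrary stand-in state matrix

eigvals = np.linalg.eigvals(A)
stable = np.all(eigvals.real < 0)      # all eigenvalues in the left half-plane
print(eigvals, "stable" if stable else "unstable")
```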
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-06-19
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of the single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four common single kernel functions (Gaussian, polynomial, sigmoid, and wavelet kernels) are utilized to constitute different composite kernel functions. Moreover, the method is compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
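A hedged sketch of the weighted multiple-kernel KELM core: the composite kernel is a weighted sum of base kernels, and the output weights follow the standard KELM closed form beta = (I/C + K)^-1 Y. Here the kernel weights are fixed by hand, whereas the paper optimizes them (together with kernel and regularization parameters) using QPSO.

```python
# Sketch of a weighted multiple-kernel ELM with a fixed kernel combination.
import numpy as np

def gaussian(Xa, Xb, gamma=1.0):
    d2 = ((Xa[:, None, :] - Xb[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly(Xa, Xb, degree=2):
    return (Xa @ Xb.T + 1.0) ** degree

def composite(Xa, Xb, w=(0.7, 0.3)):    # weights would be tuned by QPSO
    return w[0] * gaussian(Xa, Xb) + w[1] * poly(Xa, Xb)

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))
Y = rng.integers(0, 2, size=(30, 1)).astype(float)
C = 10.0                                # regularization parameter
beta = np.linalg.solve(np.eye(len(X)) / C + composite(X, X), Y)
scores = composite(X[:5], X) @ beta     # decision values for 5 samples
print(scores.ravel())
```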
Network Modeling and Energy-Efficiency Optimization for Advanced Machine-to-Machine Sensor Networks
Jung, Sungmo; Kim, Jong Hyun; Kim, Seoksoo
2012-01-01
Wireless machine-to-machine sensor networks with multiple radio interfaces are expected to have several advantages, including high spatial scalability, low event detection latency, and low energy consumption. Here, we propose a network model design method involving network approximation and an optimized multi-tiered clustering algorithm that maximizes node lifespan by minimizing energy consumption in a non-uniformly distributed network. Simulation results show that the cluster scales and network parameters determined with the proposed method facilitate a more efficient performance compared to existing methods. PMID:23202190
Machine learning for medical images analysis.
Criminisi, A
2016-10-01
This article discusses the application of machine learning for the analysis of medical images. Specifically: (i) We show how a special type of learning models can be thought of as automatically optimized, hierarchically-structured, rule-based algorithms, and (ii) We discuss how the issue of collecting large labelled datasets applies to both conventional algorithms as well as machine learning techniques. The size of the training database is a function of model complexity rather than a characteristic of machine learning methods. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
Spatial-temporal modeling of malware propagation in networks.
Chen, Zesheng; Ji, Chuanyi
2005-09-01
Network security is an important task of network management. One threat to network security is malware (malicious software) propagation. One type of malware is called topological scanning that spreads based on topology information. The focus of this work is on modeling the spread of topological malwares, which is important for understanding their potential damages, and for developing countermeasures to protect the network infrastructure. Our model is motivated by probabilistic graphs, which have been widely investigated in machine learning. We first use a graphical representation to abstract the propagation of malwares that employ different scanning methods. We then use a spatial-temporal random process to describe the statistical dependence of malware propagation in arbitrary topologies. As the spatial dependence is particularly difficult to characterize, the problem becomes how to use simple (i.e., biased) models to approximate the spatially dependent process. In particular, we propose the independent model and the Markov model as simple approximations. We conduct both theoretical analysis and extensive simulations on large networks using both real measurements and synthesized topologies to test the performance of the proposed models. Our results show that the independent model can capture temporal dependence and detailed topology information and, thus, outperforms the previous models, whereas the Markov model incorporates a certain spatial dependence and, thus, achieves a greater accuracy in characterizing both transient and equilibrium behaviors of malware propagation.
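The independent model can be sketched directly: each node's infection probability is updated assuming its neighbours' states are independent. The graph and infection rate below are arbitrary.

```python
# Toy sketch of the independent-model approximation for topological spread:
# P(node i escapes infection) = prod over neighbours j of (1 - beta * p_j).
import numpy as np

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)   # arbitrary topology
beta = 0.3                                    # per-contact infection probability
p = np.array([1.0, 0.0, 0.0, 0.0])            # node 0 initially infected

for t in range(5):
    not_infected = np.prod(1.0 - beta * adj * p[None, :], axis=1)
    p = 1.0 - (1.0 - p) * not_infected        # once infected, stays infected
    print(t, np.round(p, 3))
```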
Machine learning for epigenetics and future medical applications
Holder, Lawrence B.; Haque, M. Muksitul; Skinner, Michael K.
2017-01-01
Understanding epigenetic processes holds immense promise for medical applications. Advances in Machine Learning (ML) are critical to realize this promise. Previous studies used epigenetic data sets associated with the germline transmission of epigenetic transgenerational inheritance of disease, together with novel ML approaches, to predict genome-wide locations of critical epimutations. A combination of Active Learning (ACL) and Imbalanced Class Learning (ICL) was used to address past problems with ML, to develop a more efficient feature selection process, and to address the imbalance problem present in all genomic data sets. These results suggest the power of this novel ML approach and our ability to predict epigenetic phenomena and associated disease. The current approach requires extensive computation of features over the genome. A promising new approach is to introduce Deep Learning (DL) for the generation and simultaneous computation of novel genomic features tuned to the classification task. This approach can be used with any genomic or biological data set applied to medicine. The application of molecular epigenetic data in advanced machine learning analysis to medicine is the focus of this review. PMID:28524769
Thermal-mechanical modeling of laser ablation hybrid machining
NASA Astrophysics Data System (ADS)
Matin, Mohammad Kaiser
2001-08-01
Hard, brittle and wear-resistant materials like ceramics pose a problem when machined using conventional processes. Machining ceramics even with a diamond cutting tool is very difficult and costly. Near net-shape processes, like laser evaporation, produce micro-cracks that require extra finishing. It is thus anticipated that ceramic machining will have to be explored with new techniques before ceramic materials become commonplace. This investigation presents numerical simulations, using the finite element method, of the thermal and mechanical modeling of simultaneous material removal from hard-to-machine materials by laser ablation combined with conventional tool cutting. The model is formulated on a two-dimensional, planar computational domain. The simulated process, acronymed LAHM (Laser Ablation Hybrid Machining), uses laser energy for two purposes. The first is to remove material by ablation. The second is to heat the unremoved material that lies below the ablated material in order to "soften" it. The softened material is then simultaneously removed by conventional machining. The complete solution determines the temperature distribution and stress contours within the material and tracks the moving boundary that arises from material ablation. The temperature distribution is used to determine the distance below the phase-change surface where sufficient "softening" has occurred, so that a cutting tool may remove additional material. The model used for tracking the ablative surface does not assume an isothermal melt phase (e.g., a Stefan problem) for laser ablation. Both surface absorption and volume absorption of laser energy as a function of depth have been considered in the models. From the thermal and mechanical point of view, LAHM is a complex machining process involving large deformations at high strain rates, thermal effects of the laser, removal of material, and contact between workpiece and tool. The theoretical formulation for solving the thermal-mechanical problem of LAHM by the finite element method is presented. The thermal formulation is incorporated in user-defined subroutines called by ABAQUS/Standard. The mechanical portion is modeled using ABAQUS/Explicit's general capabilities for modeling interactions involving contact and separation. The FEA simulations showed that the cutting force decreases considerably in both the LAHM surface absorption (LAHM-SA) and LAHM volume absorption (LAHM-VA) models relative to the LAM model. It was observed that the heat-affected zone (HAZ) can be expanded or narrowed depending on the laser speed and power; the cutting force is minimal at the farthest extent of the HAZ. In both models the laser ablates material, reducing material stiffness as well as relaxing the thermal stress. The computed stresses show compressive yield stress just below the ablated surface and chip. Failure occurs by conventional cutting where the tensile stress exceeds the tensile strength of the material at that temperature. In this hybrid machining process the advantages of both individual machining processes are realized.
Critical Speed of The Glass Glue Machine's Creep and Influence Factors Analysis
NASA Astrophysics Data System (ADS)
Yang, Jianxi; Huang, Jian; Wang, Liying; Shi, Jintai
When an automatic glass glue machine operates, two problems arise: vibration at start-up and stick-slip motion. To address these problems, a model of the glue machine for studying stick-slip is established. Based on the dynamic description of this model, a mathematical expression is derived. The expression for the critical creep speed is constructed with reference to existing research results, and a new conclusion is reached. The influence of stiffness, damping, mass, velocity, and the difference between the static and kinetic coefficients of friction is analyzed through Matlab simulation. The research shows that a reasonable choice of these parameters can alleviate the creep phenomenon, supplying theoretical evidence for improving the machine's motion stability.
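A toy stick-slip simulation along the lines of the model described: a driven mass with static/kinetic Coulomb friction. All parameters are invented; reducing the gap between the friction coefficients or increasing damping suppresses the limit cycle, consistent with the paper's conclusion.

```python
# Sketch: mass pulled through a spring at constant drive speed, with
# static/kinetic Coulomb friction producing stick-slip cycles.
import numpy as np

m, k, c = 1.0, 100.0, 0.5          # mass, stiffness, damping (assumed)
mu_s, mu_k, N = 0.4, 0.3, 9.81     # friction coefficients, normal force
v_drive, dt = 0.01, 1e-4           # drive speed, time step

x, v, xd = 0.0, 0.0, 0.0
for step in range(200000):
    xd += v_drive * dt                         # driven end of the spring
    spring = k * (xd - x) - c * v
    if abs(v) < 1e-6 and abs(spring) <= mu_s * N:
        v = 0.0                                # stick: static friction holds
    else:
        friction = -np.sign(v if v else spring) * mu_k * N
        v += (spring + friction) / m * dt      # slip: kinetic friction acts
    x += v * dt
print(f"final position: {x:.4f} m")
```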
Prostate Cancer Probability Prediction By Machine Learning Technique.
Jović, Srđan; Miljković, Milica; Ivanović, Miljan; Šaranović, Milena; Arsić, Milena
2017-11-26
The main goal of the study was to explore the possibility of prostate cancer prediction by machine learning techniques. To improve the survival probability of prostate cancer patients it is essential to build suitable prediction models; with a relevant prediction in hand, it is easier to create a suitable treatment based on the prediction results. Machine learning techniques are the most common techniques for creating such predictive models. Therefore, in this study several machine learning techniques were applied and compared, and the obtained results were analyzed and discussed. It was concluded that machine learning techniques can be used for relevant prediction of prostate cancer.
Gortais, Bernard
2003-01-01
In a given social context, artistic creation comprises a set of processes, which relate to the activity of the artist and the activity of the spectator. Through these processes we see and understand that the world is vaster than it is said to be. Artistic processes are mediated experiences that open up the world. A successful work of art expresses a reality beyond actual reality: it suggests an unknown world using the means and the signs of the known world. Artistic practices incorporate the means of creation developed by science and technology and change forms as they change. Artists and the public follow different processes of abstraction at different levels, in the definition of the means of creation, of representation and of perception of a work of art. This paper examines how the processes of abstraction are used within the framework of the visual arts and abstract painting, which appeared during a period of growing importance for the processes of abstraction in science and technology, at the beginning of the twentieth century. The development of digital platforms and new man-machine interfaces allow multimedia creations. This is performed under the constraint of phases of multidisciplinary conceptualization using generic representation languages, which tend to abolish traditional frontiers between the arts: visual arts, drama, dance and music. PMID:12903659
Dynamic modeling of brushless dc motors for aerospace actuation
NASA Technical Reports Server (NTRS)
Demerdash, N. A.; Nehl, T. W.
1980-01-01
A discrete time model for simulation of the dynamics of samarium cobalt-type permanent magnet brushless dc machines is presented. The simulation model includes modeling of the interaction between these machines and their attached power conditioners, which are transistorized conditioner units. This model is part of an overall discrete-time analysis of the dynamic performance of electromechanical actuators, which was conducted as part of prototype development of such actuators studied and built for NASA-Johnson Space Center as a prospective alternative to hydraulic actuators presently used in shuttle orbiter applications. The resulting numerical simulations of the various machine and power conditioner current and voltage waveforms gave excellent correlation to the actual waveforms collected from experimental testing of the hardware. These results, numerical and experimental, are presented here for machine motoring, regeneration and dynamic braking modes. Application of the resulting model to the determination of machine current and torque profiles during closed-loop actuator operation was also analyzed, and the results are given here in light of an overall view of the actuator system components. The applicability of this method of analysis to design optimization and trouble-shooting in such prototype development is also discussed in light of the results at hand.
Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L
2017-01-01
Background To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient’s weight kept rising in the past year). This process becomes infeasible with limited budgets. Objective This study’s goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. Methods This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care management allocation and pilot one model with care managers; and (3) perform simulations to estimate the impact of adopting Auto-ML on US patient outcomes. Results We are currently writing Auto-ML’s design document. We intend to finish our study by around the year 2022. Conclusions Auto-ML will generalize to various clinical prediction/classification problems. With minimal help from data scientists, health care researchers can use Auto-ML to quickly build high-quality models. This will boost wider use of machine learning in health care and improve patient outcomes. PMID:28851678
Naderi, Peyman
2016-09-01
The inter-turn short-circuit fault in the Cage-Rotor Induction Machine (CRIM) is studied in this paper, taking local saturation into account. In order to capture the exact behavior of the machine, a Magnetic-Equivalent-Circuit (MEC) model and a nonlinear B-H curve are used to provide insight into the machine model and the saturation effect, respectively. Electrical machines generally operate near their saturation zone due to design necessities. Hence, when the machine is exposed to a fault such as a short circuit or eccentricity, it operates within its saturation zone; time and space harmonics therefore combine, generating current and torque harmonics, a phenomenon that cannot be explored when saturation is neglected. Moreover, an inter-turn short circuit may lead to local saturation, and this occurrence is studied in this paper using the MEC model. To achieve these objectives, two-pole and four-pole machines are modeled as two samples, and the machines' performance is analyzed in healthy and faulty cases, with and without the saturation effect. A novel strategy is proposed to precisely detect the inter-turn short-circuit fault from the stator's line current signatures, and the accuracy of the proposed method is verified by experimental results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
ezTag: tagging biomedical concepts via interactive learning.
Kwon, Dongseop; Kim, Sun; Wei, Chih-Hsuan; Leaman, Robert; Lu, Zhiyong
2018-05-18
Recently, advanced text-mining techniques have been shown to speed up manual data curation by providing human annotators with automated pre-annotations generated by rules or machine learning models. Due to the limited training data available, however, current annotation systems primarily focus only on common concept types such as genes or diseases. To support annotating a wide variety of biological concepts with or without pre-existing training data, we developed ezTag, a web-based annotation tool that allows curators to perform annotation and provide training data with humans in the loop. ezTag supports both abstracts in PubMed and full-text articles in PubMed Central. It also provides lexicon-based concept tagging as well as the state-of-the-art pre-trained taggers such as TaggerOne, GNormPlus and tmVar. ezTag is freely available at http://eztag.bioqrator.org.
NASA Technical Reports Server (NTRS)
Yan, Jerry C.
1987-01-01
In concurrent systems, a major responsibility of the resource management system is to decide how the application program is to be mapped onto the multi-processor. Instead of using abstract program and machine models, a generate-and-test framework known as 'post-game analysis', based on data gathered during program execution, is proposed. Each iteration consists of (1) (a simulation of) an execution of the program; (2) analysis of the gathered data; and (3) the proposal of a new mapping with a smaller execution time. Heuristics are applied to predict execution-time changes in response to small perturbations of the current mapping. An initial experiment was carried out using simple strategies on 'pipeline-like' applications. The results obtained from four simple strategies demonstrate that, for this kind of application, even simple strategies can produce acceptable speed-up within a small number of iterations.
A new technique for simulating composite material
NASA Technical Reports Server (NTRS)
Volakis, John L.
1991-01-01
This project dealt with the development of new methodologies and algorithms for the multi-spectrum electromagnetic characterization of large-scale nonmetallic airborne vehicles and structures. A robust, low-memory, and accurate methodology was developed which is particularly suited to modern machine architectures. It is a hybrid finite element method that combines two well-known numerical solution approaches: the finite element method for modeling volumes, and the boundary integral method, which yields exact boundary conditions for terminating the finite element mesh. In addition, a variety of high-frequency results were generated (such as diffraction coefficients for impedance surfaces and material layers) and a class of boundary conditions was developed which holds promise for more efficient simulations. During the course of this project, nearly 25 detailed research reports were generated along with an equal number of journal papers. The reports, papers, and journal articles are listed in the appendices along with their abstracts.
Computational consciousness: building a self-preserving organism.
Barros, Allan Kardec
2010-01-01
Consciousness has been a subject of growing interest in the neuroscience community. However, building machine models of it is quite challenging, as it involves many characteristics and properties of the human brain that are poorly defined or very abstract. Here I propose to use information theory (IT) as a mathematical framework for understanding consciousness; for this reason, I use the term "computational". This work is grounded in recent results on the use of IT to understand how the cortex codes information, where redundancy reduction plays a fundamental role. Basically, I propose a system, here called an "organism", whose strategy is to extract the maximal amount of information from the environment in order to survive. To illustrate the proposed framework, I show a simple organism composed of a single neuron which adapts itself to the outside dynamics by taking into account its internal state, whose perception is understood here to be related to "feelings".
NASA Astrophysics Data System (ADS)
Mia, Mozammel; Al Bashir, Mahmood; Dhar, Nikhil Ranjan
2016-10-01
Hard turning is increasingly employed in machining to replace time-consuming conventional turning followed by grinding. An excessive amount of tool wear in hard turning is one of the main hurdles to be overcome. Many researchers have developed tool wear models, but most apply to a particular work-tool-environment combination; no aggregate model exists that can predict the amount of principal flank wear for a given machining time. Here, an empirical model of principal flank wear (VB) has been developed for different workpiece hardnesses (HRC40, HRC48 and HRC56) when turning with coated carbide inserts of different configurations (SNMM and SNMG) under both dry and high-pressure coolant conditions. Unlike other models, this model uses dummy variables along with the base empirical equation to capture the effect of changes in the input conditions on the response. The base empirical equation for principal flank wear is formulated by fitting the Exponential Associate Function to the experimental results. The coefficient of each dummy variable reflects the shift of the response from one machining condition to another and is determined by simple linear regression. The independent cutting parameters (speed, feed, depth of cut) are kept constant while formulating and analyzing this model. The developed model is validated against different sets of machining responses in turning hardened medium-carbon steel with coated carbide inserts. For any particular set, the model can be used to predict the amount of principal flank wear for a given machining time. Since the predicted results exhibit good agreement with the experimental data and the average percentage error is below 10%, this model can be used to predict the principal flank wear for the stated conditions.
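Assuming the Exponential Associate Function takes the common form VB = a(1 - exp(-t/b)), the model with a dummy-variable shift can be sketched as a curve fit; the data, the coolant dummy, and the parameter values are synthetic, not the paper's.

```python
# Sketch: exponential-associate flank wear curve plus a dummy-variable shift
# for a change of machining condition (e.g., dry -> high-pressure coolant).
import numpy as np
from scipy.optimize import curve_fit

def wear(X, a, b, c):
    t, dummy = X                       # machining time, condition indicator
    return a * (1.0 - np.exp(-t / b)) + c * dummy

t = np.tile(np.linspace(1, 30, 15), 2)
d = np.repeat([0.0, 1.0], 15)          # 0 = dry, 1 = coolant (hypothetical)
vb = (0.30 * (1 - np.exp(-t / 12)) - 0.05 * d
      + np.random.default_rng(2).normal(0, 0.005, t.size))

(a, b, c), _ = curve_fit(wear, (t, d), vb, p0=(0.3, 10.0, 0.0))
print(f"VB(t) = {a:.3f}(1 - exp(-t/{b:.1f})) {c:+.3f}*D")
```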
Park, Eunjeong; Chang, Hyuk-Jae; Nam, Hyo Suk
2017-04-18
The pronator drift test (PDT), a neurological examination, is widely used in clinics to measure motor weakness of stroke patients. The aim of this study was to develop a PDT tool with machine learning classifiers to detect stroke symptoms based on quantification of proximal arm weakness using inertial sensors and signal processing. We extracted features of drift and pronation from accelerometer signals of wearable devices on the inner wrists of 16 stroke patients and 10 healthy controls. A signal processing and feature selection approach was applied to discriminate the PDT features used to classify stroke patients. A series of machine learning techniques, namely support vector machine (SVM), radial basis function network (RBFN), and random forest (RF), were implemented to discriminate stroke patients from controls with leave-one-out cross-validation. Signal processing by the PDT tool extracted a total of 12 PDT features from the sensors. Feature selection abstracted the major attributes from the 12 PDT features to elucidate the dominant characteristics of proximal weakness of stroke patients using machine learning classification. Our proposed PDT classifiers had an area under the receiver operating characteristic curve (AUC) of .806 (SVM), .769 (RBFN), and .900 (RF) without feature selection; feature selection improved the AUCs to .913 (SVM), .956 (RBFN), and .975 (RF), representing an average performance enhancement of 15.3%. Sensors and machine learning methods can reliably detect stroke signs and quantify proximal arm weakness. Our proposed solution will facilitate pervasive monitoring of stroke patients. ©Eunjeong Park, Hyuk-Jae Chang, Hyo Suk Nam. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 18.04.2017.
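The evaluation protocol, leave-one-out cross-validation of several classifiers on the extracted features, can be sketched as follows; the feature matrix and labels are random placeholders.

```python
# Sketch: leave-one-out cross-validation of an SVM and a random forest on
# PDT-style features (placeholders, not the study's data).
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(26, 12))          # 26 subjects x 12 PDT features
y = np.array([1] * 16 + [0] * 10)      # 16 patients, 10 controls

for name, clf in [("SVM", SVC()),
                  ("RF", RandomForestClassifier(random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(name, round(acc, 3))
```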
Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasan, IIftekhar; Husain, Tausif; Uddin, Md Wasi
2015-09-02
This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets (PM), stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective designs of TFM as compared to finite element solvers, which are numerically intensive and require more computation time. A single-phase, 1 kW, 400 rpm machine is analytically modeled and its resulting flux distribution, no-load EMF and torque verified with Finite Element Analysis (FEA). The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.
Modeling of heat transfer in compacted machining chips during friction consolidation process
NASA Astrophysics Data System (ADS)
Abbas, Naseer; Deng, Xiaomin; Li, Xiao; Reynolds, Anthony
2018-04-01
The current study aims to provide an understanding of the heat transfer process in compacted aluminum alloy AA6061 machining chips during the friction consolidation process (FCP) through experimental investigation, mathematical modelling, and numerical simulation. Compaction and friction consolidation of machining chips is the first stage of the Friction Extrusion Process (FEP), a novel method for recycling machining chips into useful products such as wire. In this study, the compacted machining chips are modelled as a continuum whose material properties vary with density during friction consolidation. Based on density- and temperature-dependent thermal properties, the temperature field in the chip material and process chamber caused by frictional heating during the friction consolidation process is predicted. The predicted temperature field is found to compare well with temperature measurements at select points where such measurements could be made using thermocouples.
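A minimal 1-D sketch of the modelling idea: explicit finite differences with a conductivity that varies with local density, standing in for the paper's density- and temperature-dependent properties. Geometry, property values, and boundary conditions are assumptions.

```python
# Sketch: 1-D transient conduction with density-dependent conductivity.
import numpy as np

n, dx, dt = 50, 1e-3, 1e-3              # nodes, spacing (m), time step (s)
rho = np.linspace(1500, 2700, n)        # compacted-chip density profile (kg/m^3)
k = 160.0 * rho / 2700.0                # conductivity scaled with density (assumed)
cp = 900.0                              # specific heat (J/kg/K, assumed)
T = np.full(n, 25.0)
T[0] = 400.0                            # frictional heating at the tool face

for step in range(2000):
    q = k[:-1] * np.diff(T) / dx        # heat flux between adjacent nodes
    T[1:-1] += dt * np.diff(q) / (rho[1:-1] * cp * dx)
    T[0] = 400.0                        # fixed-temperature boundary
print(np.round(T[::10], 1))
```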
NASA Astrophysics Data System (ADS)
Peng, Chong; Wang, Lun; Liao, T. Warren
2015-10-01
Chatter has become a critical factor hindering machining quality and productivity in machining processes. To avoid cutting chatter, a new method based on a dynamic cutting force simulation model and a support vector machine (SVM) is presented for the prediction of chatter stability lobes. The cutting force is selected as the monitoring signal, and wavelet energy entropy theory is used to extract the feature vectors. A support vector machine is constructed using the MATLAB LIBSVM toolbox for pattern classification based on the feature vectors derived from the experimental cutting data. Combined with the dynamic cutting force simulation model, the stability lobe diagram (SLD) can then be estimated. Finally, the predicted results are compared with existing methods such as the zero-order analytical (ZOA) and semi-discretization (SD) methods, as well as actual cutting experiments, to confirm the validity of the new method.
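A hedged sketch of the feature-extraction step, wavelet energy entropy of a force signal feeding an SVM; the wavelet family, decomposition level, and signals are assumptions, and the paper's MATLAB/LIBSVM pipeline is approximated here with PyWavelets and scikit-learn.

```python
# Sketch: wavelet energy entropy of a cutting-force signal as a chatter
# feature, classified with an SVM.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energy_entropy(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()           # relative energy per band
    return -np.sum(p * np.log(p + 1e-12))   # Shannon entropy of the bands

t = np.linspace(0, 1, 1024)
stable_sig = np.sin(2 * np.pi * 50 * t)                      # synthetic stable cut
chatter_sig = stable_sig + 0.8 * np.sin(2 * np.pi * 333 * t) # added chatter band

X = [[wavelet_energy_entropy(stable_sig)],
     [wavelet_energy_entropy(chatter_sig)]]
clf = SVC().fit(X, [0, 1])                  # toy two-sample training set
```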
Simulation model of a single-stage lithium bromide-water absorption cooling unit
NASA Technical Reports Server (NTRS)
Miao, D.
1978-01-01
A computer model of a LiBr-H2O single-stage absorption machine was developed. The model, utilizing a given set of design data such as water flow rates and the inlet or outlet temperatures of these flows, but without knowledge of the interior characteristics of the machine (heat transfer rates and surface areas), can be used to predict or simulate off-design performance. Results from 130 off-design cases for a given commercial machine agree with the published data within 2 percent.
Ross, Elsie Gyang; Shah, Nigam H; Dalman, Ronald L; Nead, Kevin T; Cooke, John P; Leeper, Nicholas J
2016-11-01
A key aspect of the precision medicine effort is the development of informatics tools that can analyze and interpret "big data" sets in an automated and adaptive fashion while providing accurate and actionable clinical information. The aims of this study were to develop machine learning algorithms for the identification of disease and the prognostication of mortality risk and to determine whether such models perform better than classical statistical analyses. Focusing on peripheral artery disease (PAD), patient data were derived from a prospective, observational study of 1755 patients who presented for elective coronary angiography. We employed multiple supervised machine learning algorithms and used diverse clinical, demographic, imaging, and genomic information in a hypothesis-free manner to build models that could identify patients with PAD and predict future mortality. Comparison was made to standard stepwise logistic regression models. Our machine-learned models outperformed the stepwise logistic regression models both for the identification of patients with PAD (area under the curve, 0.87 vs 0.76, respectively; P = .03) and for the prediction of future mortality (area under the curve, 0.76 vs 0.65, respectively; P = .10). Both machine-learned models were markedly better calibrated than the stepwise logistic regression models, thus providing more accurate disease and mortality risk estimates. Machine learning approaches can produce more accurate disease classification and prediction models. These tools may prove clinically useful for the automated identification of patients with highly morbid diseases for which aggressive risk factor management can improve outcomes. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Identification of Synchronous Machine Stability Parameters: An On-Line Time-Domain Approach.
NASA Astrophysics Data System (ADS)
Le, Loc Xuan
1987-09-01
A time-domain modeling approach is described which enables the stability-study parameters of the synchronous machine to be determined directly from input-output data measured at the terminals of the machine operating under normal conditions. The transient responses due to system perturbations are used to identify the parameters of the equivalent circuit models. The described models are verified by comparing their responses with the machine responses generated from the transient stability models of a small three-generator multi-bus power system and of a single -machine infinite-bus power network. The least-squares method is used for the solution of the model parameters. As a precaution against ill-conditioned problems, the singular value decomposition (SVD) is employed for its inherent numerical stability. In order to identify the equivalent-circuit parameters uniquely, the solution of a linear optimization problem with non-linear constraints is required. Here, the SVD appears to offer a simple solution to this otherwise difficult problem. Furthermore, the SVD yields solutions with small bias and, therefore, physically meaningful parameters even in the presence of noise in the data. The question concerning the need for a more advanced model of the synchronous machine which describes subtransient and even sub-subtransient behavior is dealt with sensibly by the concept of condition number. The concept provides a quantitative measure for determining whether such an advanced model is indeed necessary. Finally, the recursive SVD algorithm is described for real-time parameter identification and tracking of slowly time-variant parameters. The algorithm is applied to identify the dynamic equivalent power system model.
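The numerical core, least squares solved through the SVD with the condition number as the identifiability diagnostic, can be sketched as follows on synthetic input-output data.

```python
# Sketch: SVD-based least-squares parameter estimation; the condition number
# s_max/s_min indicates whether a higher-order model is identifiable.
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(200, 5))           # regressor matrix from measured I/O data
x_true = np.array([1.0, -0.5, 0.2, 0.05, 0.01])
b = A @ x_true + rng.normal(0, 0.01, 200)   # noisy measurements

U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_hat = Vt.T @ ((U.T @ b) / s)          # pseudo-inverse solution
print("condition number:", s[0] / s[-1])
print("estimated parameters:", np.round(x_hat, 3))
```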
Tomography and generative training with quantum Boltzmann machines
NASA Astrophysics Data System (ADS)
Kieferová, Mária; Wiebe, Nathan
2017-12-01
The promise of quantum neural nets, which utilize quantum effects to model complex data sets, has made their development an aspirational goal for quantum machine learning and quantum computing in general. Here we provide methods of training quantum Boltzmann machines. Our work generalizes existing methods and provides additional approaches for training quantum neural networks that compare favorably to existing methods. We further demonstrate that quantum Boltzmann machines enable a form of partial quantum state tomography that further provides a generative model for the input quantum state. Classical Boltzmann machines are incapable of this. This verifies the long-conjectured connection between tomography and quantum machine learning. Finally, we prove that classical computers cannot simulate our training process in general unless BQP = BPP, provide lower bounds on the complexity of the training procedures and numerically investigate training for small nonstoquastic Hamiltonians.
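As a toy illustration of the object such a model represents (not of the training procedure itself), the sketch below builds the Gibbs state rho = exp(-H)/Tr[exp(-H)] of an arbitrary two-qubit Hamiltonian; its diagonal is the generative distribution over computational basis states.

```python
# Toy sketch: the thermal (Gibbs) state of a small Hamiltonian, the kind of
# distribution a quantum Boltzmann machine models. H is arbitrary, not trained.
import numpy as np
from scipy.linalg import expm

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)

H = -0.5 * np.kron(Z, Z) - 0.3 * (np.kron(X, I) + np.kron(I, X))
rho = expm(-H)
rho /= np.trace(rho)                        # normalize: Tr[rho] = 1

print(np.round(np.real(np.diag(rho)), 4))   # P(|00>), P(|01>), P(|10>), P(|11>)
```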
Fuzzy logic controller optimization
Sepe, Jr., Raymond B; Miller, John Michael
2004-03-23
A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
Measurement of W + bb and a search for MSSM Higgs bosons with the CMS detector at the LHC
NASA Astrophysics Data System (ADS)
O'Connor, Alexander Pinpin
Tooling used to cure composite laminates in the aerospace and automotive industries must provide a dimensionally stable geometry throughout the thermal cycle applied during the part curing process. This requires that the Coefficient of Thermal Expansion (CTE) of the tooling materials match that of the composite being cured. The traditional tooling material for production applications is a nickel alloy. Poor machinability and high material costs increase the expense of metallic tooling made from nickel alloys such as 'Invar 36' or 'Invar 42'. Currently, metallic tooling is unable to meet the needs of applications requiring rapid, affordable tooling solutions. In applications where the tooling is not required to have the durability provided by metals, such as for small area repair, an opportunity exists for non-metallic tooling materials like graphite, carbon foams, composites, or ceramics and machinable glasses. Nevertheless, efficient machining of brittle, non-metallic materials is challenging due to low ductility, porosity, and high hardness. The machining of a layup tool comprises a large portion of its final cost. Achieving maximum process economy requires optimization of the machining process in the given tooling material. Therefore, machinability of the tooling material is a critical aspect of the overall cost of the tool. In this work, three commercially available, brittle/porous, non-metallic candidate tooling materials were selected, namely: Autoclaved Aerated Concrete (AAC), CB1100 ceramic block and Cfoam carbon foam. Machining tests were conducted in order to evaluate the machinability of these materials using end milling. Chip formation, cutting forces, cutting tool wear, machining induced damage, surface quality and surface integrity were investigated using High Speed Steel (HSS), carbide, diamond abrasive and Polycrystalline Diamond (PCD) cutting tools. Cutting forces were found to be random in magnitude, which was a result of material porosity. The abrasive nature of Cfoam produced rapid tool wear when using HSS and PCD type cutting tools. However, tool wear was not significant in AAC or CB1100 regardless of the type of cutting edge. Machining induced damage was observed in the form of macro-scale chipping and fracture in combination with micro-scale cracking. Transverse rupture test results revealed significant reductions in residual strength and damage tolerance in CB1100. In contrast, AAC and Cfoam showed no correlation between machining induced damage and a reduction in surface integrity. Cutting forces in machining were modeled for all materials. Cutting force regression models were developed based on Design of Experiments and Analysis of Variance. A mechanistic cutting force model was proposed based upon conventional end milling force models and statistical distributions of material porosity. In order to validate the model, predicted cutting forces were compared to experimental results, and good agreement was found. Furthermore, over the range of cutting conditions tested, the proposed model was shown to have predictive accuracy comparable to empirically produced regression models, greatly reducing the number of cutting tests required to simulate cutting forces. This work thus demonstrates a key adaptation of metallic cutting force models to brittle porous materials, a vital step in research into the machining of these materials using end milling.
Machine learning of network metrics in ATLAS Distributed Data Management
NASA Astrophysics Data System (ADS)
Lassnig, Mario; Toler, Wesley; Vamosi, Ralf; Bogado, Joaquin; ATLAS Collaboration
2017-10-01
The increasing volume of physics data poses a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from one of our ongoing automation efforts that focuses on network metrics. First, we describe our machine learning framework built atop the ATLAS Analytics Platform. This framework can automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.
Machine learning models in breast cancer survival prediction.
Montazeri, Mitra; Montazeri, Mohadeseh; Montazeri, Mahdieh; Beigzadeh, Amin
2016-01-01
Breast cancer is one of the most common cancers, with a high mortality rate among women. With early diagnosis, breast cancer survival increases from 56% to more than 86%. Therefore, an accurate and reliable system is necessary for the early diagnosis of this cancer. The proposed model is a combination of rules and different machine learning techniques. Machine learning models can help physicians reduce the number of false decisions. They try to exploit patterns and relationships among a large number of cases and predict the outcome of a disease using historical cases stored in datasets. The objective of this study is to propose a rule-based classification method with machine learning techniques for the prediction of different types of breast cancer survival. We use a dataset with eight attributes that includes the records of 900 patients, of whom 876 (97.3%) were female and 24 (2.7%) were male. Naive Bayes (NB), Trees Random Forest (TRF), 1-Nearest Neighbor (1NN), AdaBoost (AD), Support Vector Machine (SVM), RBF Network (RBFN), and Multilayer Perceptron (MLP) machine learning techniques with 10-fold cross-validation were used with the proposed model for the prediction of breast cancer survival. The performance of the machine learning techniques was evaluated with accuracy, precision, sensitivity, specificity, and area under the ROC curve. Out of 900 patients, 803 were alive and 97 were dead. In this study, the Trees Random Forest (TRF) technique showed better results in comparison to the other techniques (NB, 1NN, AD, SVM, RBFN, MLP). The accuracy, sensitivity and area under the ROC curve of TRF are 96%, 96%, and 93%, respectively. In contrast, the 1NN technique performed poorly (accuracy 91%, sensitivity 91% and area under the ROC curve 78%). This study demonstrates that the Trees Random Forest (TRF) model, a rule-based classification model, was the best model with the highest level of accuracy. Therefore, this model is recommended as a useful tool for breast cancer survival prediction as well as medical decision making.
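A minimal sketch of this kind of 10-fold cross-validated model comparison, using scikit-learn; the 900-patient dataset is not public, so a synthetic stand-in with a similar class imbalance is assumed, and the model settings are illustrative defaults rather than the study's configurations.

```python
# Compare several classifiers by 10-fold cross-validated AUC on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 900 cases, 8 attributes, ~89%/11% class balance.
X, y = make_classification(n_samples=900, n_features=8,
                           weights=[0.89, 0.11], random_state=0)

models = {
    "NB": GaussianNB(),
    "TRF": RandomForestClassifier(n_estimators=100, random_state=0),
    "1NN": KNeighborsClassifier(n_neighbors=1),
    "AD": AdaBoostClassifier(random_state=0),
    "SVM": SVC(random_state=0),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f}")
```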
Forsythe, J Chris [Sandia Park, NM; Xavier, Patrick G [Albuquerque, NM; Abbott, Robert G [Albuquerque, NM; Brannon, Nathan G [Albuquerque, NM; Bernard, Michael L [Tijeras, NM; Speed, Ann E [Albuquerque, NM
2009-04-28
Digital technology utilizing a cognitive model based on human naturalistic decision-making processes, including pattern recognition and episodic memory, can reduce the dependency of human-machine interactions on the abilities of a human user and can enable a machine to more closely emulate human-like responses. Such a cognitive model can enable digital technology to use cognitive capacities fundamental to human-like communication and cooperation to interact with humans.
AC Loss Analysis of MgB2-Based Fully Superconducting Machines
NASA Astrophysics Data System (ADS)
Feddersen, M.; Haran, K. S.; Berg, F.
2017-12-01
Superconducting electric machines have shown potential for a significant increase in power density, making them attractive for size- and weight-sensitive applications such as offshore wind generation, marine propulsion, and hybrid-electric aircraft propulsion. Superconductors exhibit no loss under dc conditions, though ac current and field produce considerable losses due to hysteresis, eddy currents, and coupling mechanisms. For this reason, many present machines are designed to be partially superconducting, meaning that the dc field components are superconducting while the ac armature coils are conventional conductors. Fully superconducting designs can provide increases in power density with significantly higher armature current; however, a good estimate of ac losses is required to determine feasibility under the machine's intended operating conditions. This paper aims to characterize the expected losses in a fully superconducting machine targeted towards aircraft, based on an actively-shielded, partially superconducting machine from prior work. Various factors such as magnet strength, operating frequency, and machine load are examined to produce a model for the loss in the superconducting components of the machine. This model is then used to optimize the design of the machine for minimal ac loss while maximizing power density. Important observations from the study are discussed.
Machine learning approaches to the social determinants of health in the health and retirement study.
Seligman, Benjamin; Tuljapurkar, Shripad; Rehkopf, David
2018-04-01
Social and economic factors are important predictors of health and of recognized importance for health systems. However, machine learning, used elsewhere in the biomedical literature, has not been extensively applied to study relationships between society and health. We investigate how machine learning may add to our understanding of social determinants of health using data from the Health and Retirement Study. A linear regression of age and gender, and a parsimonious theory-based regression additionally incorporating income, wealth, and education, were used to predict systolic blood pressure, body mass index, waist circumference, and telomere length. Prediction, fit, and interpretability were compared across four machine learning methods: linear regression, penalized regressions, random forests, and neural networks. All models had poor out-of-sample prediction. Most machine learning models performed similarly to the simpler models. However, neural networks greatly outperformed the three other methods and had good fit to the data (R² between 0.4 and 0.6, versus <0.3 for all others). Across machine learning models, nine variables were frequently selected or highly weighted as predictors: dental visits, current smoking, self-rated health, serial-seven subtractions, probability of receiving an inheritance, probability of leaving an inheritance of at least $10,000, number of children ever born, African-American race, and gender. Some of the machine learning methods do not improve prediction or fit beyond simpler models; neural networks, however, performed well. The predictors identified across models suggest underlying social factors that are important predictors of biological indicators of chronic disease, and the non-linear and interactive relationships between variables fundamental to the neural network approach may be important to consider.
A comparison of the stochastic and machine learning approaches in hydrologic time series forecasting
NASA Astrophysics Data System (ADS)
Kim, T.; Joo, K.; Seo, J.; Heo, J. H.
2016-12-01
Hydrologic time series forecasting is an essential task in water resources management, and it becomes more difficult due to the complexity of the runoff process. Traditional stochastic models such as the ARIMA family have been used as a standard approach in time series modeling and forecasting of hydrological variables. Due to the nonlinearity in hydrologic time series data, machine learning approaches have been studied for their advantage in discovering relevant features of the nonlinear relations among variables. This study aims to compare the predictability of the traditional stochastic model and the machine learning approach. A seasonal ARIMA model was used as the traditional time series model, and a Random Forest model, an ensemble of decision trees using a multiple-predictor approach, was applied as the machine learning approach. In the application, monthly inflow data from 1986 to 2015 of Chungju dam in South Korea were used for modeling and forecasting. To evaluate the performance of the models, one-step-ahead and multi-step-ahead forecasting were applied, and the root mean squared error and mean absolute error of the two models were compared.
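A hedged sketch of such a comparison, pitting statsmodels' seasonal ARIMA against scikit-learn's random forest on lagged values; the synthetic monthly series, SARIMA orders, and lag count are assumptions standing in for the Chungju dam inflow data.

```python
# Seasonal ARIMA vs. random forest forecasting on a synthetic monthly series.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
t = np.arange(360)                                   # 30 years of monthly data
y = 10 + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)
train, test = y[:-12], y[-12:]                       # hold out the final year

# Seasonal ARIMA forecast over the hold-out year (orders are illustrative).
sarima = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
sarima_fc = sarima.forecast(steps=12)

# Random forest on 12 lagged values, applied recursively for multi-step forecasts.
lags = 12
Xtr = np.array([y[i - lags:i] for i in range(lags, train.size)])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, train[lags:])
hist, rf_fc = list(train[-lags:]), []
for _ in range(12):
    pred = rf.predict(np.array(hist[-lags:]).reshape(1, -1))[0]
    rf_fc.append(pred)
    hist.append(pred)

for name, fc in [("SARIMA", sarima_fc), ("RF", np.array(rf_fc))]:
    rmse = np.sqrt(np.mean((test - fc) ** 2))
    mae = np.mean(np.abs(test - fc))
    print(f"{name}: RMSE = {rmse:.3f}, MAE = {mae:.3f}")
```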
Analysis of a Distributed Pulse Power System Using a Circuit Analysis Code
1979-06-01
dose rate was then integrated to give a number that could be compared with measurements made using thermal luminescent dosimeters (TLD's). ... A sophisticated computer code (SCEPTRE), used to analyze electronic circuits, was used to evaluate the performance of a large flash X-ray machine. This device was
Composing Data and Process Descriptions in the Design of Software Systems.
1988-05-01
accompanying ’data’ specification. So, for example, the bank account of Section 2.2.3 became

ACC = open?d → ACC_init(d)
ACC_A = payin?p → ACC_deposit(A,p)
      | wdraw?w → ACC_withdraw(A,w)
      | bal!balance(A) → ACC_A
      | close → STOP

where A has abstract type Account, with operators (that is, side-effect-free functions) ...
2013-12-01
study of nature, just as they have in mathematics. Hence, even in our day of hyper-abstract thinking, mathematics continues to be the language of ... way of thinking. 2. Those successfully completing education and apprenticeship have professed a self-sacrificing commitment to serving society ... overreaches. Pinker points out that the contextual school ignores the predictive reality of science and mathematics.[73] This does not mean that metaphors
1978-09-12
the population. Only a socialist, planned economy can cope with such problems. However, the increasing complexity of the tasks faced by ... the development of systems allowing man-machine dialogue does not decrease, but rather increases, the complexity of the systems involved, simply ... shifting the complexity to another sphere, where it is invisible to the human utilizing the system. Figures 5; references 3: 2 Russian, 1 Western.
Proceedings of the international meeting on thermal nuclear reactor safety. Vol. 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Separate abstracts are included for each of the papers presented concerning current issues in nuclear power plant safety; national programs in nuclear power plant safety; radiological source terms; probabilistic risk assessment methods and techniques; non-LOCA and small-break LOCA transients; safety goals; pressurized thermal shocks; applications of reliability and risk methods to probabilistic risk assessment; human factors and the man-machine interface; and data bases and special applications.
Multi-Entity Bayesian Networks Learning in Predictive Situation Awareness
2013-06-01
... algorithm for MEBN. The methods are evaluated on a case study from PROGNOS. Over the past two decades, machine learning has ... the MFrag of the child node. Lastly, in the third For-Loop, for all resident nodes in the MTheory, LPDs are generated by MLE.
An adaptive process-based cloud infrastructure for space situational awareness applications
NASA Astrophysics Data System (ADS)
Liu, Bingwei; Chen, Yu; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik; Rubin, Bruce
2014-06-01
Space situational awareness (SSA) and defense space control capabilities are top priorities for groups that own or operate man-made spacecraft. Also, with the growing amount of space debris, there is an increase in demand for contextual understanding that necessitates the capability of collecting and processing a vast amount of sensor data. Cloud computing, which features scalable and flexible storage and computing services, has been recognized as an ideal candidate that can meet the large-data contextual challenges of SSA. Cloud computing consists of physical service providers and middleware virtual machines together with infrastructure, platform, and software as a service (IaaS, PaaS, SaaS) models. However, the typical Virtual Machine (VM) abstraction is on a per-operating-system basis, which is too low-level and limits the flexibility of a mission application architecture. In response to this technical challenge, a novel adaptive process-based cloud infrastructure for SSA applications is proposed in this paper. In addition, the design rationale is detailed and a prototype is examined. The SSA Cloud (SSAC) conceptual capability will potentially support space situation monitoring and tracking, object identification, and threat assessment. Lastly, the benefits of more granular and flexible cloud computing resource allocation are illustrated for data processing and implementation considerations within a representative SSA system environment. We show that container-based virtualization performs better than hypervisor-based virtualization technology in an SSA scenario.
NASA Astrophysics Data System (ADS)
Jain, Madhu; Meena, Rakesh Kumar
2018-03-01
A Markov model of a multi-component machining system comprising two unreliable heterogeneous servers and mixed standby support has been studied. The repair of broken-down machines is done on the basis of a bi-level threshold policy for the activation of the servers. A server returns to render repair when the pre-specified workload of failed machines has built up. The first (second) repairman turns on only when the workload of N1 (N2) failed machines is accumulated in the system. Both servers may go on vacation when all the machines are in good condition and there are no pending repair jobs. The Runge-Kutta method is implemented to solve the set of governing equations used to formulate the Markov model. Various system metrics, including the mean queue length, machine availability, throughput, etc., are derived to determine the performance of the machining system. To provide computational tractability of the present investigation, a numerical illustration is provided. A cost function is also constructed to determine the optimal repair rate of the server by minimizing the expected cost incurred on the system. A hybrid soft-computing method is used to develop an adaptive neuro-fuzzy inference system (ANFIS). The validation of the numerical results obtained by the Runge-Kutta approach is also facilitated by computational results generated by ANFIS.
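A minimal sketch of the solution strategy described above: the Kolmogorov forward equations of a small threshold-activated machine-repair chain are integrated with a Runge-Kutta scheme (SciPy's RK45), and the mean queue length and machine availability are read off the near-stationary distribution. The machine count, rates and thresholds are invented for illustration, not the paper's values.

```python
# Solve the governing equations of a bi-level-threshold machine-repair model
# with a Runge-Kutta integrator. All rates and thresholds are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

M, lam = 10, 0.2          # machines and per-machine failure rate
mu1, mu2 = 1.0, 0.8       # repair rates of the two heterogeneous servers
N1, N2 = 2, 5             # bi-level activation thresholds

def rates(n):
    birth = (M - n) * lam                                   # a machine fails
    death = (mu1 if n >= N1 else 0.0) + (mu2 if n >= N2 else 0.0)
    return birth, death

def odes(t, p):
    dp = np.zeros_like(p)
    for n in range(M + 1):
        b, d = rates(n)
        dp[n] -= (b + d) * p[n]
        if n > 0:
            dp[n] += rates(n - 1)[0] * p[n - 1]             # inflow by a failure
        if n < M:
            dp[n] += rates(n + 1)[1] * p[n + 1]             # inflow by a repair
    return dp

p0 = np.zeros(M + 1); p0[0] = 1.0                           # all machines up
sol = solve_ivp(odes, (0, 200), p0, method="RK45", rtol=1e-8)
p = sol.y[:, -1]                                            # near-stationary distribution
n = np.arange(M + 1)
print("mean queue length:", p @ n)
print("machine availability:", p @ (M - n) / M)
```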
NASA Astrophysics Data System (ADS)
Utegulov, B. B.; Utegulov, A. B.; Meiramova, S.
2018-02-01
The paper proposes the development of a self-learning machine for creating models of microprocessor-based single-phase earth (ground) fault protection devices in networks with an isolated neutral at voltages above 1000 V. Such a self-learning machine makes it possible to effectively implement mathematical models of automatic adjustment of the settings of single-phase earth fault protection devices.
Sun, Baozhou; Lam, Dao; Yang, Deshan; Grantham, Kevin; Zhang, Tiezhi; Mutic, Sasa; Zhao, Tianyu
2018-05-01
Clinical treatment planning systems for proton therapy currently do not calculate monitor units (MUs) in passive scatter proton therapy due to the complexity of the beam delivery systems. Physical phantom measurements are commonly employed to determine the field-specific output factors (OFs) but are often subject to limited machine time, measurement uncertainties and intensive labor. In this study, a machine learning-based approach was developed to predict output (cGy/MU) and derive MUs, incorporating the dependencies on gantry angle and field size for a single-room proton therapy system. The goal of this study was to develop a secondary check tool for OF measurements and eventually eliminate patient-specific OF measurements. The OFs of 1754 fields previously measured in a water phantom with calibrated ionization chambers and electrometers for patient-specific fields with various range and modulation width combinations for 23 options were included in this study. The training data sets for machine learning models in three different methods (Random Forest, XGBoost and Cubist) included 1431 (~81%) OFs. Ten-fold cross-validation was used to prevent "overfitting" and to validate each model. The remaining 323 (~19%) OFs were used to test the trained models. The difference between the measured and predicted values from the machine learning models was analyzed, and model prediction accuracy was compared with that of the semi-empirical model developed by Kooy (Phys. Med. Biol. 50, 2005). Additionally, the gantry angle dependence of OFs was measured for three groups of options categorized on the selection of the second scatterers, and the field size dependence of OFs was investigated for measurements with and without patient-specific apertures. All three machine learning methods showed higher accuracy than the semi-empirical model, which showed discrepancies of up to 7.7% for treatment fields with full range and full modulation width. The Cubist-based solution outperformed all other models (P < 0.001), with a mean absolute discrepancy of 0.62% and a maximum discrepancy of 3.17% between the measured and predicted OFs. The OFs showed a small dependence on gantry angle for small and deep options, while they were constant for large options. The OF decreased by 3%-4% as the field radius was reduced to 2.5 cm. Machine learning methods can be used to predict OFs for double-scatter proton machines with greater prediction accuracy than the most popular semi-empirical prediction model. By incorporating the gantry angle and field size dependences, machine learning-based methods can be used as a sanity check of OF measurements and have the potential to eliminate the time-consuming patient-specific OF measurements. © 2018 American Association of Physicists in Medicine.
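A hedged sketch of the regression task described above, with scikit-learn's random forest standing in for the Cubist model (which has no standard Python implementation); the feature set and synthetic output factors are assumptions, since the measurement database is not public.

```python
# Predict output factors from field parameters with 10-fold cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict, KFold

rng = np.random.default_rng(0)
n = 1754
X = np.column_stack([
    rng.uniform(4, 25, n),      # range (cm), illustrative
    rng.uniform(2, 16, n),      # modulation width (cm), illustrative
    rng.uniform(2.5, 12, n),    # field radius (cm), illustrative
    rng.uniform(0, 360, n),     # gantry angle (deg), illustrative
])
of = 1.0 + 0.01 * X[:, 0] - 0.005 * X[:, 1] + rng.normal(0, 0.01, n)  # fake cGy/MU

model = RandomForestRegressor(n_estimators=300, random_state=0)
pred = cross_val_predict(model, X, of, cv=KFold(10, shuffle=True, random_state=0))
disc = 100 * np.abs(pred - of) / of
print(f"mean |discrepancy| = {disc.mean():.2f}%, max = {disc.max():.2f}%")
```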
Minimal universal quantum heat machine.
Gelbwaser-Klimovsky, D; Alicki, R; Kurizki, G
2013-01-01
In traditional thermodynamics the Carnot cycle yields the ideal performance bound of heat engines and refrigerators. We propose and analyze a minimal model of a heat machine that can play a similar role in quantum regimes. The minimal model consists of a single two-level system with periodically modulated energy splitting that is permanently, weakly, coupled to two spectrally separated heat baths at different temperatures. The equation of motion allows us to compute the stationary power and heat currents in the machine consistent with the second law of thermodynamics. This dual-purpose machine can act as either an engine or a refrigerator (heat pump) depending on the modulation rate. In both modes of operation, the maximal Carnot efficiency is reached at zero power. We study the conditions for finite-time optimal performance for several variants of the model. Possible realizations of the model are discussed.
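For reference, the bounds the abstract appeals to, in standard notation (engine and refrigerator modes respectively), with the Carnot values attained only in the zero-power limit:

```latex
\eta \;=\; \frac{W}{Q_h} \;\le\; \eta_C \;=\; 1 - \frac{T_c}{T_h},
\qquad
\mathrm{COP} \;=\; \frac{Q_c}{W} \;\le\; \frac{T_c}{T_h - T_c}.
```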
NASA Astrophysics Data System (ADS)
Samadhi, TMAA; Sumihartati, Atin
2016-02-01
The most critical stage in the garment industry is the sewing process, because it generally consists of a number of operations and a large number of sewing machines for each operation. It therefore requires a balancing method that can assign tasks to workstations with balanced workloads. Many studies on assembly line balancing assume a new assembly line, but in reality, re-balancing is needed due to demand fluctuations and increases. To cope with those fluctuating demand changes, additional capacity can be provided by investing in spare sewing machines and paying for sewing services through outsourcing. This study develops an assembly line balancing (ALB) model on an existing line to cope with fluctuating demand changes. Capacity redesign is decided if the fluctuating demand exceeds the available capacity, through a combination of investment in new machines and outsourcing, while minimizing the cost of future idle capacity. The objective of the model is to minimize the total cost of the assembly line, which consists of operating costs, machine costs, capacity-addition costs, losses due to idle capacity, and outsourcing costs. The model developed is an integer programming model. The model is tested on a set of data for one year of demand with an existing fleet of 41 sewing machines. The result shows that a maximum additional capacity of up to 76 machines is required when average demand increases by 60%, at equal cost parameters.
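One plausible reading of the cost structure named above, written as an integer program; the notation (production q_t, machines m_t, added capacity Δm_t, idle capacity u_t, outsourced units o_t, demand d_t per period t) is assumed here and is not taken from the paper:

```latex
\min \sum_{t=1}^{T} \left( c^{\mathrm{op}} q_t + c^{\mathrm{mc}} m_t
      + c^{\mathrm{add}} \Delta m_t + c^{\mathrm{idle}} u_t + c^{\mathrm{out}} o_t \right)
\quad \text{s.t.} \quad q_t + o_t \ge d_t, \qquad
m_t,\, o_t \in \mathbb{Z}_{\ge 0}.
```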
NASA Astrophysics Data System (ADS)
Szymański, Zygmunt
2015-03-01
The paper presents an analysis of the suitability of compact and hybrid drive systems for hoisting machines. It reviews constructional solutions of hoisting machine drive systems driven by AC and DC motors. A concept is presented for a modern, energy-saving hoisting machine supply system composed of a compact motor supplied by a transistor or thyristor converter, and an intelligent control system built around a multilevel microprocessor controller. The paper also analyzes the suitability of selected artificial intelligence methods for the hoisting machine control system, automation system, and modern diagnostic system, limited to fuzzy logic methods, genetic algorithm methods, and modern second- and third-generation neural networks. These methods enable the realization of complex control algorithms for the hoisting machine that ensure energy-saving operating conditions, monitoring of operating parameters, and predictive diagnostics of the hoisting machine's technical state, minimizing the number of failure states. A concept of a control and diagnostic system for the hoisting machine based on fuzzy-logic neural control is presented, together with selected control algorithms and results of computer simulations performed on particular mathematical models of the hoisting machine. The theoretical results were partly verified in laboratory and industrial experiments.
State machine analysis of sensor data from dynamic processes
Cook, William R.; Brabson, John M.; Deland, Sharon M.
2003-12-23
A state machine model analyzes sensor data from dynamic processes at a facility to identify the actual processes that were performed at the facility during a period of interest for the purpose of remote facility inspection. An inspector can further input the expected operations into the state machine model and compare the expected, or declared, processes to the actual processes to identify undeclared processes at the facility. The state machine analysis enables the generation of knowledge about the state of the facility at all levels, from location of physical objects to complex operational concepts. Therefore, the state machine method and apparatus may benefit any agency or business with sensored facilities that stores or manipulates expensive, dangerous, or controlled materials or information.
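A minimal sketch of the idea in Python: sensor events drive transitions of a finite-state model, and the inferred process sequence is checked against a declared one. The states, events and transition table are invented for illustration, not taken from the patent.

```python
# Replay sensor events through a state machine and flag undeclared processes.
DECLARED = ["idle", "loading", "processing"]        # operations the operator declared

TRANSITIONS = {
    ("idle", "door_open"): "loading",
    ("loading", "motion_stop"): "processing",
    ("processing", "temp_drop"): "storage",         # an undeclared process
    ("storage", "door_open"): "loading",
}

def infer_processes(events, state="idle"):
    """Return the sequence of states actually visited, given sensor events."""
    visited = [state]
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)  # ignore irrelevant events
        if state != visited[-1]:
            visited.append(state)
    return visited

events = ["door_open", "motion_stop", "vibration", "temp_drop"]
actual = infer_processes(events)
undeclared = [s for s in actual if s not in DECLARED]
print("actual:", actual)            # ['idle', 'loading', 'processing', 'storage']
print("undeclared:", undeclared)    # ['storage']
```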
77 FR 61307 - New Postal Product
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-09
...: Transfer Mail Processing Cost Model for Machinable and Irregular Standard Mail Parcels to the Mail Processing Cost Model for Parcel Select/Parcel Return Service. The Postal Service proposes to move the machinable and irregular cost worksheets contained in the Standard Mail parcel mail processing cost model to...
Trends and developments in industrial machine vision: 2013
NASA Astrophysics Data System (ADS)
Niel, Kurt; Heinzl, Christoph
2014-03-01
When following current advancements and implementations in the field of machine vision, there seem to be no borders for future developments: computing power constantly increases, new ideas are spreading, and previously challenging approaches are introduced into the mass market. Within the past decades these advances have had dramatic impacts on our lives. Consumer electronics, e.g. computers or telephones, which once occupied large volumes, now fit in the palm of a hand. To note just a few examples: face recognition was adopted by the consumer market, 3D capturing became cheap, and, thanks to a huge community, software development got easier using sophisticated development platforms. However, there remains a gap between consumer and industrial applications: the first have to be entertaining, the second have to be reliable. Recent studies (e.g. VDMA [1], Germany) show a moderately increasing market for machine vision in industry. When industry is asked about its needs, the main challenges for industrial machine vision are simple usage, reliability for the process, quick support, full automation, self/easy adjustment to changing process parameters, "forget it in the line". A further big challenge is supporting quality control: nowadays the operator has to accurately define the tested features for checking the probes. There is also an upcoming development to let automated machine vision applications find the essential parameters at a more abstract level (top down). In this work we focus on three current and future topics for industrial machine vision: metrology supporting automation, quality control (inline/atline/offline), and visualization and analysis of datasets with steadily growing sizes. Finally, the general trend from pixel-oriented towards object-oriented evaluation is addressed. We do not directly address the field of robotics taking advantage of machine vision advances; this is a fast-changing area which is worth its own contribution.
Efficient Embedded Decoding of Neural Network Language Models in a Machine Translation System.
Zamora-Martinez, Francisco; Castro-Bleda, Maria Jose
2018-02-22
Neural Network Language Models (NNLMs) are a successful approach to Natural Language Processing tasks, such as Machine Translation. We introduce in this work a Statistical Machine Translation (SMT) system which fully integrates NNLMs in the decoding stage, breaking the traditional approach based on n-best list rescoring. The neural net models (both language models (LMs) and translation models) are fully coupled in the decoding stage, allowing them to more strongly influence the translation quality. Computational issues were solved by a novel idea based on memorization and smoothing of the softmax constants to avoid their computation, which introduces a trade-off between LM quality and computational cost. These ideas were studied in a machine translation task with different combinations of neural networks used both as translation models and as target LMs, comparing phrase-based and n-gram-based systems, showing that the integrated approach seems more promising for n-gram-based systems, even with non-full-quality NNLMs.
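A hedged sketch of the memorization idea described above: the softmax normalization constant is computed once per history and cached, so later queries against the same context skip the full-vocabulary sum. The tiny output layer and vocabulary are placeholders, and the paper additionally smooths the cached constants.

```python
# Cache the softmax log-normalizer per history to cut NNLM query cost.
import numpy as np

rng = np.random.default_rng(0)
V, H = 5000, 64                                   # vocabulary and hidden sizes
W, b = rng.normal(0, 0.1, (V, H)), np.zeros(V)    # NNLM output layer (placeholder)

cache = {}                                        # history -> memorized log Z

def log_prob(word, history, hidden):
    """Log-probability of `word` given a context; log Z is cached per history."""
    logits = W @ hidden + b
    if history not in cache:
        m = logits.max()                          # stabilize the exponentials
        cache[history] = m + np.log(np.exp(logits - m).sum())  # computed once
    return logits[word] - cache[history]

h = rng.normal(0, 1, H)                           # hidden state for this context
print(log_prob(42, ("the", "cat"), h))
print(log_prob(43, ("the", "cat"), h))            # cache hit: no full softmax sum
```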
Improving data retrieval quality: Evidence based medicine perspective.
Kamalov, M; Dobrynin, V; Balykina, J; Kolbin, A; Verbitskaya, E; Kasimova, M
2015-01-01
The actively developing approach in modern medicine is one based on the principles of evidence-based medicine, which requires assessment of the quality and reliability of studies. However, in some cases studies corresponding to the first level of evidence may contain errors in randomized controlled trials (RCTs). A solution to this problem is the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system. Studies in both medicine and information retrieval are conducted to develop search engines for the MEDLINE database [1]; combined techniques for summarization and information retrieval, targeted at finding the best medication based on levels of evidence, are being developed [2]. Given the relevance of and demand for studies in both fields, development of a search engine for the MEDLINE database was started at Saint-Petersburg State University with the support of Pavlov First Saint-Petersburg State Medical University and the Tashkent Institute of Postgraduate Medical Education. The novelty and value of the proposed system lie in its method of ranking relevant abstracts. It is suggested that the system will be able to rank studies by their level of evidence and to apply GRADE criteria for system evaluation. The task falls within the domains of information retrieval and machine learning. Based on the results of previous work [3], whose main goal was to cluster abstracts from the MEDLINE database by subtype of medical intervention, a set of clustering algorithms was selected for this study: K-means, K-means++, and EM from the sklearn (http://scikit-learn.org) and WEKA (http://www.cs.waikato.ac.nz/~ml/weka/) libraries, together with Latent Semantic Analysis (LSA) [4] keeping the first 210 factors, and the "bag of words" model [5] to represent the clustered documents. For abstract classification, several algorithms were tested, including Complement Naive Bayes [6], Sequential Minimal Optimization (SMO) [7], and non-linear SVM from the WEKA library. The first step of this study was to mark up abstracts from MEDLINE as containing or not containing a medical intervention; for this purpose, the web crawler from our previous work [8] was modified. The next step was to evaluate the clustering algorithms on the marked-up abstracts. Clustering the abstracts into two groups with LSA (first 210 factors) gave the following results:
1) K-means: Purity = 0.5598, Normalized Entropy = 0.5994;
2) K-means++: Purity = 0.6743, Normalized Entropy = 0.4996;
3) EM: Purity = 0.5443, Normalized Entropy = 0.6344.
With the "bag of words" model:
1) K-means: Purity = 0.5134, Normalized Entropy = 0.6254;
2) K-means++: Purity = 0.5645, Normalized Entropy = 0.5299;
3) EM: Purity = 0.5247, Normalized Entropy = 0.6345.
Studies containing a medical intervention were then classified by subtype of medical intervention, with abstracts represented in the "bag of words" model with stop words removed. The classification results were:
1) Complement Naive Bayes: macro F-measure = 0.6934, micro F-measure = 0.7234;
2) Sequential Minimal Optimization: macro F-measure = 0.6543, micro F-measure = 0.7042;
3) Non-linear SVM: macro F-measure = 0.6835, micro F-measure = 0.7642.
Based on these computational experiments, the best clustering of abstracts by presence of a medical intervention was obtained with K-means++ together with LSA on the first 210 factors. The quality of classification by subtype of medical intervention was improved over existing results [8] using the non-linear SVM algorithm with the "bag of words" model and stop-word removal. The clustering results obtained in this study will help in grouping abstracts by level of evidence, using the classification by subtype of medical intervention, and will make it possible to extract information from abstracts on specific types of interventions.
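For reference, one common definition of the two clustering metrics reported above, sketched in Python; the per-cluster entropy is normalized by log₂ of the number of classes and weighted by cluster size, and the toy labels are placeholders.

```python
# Purity and normalized entropy of a clustering against binary labels.
import numpy as np

def purity(labels, clusters):
    total = sum(np.bincount(labels[clusters == c]).max()
                for c in np.unique(clusters))
    return total / labels.size

def normalized_entropy(labels, clusters):
    n, k = labels.size, np.unique(labels).size
    ent = 0.0
    for c in np.unique(clusters):
        member = labels[clusters == c]
        p = np.bincount(member, minlength=k) / member.size
        p = p[p > 0]
        ent += (member.size / n) * (-(p * np.log2(p)).sum() / np.log2(k))
    return ent

labels = np.array([0, 0, 1, 1, 0, 1])      # e.g. contains intervention or not
clusters = np.array([0, 0, 0, 1, 1, 1])    # cluster assignments
print(purity(labels, clusters), normalized_entropy(labels, clusters))
```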
Recent R&D status for 70 MW class superconducting generators in the Super-GM project
NASA Astrophysics Data System (ADS)
Ageta, Takasuke
2000-05-01
Three types of 70 MW class superconducting generators called model machines have been developed to establish basic technologies for a pilot machine. The series of on-site verification tests was completed in June 1999. The world's highest generator output (79 MW), the world's longest continuous operation (1500 hours) and other excellent results were obtained. The model machine was connected to a commercial power grid and fundamental data were collected for future utilization. It is expected that fundamental technologies on design and manufacture required for a 200 MW class pilot machine are established.
Progress in computational toxicology.
Ekins, Sean
2014-01-01
Computational methods have been widely applied to toxicology across pharmaceutical, consumer product and environmental fields over the past decade. Progress in computational toxicology is now reviewed. A literature review was performed on computational models for hepatotoxicity (e.g. for drug-induced liver injury (DILI)), cardiotoxicity, renal toxicity and genotoxicity. In addition various publications have been highlighted that use machine learning methods. Several computational toxicology model datasets from past publications were used to compare Bayesian and Support Vector Machine (SVM) learning methods. The increasing amounts of data for defined toxicology endpoints have enabled machine learning models that have been increasingly used for predictions. It is shown that across many different models Bayesian and SVM perform similarly based on cross validation data. Considerable progress has been made in computational toxicology in a decade in both model development and availability of larger scale or 'big data' models. The future efforts in toxicology data generation will likely provide us with hundreds of thousands of compounds that are readily accessible for machine learning models. These models will cover relevant chemistry space for pharmaceutical, consumer product and environmental applications. Copyright © 2013 Elsevier Inc. All rights reserved.
Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes
NASA Astrophysics Data System (ADS)
Umbrello, Domenico; Rizzuti, Stefania; Outeiro, José C.; Shivpuri, Rajiv
2007-04-01
In general, the flow stress models used in computer simulation of machining processes are a function of effective strain, effective strain rate and temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where a range of material hardness between 45 and 60 HRC is used. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for the AISI H13 tool steel, which can be applied to the range of material hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining process for AISI H13 with various hardness values and different cutting regime parameters. Predicted results are validated by comparing them with experimental results found in the literature, and are found to predict reasonably well the cutting forces as well as the change in chip morphology from continuous to segmented chip as the material hardness changes.
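As context, a common baseline form for such flow stress models is a Johnson-Cook-type law; the paper's hardness-based model can be read as making the coefficients functions of hardness, though the exact functional forms used there are not reproduced here:

```latex
\bar{\sigma}\left(\bar{\varepsilon}, \dot{\bar{\varepsilon}}, T, \mathrm{HRC}\right)
= \left[ A(\mathrm{HRC}) + B(\mathrm{HRC})\,\bar{\varepsilon}^{\,n} \right]
  \left[ 1 + C \ln\frac{\dot{\bar{\varepsilon}}}{\dot{\bar{\varepsilon}}_0} \right]
  \left[ 1 - \left( \frac{T - T_{0}}{T_{m} - T_{0}} \right)^{m} \right]
```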
Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines
NASA Astrophysics Data System (ADS)
Wood, Wm M.
2018-02-01
A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. Resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. "Goodness of fit" is compared with output from LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays ("MCNPX") model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.
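A hedged sketch of a shot-by-shot estimate in this spirit: a thick-target, Kramers-type bremsstrahlung kernel (intensity proportional to (E_max − E)/E, with the endpoint set by the instantaneous voltage) is driven by V(t) and I(t) and integrated over the pulse. The waveforms and the kernel are illustrative; the paper's model contains further physics.

```python
# Time-integrated x-ray spectrum from measured-waveform placeholders.
import numpy as np

t = np.linspace(0, 60e-9, 601)                      # 60 ns pulse
V = 2.0e6 * np.exp(-(((t - 30e-9) / 12e-9) ** 2))   # volts (placeholder)
I = 60e3 * np.exp(-(((t - 32e-9) / 14e-9) ** 2))    # amps (placeholder)

E = np.linspace(10e3, 2.2e6, 400)                   # photon energies (eV)
spectrum = np.zeros_like(E)
for Vi, Ii, dt in zip(V, I, np.gradient(t)):
    mask = E < Vi              # endpoint energy in eV equals the voltage in volts
    spectrum[mask] += Ii * dt * (Vi - E[mask]) / E[mask]  # Kramers-type kernel

spectrum /= np.trapz(spectrum, E)                   # normalize to unit area
print("mean photon energy (MeV):", np.trapz(E * spectrum, E) / 1e6)
```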
Adaptive machine and its thermodynamic costs
NASA Astrophysics Data System (ADS)
Allahverdyan, Armen E.; Wang, Q. A.
2013-03-01
We study the minimal thermodynamically consistent model for an adaptive machine that transfers particles from a higher chemical potential reservoir to a lower one. This model describes the essentials of inhomogeneous catalysis. It is supposed to function with the maximal current under uncertain chemical potentials: if they change, the machine tunes its own structure, fitting it to the maximal current under the new conditions. This adaptation is possible under two limitations: (i) the degree of freedom that controls the machine's structure has to have a stored energy (described via a negative temperature); the origin of this result is traced back to the Le Chatelier principle. (ii) The machine has to malfunction in a constant environment due to structural fluctuations, whose relative magnitude is controlled solely by the stored energy. We argue that several features of the adaptive machine are similar to those of living organisms (energy storage, aging).
Nanoscale swimmers: hydrodynamic interactions and propulsion of molecular machines
NASA Astrophysics Data System (ADS)
Sakaue, T.; Kapral, R.; Mikhailov, A. S.
2010-06-01
Molecular machines execute nearly regular cyclic conformational changes as a result of ligand binding and product release. This cyclic conformational dynamics is generally non-reciprocal, so that under time reversal a different sequence of machine conformations is visited. Since such changes occur in a solvent, coupling to solvent hydrodynamic modes will generally result in self-propulsion of the molecular machine. These effects are investigated for a class of coarse-grained models of protein machines consisting of a set of beads interacting through pair-wise additive potentials. Hydrodynamic effects are incorporated through a configuration-dependent mobility tensor, and expressions for the propulsion linear and angular velocities, as well as the stall force, are obtained. In the limit where conformational changes are small, so that linear response theory is applicable, it is shown that propulsion is exponentially small; thus propulsion is a nonlinear phenomenon. The results are illustrated by computations on a simple model molecular machine.
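For context, a standard configuration-dependent choice for the mobility tensor of a bead model (not necessarily the paper's exact choice) is the Oseen approximation for beads i, j at separation r_ij:

```latex
H_{ij} =
\begin{cases}
\dfrac{1}{6\pi\eta a}\, \mathbf{I}, & i = j,\\[2ex]
\dfrac{1}{8\pi\eta r_{ij}}\left( \mathbf{I} + \hat{\mathbf{r}}_{ij}\hat{\mathbf{r}}_{ij} \right), & i \neq j,
\end{cases}
```

with η the solvent viscosity and a the bead radius; bead velocities then follow from v_i = Σ_j H_ij F_j.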
A Hybrid Method for Opinion Finding Task (KUNLP at TREC 2008 Blog Track)
2008-11-01
retrieve relevant documents. For the Opinion Retrieval subtask, we propose a hybrid model of a lexicon-based approach and a machine learning approach for estimating and ranking the opinionated documents. For the Polarized Opinion Retrieval subtask, we employ machine learning for predicting the polarity and a linear combination technique for ranking polar documents. The hybrid model utilizes both the lexicon-based approach and the machine learning approach
Temperature Measurement and Numerical Prediction in Machining Inconel 718.
Díaz-Álvarez, José; Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar
2017-06-30
Thermal issues are critical when machining Ni-based superalloy components designed for high temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials result in elevated temperatures around the cutting area. This elevated temperature could lead to machining-induced damage such as phase changes and residual stresses, resulting in reduced service life of the component. Measurement of temperature during machining is crucial in order to control the cutting process and avoid workpiece damage. On the other hand, the development of predictive tools based on numerical models helps in the definition of machining processes and in obtaining difficult-to-measure parameters such as the penetration of the heated layer. However, the validation of numerical models strongly depends on the accurate measurement of physical parameters such as temperature, ensuring the calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was a fiber-optic two-color pyrometer developed for localized temperature measurements in the turning of Inconel 718. The sensor is capable of measuring temperature in the range of 250 to 1200 °C. Temperature evolution was recorded in a lathe at different feed rates and cutting speeds, and the measurements were used to calibrate a simplified numerical model for the prediction of temperature fields during turning.
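A minimal sketch of the two-color (ratio) pyrometry principle behind such a sensor, using the Wien approximation of Planck's law and assuming emissivity cancels in the ratio; the wavelengths and ratio below are illustrative, and a real sensor needs calibration of the optics and emissivity ratio.

```python
# Recover temperature from the ratio of spectral radiances at two wavelengths.
import numpy as np

C2 = 1.4388e-2            # second radiation constant (m*K)

def ratio_temperature(ratio, lam1, lam2):
    """Temperature from L(lam1)/L(lam2), Wien approximation, greybody assumed."""
    return C2 * (1 / lam1 - 1 / lam2) / (5 * np.log(lam2 / lam1) - np.log(ratio))

lam1, lam2 = 1.3e-6, 1.55e-6   # two near-infrared detection bands (m), assumed
T = 900 + 273.15               # a cutting-zone temperature to recover (K)
true_ratio = (lam2 / lam1) ** 5 * np.exp(-C2 / T * (1 / lam1 - 1 / lam2))
print(ratio_temperature(true_ratio, lam1, lam2) - 273.15, "degC")  # ~900
```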
An incremental anomaly detection model for virtual machines.
Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu
2017-01-01
The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organizing and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large numbers of virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing character, which leaves the algorithm with low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on a cloud platform. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on a cloud platform.
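A minimal sketch of two of the ingredients named above, a SOM update and a weighted Euclidean distance, in NumPy; the heuristic initialization and neighborhood search of IISOM are omitted, and the grid size, weights and data are placeholders.

```python
# SOM training with a weighted Euclidean distance; anomaly score is the
# quantization error of a sample against its best-matching unit.
import numpy as np

rng = np.random.default_rng(0)
grid, dim = 8, 4                       # 8x8 map, 4 performance metrics
som = rng.normal(0, 1, (grid, grid, dim))
w = np.array([0.4, 0.3, 0.2, 0.1])     # per-feature weights (illustrative)

def bmu(x):
    d = np.sqrt((((som - x) ** 2) * w).sum(axis=2))   # weighted distance
    return np.unravel_index(d.argmin(), d.shape)

def train(X, epochs=20, lr0=0.5, sigma0=3.0):
    global som
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)                   # decaying learning rate
        sigma = sigma0 * (1 - e / epochs) + 0.5       # shrinking neighborhood
        for x in X:
            bi, bj = bmu(x)
            ii, jj = np.indices((grid, grid))
            h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
            som += lr * h[..., None] * (x - som)      # pull neighborhood toward x

X = rng.normal(0, 1, (200, dim))
train(X)
x_anom = np.array([5.0, 5.0, 5.0, 5.0])               # far from training data
d = np.sqrt((((som[bmu(x_anom)] - x_anom) ** 2) * w).sum())
print("anomaly score (quantization error):", round(float(d), 3))
```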
Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang
2016-01-01
Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve its accuracy, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by a gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is more robust.
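A hedged sketch of the SSA denoising step: the series is embedded in a Hankel trajectory matrix, decomposed by SVD, and reconstructed from the leading components by diagonal averaging. The window length and rank are assumptions, and the KELM stage is omitted.

```python
# Singular spectrum analysis as a noise filter for a traffic flow series.
import numpy as np

def ssa_filter(y, window=24, rank=3):
    n = y.size
    k = n - window + 1
    X = np.column_stack([y[i:i + window] for i in range(k)])  # Hankel matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                 # low-rank part
    out, cnt = np.zeros(n), np.zeros(n)
    for j in range(k):                                        # diagonal averaging
        out[j:j + window] += Xr[:, j]
        cnt[j:j + window] += 1
    return out / cnt

rng = np.random.default_rng(0)
t = np.arange(288)                                  # e.g. 5-min counts over a day
flow = 300 + 120 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 25, t.size)
smooth = ssa_filter(flow)
print("std of removed component:", round(float(np.std(flow - smooth)), 2))
```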
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-01-01
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202
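A minimal sketch of the core construction that QWMK-ELM optimizes: a KELM trained on a fixed weighted sum of base kernels (here Gaussian plus polynomial), solving (K + I/C)α = y; the QPSO search over the weights and kernel parameters is omitted, and the data are placeholders.

```python
# Kernel ELM on a weighted combination of two base kernels.
import numpy as np

def gaussian(X, Y, gamma=1.0):
    d2 = ((X[:, None] - Y[None]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly(X, Y, degree=2, c=1.0):
    return (X @ Y.T + c) ** degree

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (120, 6))                       # e-nose feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(float) * 2 - 1    # binary targets in {-1, 1}

w, C = (0.7, 0.3), 10.0                              # kernel weights, regularizer
K = w[0] * gaussian(X, X) + w[1] * poly(X, X)        # composite kernel
alpha = np.linalg.solve(K + np.eye(len(y)) / C, y)   # KELM output weights

Xt = rng.normal(0, 1, (10, 6))
Kt = w[0] * gaussian(Xt, X) + w[1] * poly(Xt, X)
print(np.sign(Kt @ alpha))                           # predicted class labels
```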
Stylianou, Neophytos; Akbarov, Artur; Kontopantelis, Evangelos; Buchan, Iain; Dunn, Ken W
2015-08-01
Predicting mortality from burn injury has traditionally employed logistic regression models. Alternative machine learning methods have been introduced in some areas of clinical prediction as the necessary software and computational facilities have become accessible. Here we compare logistic regression and machine learning predictions of mortality from burn. An established logistic mortality model was compared to machine learning methods (artificial neural network, support vector machine, random forests and naïve Bayes) using a population-based (England & Wales) case-cohort registry. Predictive evaluation used: area under the receiver operating characteristic curve; sensitivity; specificity; positive predictive value and Youden's index. All methods had comparable discriminatory abilities, similar sensitivities, specificities and positive predictive values. Although some machine learning methods performed marginally better than logistic regression the differences were seldom statistically significant and clinically insubstantial. Random forests were marginally better for high positive predictive value and reasonable sensitivity. Neural networks yielded slightly better prediction overall. Logistic regression gives an optimal mix of performance and interpretability. The established logistic regression model of burn mortality performs well against more complex alternatives. Clinical prediction with a small set of strong, stable, independent predictors is unlikely to gain much from machine learning outside specialist research contexts. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform often exhibits low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. A metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, with no JNI bridging code demanded.
NASA Astrophysics Data System (ADS)
Bucak, T.; Trolle, D.; Andersen, H. E.; Thodsen, H.; Erdoğan, Ş.; Levi, E. E.; Filiz, N.; Jeppesen, E.; Beklioğlu, M.
2016-12-01
Inter- and intra-annual water level fluctuations and changes in water flow regime are intrinsic characteristics of Mediterranean lakes. However, considering the climate change projections for the water-limited Mediterranean region, where potential evapotranspiration exceeds precipitation, and with increased air temperatures and decreased precipitation, more dramatic water level declines in lakes and severe water scarcity problems are expected to occur in the future. Our study lake, Lake Beyşehir, the largest freshwater lake in the Mediterranean basin, is - like other Mediterranean lakes - under pressure due to water abstraction for irrigated crop farming and climatic changes, and integrated water level management is therefore required. We used an integrated modeling approach to predict the future water level of Lake Beyşehir in response to future changes in both climate and, potentially, land use, by linking the catchment model Soil and Water Assessment Tool (SWAT) with a Support Vector Machine Regression model (ɛ-SVR). We found that climate change projections caused enhanced potential evapotranspiration and reduced total runoff, whereas the effects of various land use scenarios within the catchment were comparatively minor. In all climate scenarios applied in the ɛ-SVR model, changes in hydrological processes caused a water level reduction, predicting, under the most pessimistic scenario, that the lake may dry out as early as the 2040s with the current outflow regulation. Based on model runs with optimum outflow management, a 9-60% reduction in outflow withdrawal is needed to prevent the lake from drying out by the end of this century. Our results indicate that shallow Mediterranean lakes may face a severe risk of drying out and loss of ecosystem value in the near future if the current intense water abstraction is maintained. Therefore, we conclude that outflow management in water-limited regions in a warmer and drier future and sustainable use of water sources are vitally important to sustain lake ecosystems and their ecosystem services.
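A hedged sketch of the model linkage: catchment outputs (standing in for SWAT results) feed an ɛ-SVR that predicts lake water level, using scikit-learn's SVR; all series and parameters below are invented placeholders.

```python
# epsilon-SVR predicting lake level from catchment-model outputs.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 360                                     # monthly steps
runoff = rng.gamma(2.0, 1.5, n)             # SWAT-style inputs (placeholders)
pet = 60 + 30 * np.sin(2 * np.pi * np.arange(n) / 12)   # potential ET
outflow = rng.uniform(0.5, 2.0, n)          # regulated withdrawal
X = np.column_stack([runoff, pet, outflow])
level = 1123 + np.cumsum(0.01 * (runoff - 0.02 * pet - outflow))  # fake level (m)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", epsilon=0.05, C=10.0))
model.fit(X[:300], level[:300])             # train on the first 25 years
pred = model.predict(X[300:])               # predict the final 5 years
print("RMSE (m):", np.sqrt(np.mean((pred - level[300:]) ** 2)))
```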
Ließ, Mareike; Schmidt, Johannes; Glaser, Bruno
2016-01-01
Tropical forests are significant carbon sinks and their soils' carbon storage potential is immense. However, little is known about the soil organic carbon (SOC) stocks of tropical mountain areas, whose complex soil-landscape and difficult accessibility pose a challenge to spatial analysis. The choice of methodology for spatial prediction is of high importance to improve the expected poor model results in cases of low predictor-response correlations. Four aspects were considered to improve model performance in predicting the SOC stocks of the organic layer of a tropical mountain forest landscape: different spatial predictor settings, predictor selection strategies, various machine learning algorithms and model tuning. Five machine learning algorithms (random forests, artificial neural networks, multivariate adaptive regression splines, boosted regression trees and support vector machines) were trained and tuned to predict SOC stocks from predictors derived from a digital elevation model and satellite image. Topographical predictors were calculated with a GIS search radius of 45 to 615 m. Finally, three predictor selection strategies were applied to the total set of 236 predictors. All machine learning algorithms, including the model tuning and predictor selection, were compared via five repetitions of a tenfold cross-validation. The boosted regression tree algorithm resulted in the overall best model. SOC stocks ranged between 0.2 and 17.7 kg m-2, displaying a huge variability, with diffuse insolation and curvatures of different scale guiding the spatial pattern. Predictor selection and model tuning improved the models' predictive performance in all five machine learning algorithms. The rather low number of selected predictors favours forward over backward selection procedures. Choosing predictors by their individual performance was outperformed by the two procedures which accounted for predictor interaction.
Development of a neural net paradigm that predicts simulator sickness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allgood, G.O.
1993-03-01
A disease exists that affects pilots and aircrew members who use Navy Operational Flight Training Systems. This malady, commonly referred to as simulator sickness and whose symptomatology closely aligns with that of motion sickness, can compromise the use of these systems because of a reduced utilization factor, negative transfer of training, and reduction in combat readiness. A report is submitted that develops an artificial neural network (ANN) and behavioral model that predicts the onset and level of simulator sickness in the pilots and aircrews who use these systems. It is proposed that the paradigm could be implemented in real time as a biofeedback monitor to reduce the risk to users of these systems. The model captures the neurophysiological impact of use (human-machine interaction) by developing a structure that maps the associative and nonassociative behavioral patterns (learned expectations) and vestibular (otolith and semicircular canals of the inner ear) and tactile interaction, derived from system acceleration profiles, onto an abstract space that predicts simulator sickness for a given training flight.
NASA Astrophysics Data System (ADS)
Fukayama, Osamu; Taniguchi, Noriyuki; Suzuki, Takafumi; Mabuchi, Kunihiko
We are developing a brain-machine interface (BMI) called "RatCar," a small vehicle controlled by the neural signals of a rat's brain. An unconfined adult rat with a set of bundled neural electrodes in the brain rides on the vehicle. Each bundle consists of four tungsten wires insulated with parylene polymer. These bundles were implanted in the primary motor and premotor cortices in both hemispheres of the brain. In this paper, methods and results for estimating locomotion speed and directional changes are described. Neural signals were recorded as the rat moved in a straight line and as it changed direction in a curve. Spike-like waveforms were then detected and classified into several clusters to calculate a firing rate for each neuron. The actual locomotion velocity and directional changes of the rat were recorded concurrently. Finally, the locomotion states were correlated with the neural firing rates using a simple linear model. As a result, abstract estimation of the locomotion velocity and directional changes was achieved.
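The "simple linear model" step can be sketched as an ordinary least-squares fit from binned firing rates to velocity. Everything below (bin counts, unit count, the synthetic velocity trace) is an illustrative assumption, not the authors' recordings or spike sorting.

```python
# Sketch: linear decoding of locomotion velocity from neural firing rates.
import numpy as np

rng = np.random.default_rng(0)
T, n_units = 500, 12                       # time bins x sorted units (assumed)
rates = rng.poisson(5.0, size=(T, n_units)).astype(float)
true_w = rng.normal(size=n_units)
velocity = rates @ true_w + rng.normal(scale=2.0, size=T)  # synthetic target

X = np.column_stack([rates, np.ones(T)])   # firing rates plus intercept
w, *_ = np.linalg.lstsq(X, velocity, rcond=None)
estimate = X @ w
print("correlation with recorded velocity:", np.corrcoef(estimate, velocity)[0, 1])
```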
Emerald: an object-based language for distributed programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, N.C.
1987-01-01
Distributed systems have become more common; however, constructing distributed applications remains a very difficult task. Numerous operating systems and programming languages have been proposed that attempt to simplify the programming of distributed applications. Here a programming language called Emerald is presented that simplifies distributed programming by extending the concepts of object-based languages to the distributed environment. Emerald supports a single model of computation: the object. Emerald objects include private entities such as integers and Booleans, as well as shared, distributed entities such as compilers, directories, and entire file systems. Emerald objects may move between machines in the system, but object invocation is location independent. The uniform semantic model used for describing all Emerald objects makes the construction of distributed applications in Emerald much simpler than in systems where the differences in implementation between local and remote entities are visible in the language semantics. Emerald incorporates a type system that deals only with the specification of objects - ignoring differences in implementation. Thus, two different implementations of the same abstraction may be freely mixed.
NASA Astrophysics Data System (ADS)
Shen, C.; Fang, K.
2017-12-01
Deep Learning (DL) methods have made revolutionary strides in recent years. A core value proposition of DL is that abstract notions and patterns can be extracted purely from data, without the need for domain expertise. Process-based models (PBM), on the other hand, can be regarded as repositories of human knowledge or hypotheses about how systems function. Here, through computational examples, we argue that there is merit in integrating PBMs with DL due to the imbalance and lack of data in many situations, especially in hydrology. We trained a deep-in-time neural network, the Long Short-Term Memory (LSTM), to learn soil moisture dynamics from the Soil Moisture Active Passive (SMAP) Level 3 product. We show that when PBM solutions are integrated into the LSTM, the network is able to better generalize across regions. The LSTM is able to better utilize PBM solutions than simpler statistical methods. Our results suggest PBMs have generalization value which should be carefully assessed and utilized. We also emphasize that when properly regularized, the deep network is robust and achieves superior testing performance compared to simpler methods.
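A minimal sketch of the kind of deep-in-time network described above, assuming PyTorch and synthetic forcing data; the SMAP inputs, PBM-derived features, and the study's regularization details are not reproduced.

```python
# Sketch: an LSTM regressor for a soil-moisture-like sequence task.
import torch
import torch.nn as nn

class LSTMRegressor(nn.Module):
    def __init__(self, n_inputs, n_hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, 1)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out).squeeze(-1)  # predict at every time step

x = torch.randn(32, 90, 8)                 # 32 sites, 90 days, 8 forcings (assumed)
y = torch.randn(32, 90)                    # synthetic soil moisture series
model = LSTMRegressor(n_inputs=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print("final training MSE:", float(loss))
```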
Will the digital computer transform classical mathematics?
Rotman, Brian
2003-08-15
Mathematics and machines have influenced each other for millennia. The advent of the digital computer introduced a powerfully new element that promises to transform the relation between them. This paper outlines the thesis that the effect of the digital computer on mathematics, already widespread, is likely to be radical and far-reaching. To articulate this claim, an abstract model of doing mathematics is introduced based on a triad of actors of which one, the 'agent', corresponds to the function performed by the computer. The model is used to frame two sorts of transformation. The first is pragmatic and involves the alterations and progressive colonization of the content and methods of enquiry of various mathematical fields brought about by digital methods. The second is conceptual and concerns a fundamental antagonism between the infinity enshrined in classical mathematics and physics (continuity, real numbers, asymptotic definitions) and the inherently real and material limit of processes associated with digital computation. An example which lies in the intersection of classical mathematics and computer science, the P=NP problem, is analysed in the light of this latter issue.
Simulation Platform: a cloud-based online simulation environment.
Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro
2011-09-01
For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have solely been designed to archive model files, but the databases should give users a chance to validate the models before downloading them. In this paper, we report our ongoing project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On each virtual machine, various software packages are pre-installed, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without the need to install any software but a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require multiple software packages. Copyright © 2011 Elsevier Ltd. All rights reserved.
Analytical calculation of vibrations of electromagnetic origin in electrical machines
NASA Astrophysics Data System (ADS)
McCloskey, Alex; Arrasate, Xabier; Hernández, Xabier; Gómez, Iratxo; Almandoz, Gaizka
2018-01-01
Electrical motors are widely used and are often required to satisfy comfort specifications. Thus, vibration response estimations are necessary to reach optimum machine designs. This work presents an improved analytical model to calculate the vibration response of an electrical machine. The stator and windings are modelled as a double circular cylindrical shell. As the stator is a laminated structure, orthotropic properties are applied to it. The values of those material properties are calculated according to the characteristics of the motor and the known material properties taken from previous works. The proposed model takes the axial direction into account, so that length is considered, as well as the contribution of the windings, which differs from one machine to another. These aspects make the model valuable for a wide range of electrical motor types. In order to validate the analytical calculation, natural frequencies are calculated and compared to those obtained by the Finite Element Method (FEM), giving relative errors below 10% for several circumferential and axial mode order combinations. The analytical vibration calculation is also validated against acceleration measurements in a real machine. The comparison shows good agreement for the proposed model, with the most important frequency components being of the same order of magnitude. A simplified two-dimensional model is also applied, and its results are not as satisfactory.
NASA Astrophysics Data System (ADS)
Niswatin, C.; Latief, M. A.; Suharyadi, S.
2018-02-01
This research aims to uncover how engineering students deal with composing abstracts for their final projects. The research applies a descriptive qualitative-quantitative design. The data were collected through questionnaires involving 104 engineering students, including alumni, at Politeknik Kota Malang, Indonesia. Furthermore, interviews were carried out to explain the details where necessary to support the primary data. It was found that the common problems faced by engineering students include 1) combining words into sentences, 2) identifying the most appropriate technical terms in engineering, and 3) applying grammar in context. To cope with these difficulties they relied on machine translation applications, supported by peer proofreaders. In addition, they engaged in personal tutoring with their lecturers considerably, often more than three times.
Context in Models of Human-Machine Systems
NASA Technical Reports Server (NTRS)
Callantine, Todd J.; Null, Cynthia H. (Technical Monitor)
1998-01-01
All human-machine systems models represent context. This paper proposes a theory of context through which models may be usefully related and integrated for design. The paper presents examples of context representation in various models, describes an application to developing models for the Crew Activity Tracking System (CATS), and advances context as a foundation for integrated design of complex dynamic systems.
SWIFT-Review: a text-mining workbench for systematic review.
Howard, Brian E; Phillips, Jason; Miller, Kyle; Tandon, Arpit; Mav, Deepak; Shah, Mihir R; Holmgren, Stephanie; Pelch, Katherine E; Walker, Vickie; Rooney, Andrew A; Macleod, Malcolm; Shah, Ruchir R; Thayer, Kristina
2016-05-23
There is growing interest in using machine learning approaches to priority-rank studies and reduce the human burden in screening literature when conducting systematic reviews. In addition, identifying addressable questions during the problem formulation phase of systematic review can be challenging, especially for topics having a large literature base. Here, we assess the performance of the SWIFT-Review priority ranking algorithm for identifying studies relevant to a given research question. We also explore the use of SWIFT-Review during problem formulation to identify, categorize, and visualize research areas that are data-rich/data-poor within a large literature corpus. Twenty case studies, including 15 public data sets, representing a range of complexity and size, were used to assess the priority ranking performance of SWIFT-Review. For each study, seed sets of manually annotated included and excluded titles and abstracts were used for machine training. The remaining references were then ranked for relevance using an algorithm that considers term frequency and latent Dirichlet allocation (LDA) topic modeling. This ranking was evaluated with respect to (1) the number of studies screened in order to identify 95% of known relevant studies and (2) the "Work Saved over Sampling" (WSS) performance metric. To assess SWIFT-Review for use in problem formulation, PubMed literature search results for 171 chemicals implicated as EDCs were uploaded into SWIFT-Review (264,588 studies) and categorized based on evidence stream and health outcome. Patterns of search results were surveyed and visualized using a variety of interactive graphics. Compared with the reported performance of other tools using the same datasets, the SWIFT-Review ranking procedure obtained the highest scores on 11 of the 15 public datasets. Overall, these results suggest that using machine learning to triage documents for screening has the potential to save, on average, more than 50% of the screening effort ordinarily required when using un-ordered document lists. In addition, the tagging and annotation capabilities of SWIFT-Review can be useful during the activities of scoping and problem formulation. Text-mining and machine learning software such as SWIFT-Review can be valuable tools to reduce the human screening burden and assist in problem formulation.
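The ranking pipeline can be sketched as term-frequency features augmented with LDA topic proportions feeding a classifier whose scores order the unscreened documents. This scikit-learn stand-in uses toy documents and is not the SWIFT-Review implementation.

```python
# Sketch: seed-set training, then relevance ranking of unscreened abstracts.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

docs = ["rat liver toxicity study", "survey of farming methods",
        "chemical exposure in rats", "opinion piece on policy"]
labels = [1, 0, 1, 0]                      # seed set: relevant / not relevant
unranked = ["liver enzyme response to chemical dosing", "crop yield economics"]

vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
features = np.hstack([X.toarray(), lda.fit_transform(X)])

clf = LogisticRegression().fit(features, labels)
Xu = vec.transform(unranked)
fu = np.hstack([Xu.toarray(), lda.transform(Xu)])
for doc, p in sorted(zip(unranked, clf.predict_proba(fu)[:, 1]),
                     key=lambda t: -t[1]):
    print(f"{p:.2f}  {doc}")               # screen highest-scoring first
```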
Machine learning molecular dynamics for the simulation of infrared spectra.
Gastegger, Michael; Behler, Jörg; Marquetand, Philipp
2017-10-01
Machine learning has emerged as an invaluable tool in many research areas. In the present work, we harness this power to predict highly accurate molecular infrared spectra with unprecedented computational efficiency. To account for vibrational anharmonic and dynamical effects - typically neglected by conventional quantum chemistry approaches - we base our machine learning strategy on ab initio molecular dynamics simulations. While these simulations are usually extremely time consuming even for small molecules, we overcome these limitations by leveraging the power of a variety of machine learning techniques, not only accelerating simulations by several orders of magnitude, but also greatly extending the size of systems that can be treated. To this end, we develop a molecular dipole moment model based on environment dependent neural network charges and combine it with the neural network potential approach of Behler and Parrinello. Contrary to the prevalent big data philosophy, we are able to obtain very accurate machine learning models for the prediction of infrared spectra based on only a few hundreds of electronic structure reference points. This is made possible through the use of molecular forces during neural network potential training and the introduction of a fully automated sampling scheme. We demonstrate the power of our machine learning approach by applying it to model the infrared spectra of a methanol molecule, n-alkanes containing up to 200 atoms and the protonated alanine tripeptide, which at the same time represents the first application of machine learning techniques to simulate the dynamics of a peptide. In all of these case studies we find an excellent agreement between the infrared spectra predicted via machine learning models and the respective theoretical and experimental spectra.
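For orientation, the standard route from a molecular dynamics dipole trajectory to an IR spectrum - the quantity the machine-learned dipole and potential models above are built to deliver cheaply - is the Fourier transform of the dipole autocorrelation function. The trajectory and timestep below are synthetic placeholders.

```python
# Sketch: IR spectrum from the autocorrelation of a (synthetic) dipole series.
import numpy as np

dt = 0.5e-15                               # 0.5 fs timestep (assumed)
t = np.arange(8192) * dt
mu = np.sin(2 * np.pi * 1.0e14 * t)        # one vibrational mode at 1e14 Hz
mu = mu + 0.1 * np.random.default_rng(0).normal(size=t.size)

mu = mu - mu.mean()
acf = np.correlate(mu, mu, mode="full")[mu.size - 1:]   # dipole autocorrelation
spectrum = np.abs(np.fft.rfft(acf * np.hanning(acf.size)))
freq_cm = np.fft.rfftfreq(acf.size, d=dt) / 2.99792458e10  # Hz -> cm^-1
peak = np.argmax(spectrum[1:]) + 1         # skip the DC bin
print(f"dominant band near {freq_cm[peak]:.0f} cm^-1")
```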
Luo, Gang; Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L
2017-08-29
To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient's weight kept rising in the past year). This process becomes infeasible with limited budgets. This study's goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care management allocation and pilot one model with care managers; and (3) perform simulations to estimate the impact of adopting Auto-ML on US patient outcomes. We are currently writing Auto-ML's design document. We intend to finish our study by around the year 2022. Auto-ML will generalize to various clinical prediction/classification problems. With minimal help from data scientists, health care researchers can use Auto-ML to quickly build high-quality models. This will boost wider use of machine learning in health care and improve patient outcomes. ©Gang Luo, Bryan L Stone, Michael D Johnson, Peter Tarczy-Hornoch, Adam B Wilcox, Sean D Mooney, Xiaoming Sheng, Peter J Haug, Flory L Nkoy. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 29.08.2017.
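One of the chores Auto-ML is meant to automate - choosing aggregation periods and operators for a temporal clinical attribute such as weight - can be sketched in pandas. The weight series and the candidate periods here are illustrative assumptions.

```python
# Sketch: candidate (period, operator) aggregations of a clinical attribute.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
weights = pd.Series(
    70 + np.cumsum(rng.normal(0.05, 0.3, size=365)),     # a drifting weight trace
    index=pd.date_range("2023-01-01", periods=365, freq="D"), name="kg")

for months in (3, 6, 12):                   # candidate aggregation periods
    cutoff = weights.index[-1] - pd.Timedelta(days=30 * months)
    window = weights[weights.index > cutoff]
    slope = np.polyfit(np.arange(len(window)), window.values, 1)[0]
    print(f"last {months:2d} mo: mean={window.mean():.1f} kg, "
          f"max={window.max():.1f} kg, trend={slope * 30:+.2f} kg/month")
```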
NASA Astrophysics Data System (ADS)
Abellán-Nebot, J. V.; Liu, J.; Romero, F.
2009-11-01
The State Space modelling approach has been recently proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation, without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, cutting-tool deflections, etc. This paper shows the limitations of the current State Space model through an experimental case study where the effects of spindle thermal expansion, cutting-tool flank wear and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations and its potential benefits.
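For readers unfamiliar with the formalism, the generic state space form used in multistage variation propagation can be written as below; the notation follows the standard stream-of-variation convention, and the paper's extended operation-variation terms are not shown.

```latex
% A sketch of the generic MMP state space form (notation assumed):
% x_k: part deviation state after stage k; u_k: fixture/datum inputs;
% w_k, v_k: unmodeled noise; y_k: measurements at stage k.
\[
  x_k = A_k x_{k-1} + B_k u_k + w_k, \qquad
  y_k = C_k x_k + v_k
\]
```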
Huebner, Philip A.; Willits, Jon A.
2018-01-01
Previous research has suggested that distributional learning mechanisms may contribute to the acquisition of semantic knowledge. However, distributional learning mechanisms, statistical learning, and contemporary “deep learning” approaches have been criticized for being incapable of learning the kind of abstract and structured knowledge that many think is required for acquisition of semantic knowledge. In this paper, we show that recurrent neural networks, trained on noisy naturalistic speech to children, do in fact learn what appears to be abstract and structured knowledge. We trained two types of recurrent neural networks (Simple Recurrent Network, and Long Short-Term Memory) to predict word sequences in a 5-million-word corpus of speech directed to children ages 0–3 years old, and assessed what semantic knowledge they acquired. We found that learned internal representations are encoding various abstract grammatical and semantic features that are useful for predicting word sequences. Assessing the organization of semantic knowledge in terms of the similarity structure, we found evidence of emergent categorical and hierarchical structure in both models. We found that the Long Short-term Memory (LSTM) and SRN are both learning very similar kinds of representations, but the LSTM achieved higher levels of performance on a quantitative evaluation. We also trained a non-recurrent neural network, Skip-gram, on the same input to compare our results to the state-of-the-art in machine learning. We found that Skip-gram achieves relatively similar performance to the LSTM, but is representing words more in terms of thematic compared to taxonomic relations, and we provide reasons why this might be the case. Our findings show that a learning system that derives abstract, distributed representations for the purpose of predicting sequential dependencies in naturalistic language may provide insight into emergence of many properties of the developing semantic system. PMID:29520243
PredicT-ML: a tool for automating machine learning model building with big clinical data.
Luo, Gang
2016-01-01
Predictive modeling is fundamental to transforming large clinical data sets, or "big clinical data," into actionable knowledge for various healthcare applications. Machine learning is a major predictive modeling approach, but two barriers make its use in healthcare challenging. First, a machine learning tool user must choose an algorithm and assign one or more model parameters called hyper-parameters before model training. The algorithm and hyper-parameter values used typically impact model accuracy by over 40 %, but their selection requires many labor-intensive manual iterations that can be difficult even for computer scientists. Second, many clinical attributes are repeatedly recorded over time, requiring temporal aggregation before predictive modeling can be performed. Many labor-intensive manual iterations are required to identify a good pair of aggregation period and operator for each clinical attribute. Both barriers result in time and human resource bottlenecks, and preclude healthcare administrators and researchers from asking a series of what-if questions when probing opportunities to use predictive models to improve outcomes and reduce costs. This paper describes our design of and vision for PredicT-ML (prediction tool using machine learning), a software system that aims to overcome these barriers and automate machine learning model building with big clinical data. The paper presents the detailed design of PredicT-ML. PredicT-ML will open the use of big clinical data to thousands of healthcare administrators and researchers and increase the ability to advance clinical research and improve healthcare.
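The algorithm-plus-hyper-parameter search that PredicT-ML aims to automate can be sketched with a randomized search over a small candidate pool; this generic scikit-learn stand-in is not the PredicT-ML system.

```python
# Sketch: automated selection across algorithms and hyper-parameter values.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, n_features=30, random_state=0)
candidates = [
    (RandomForestClassifier(random_state=0),
     {"n_estimators": [100, 300], "max_depth": [None, 5, 10]}),
    (LogisticRegression(max_iter=5000),
     {"C": [0.01, 0.1, 1.0, 10.0]}),
]
best = None
for model, grid in candidates:
    search = RandomizedSearchCV(model, grid, n_iter=4, cv=5, random_state=0)
    search.fit(X, y)
    if best is None or search.best_score_ > best[0]:
        best = (search.best_score_, search.best_estimator_)
print(f"selected: {best[1]} (CV accuracy {best[0]:.3f})")
```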
Omics approaches to individual variation: modeling networks and the virtual patient.
Lehrach, Hans
2016-09-01
Every human is unique. We differ in our genomes, environment, behavior, disease history, and past and current medical treatment-a complex catalog of differences that often leads to variations in the way each of us responds to a particular therapy. We argue here that true personalization of drug therapies will rely on "virtual patient" models based on a detailed characterization of the individual patient by molecular, imaging, and sensor techniques. The models will be based, wherever possible, on the molecular mechanisms of disease processes and drug action but can also expand to hybrid models including statistics/machine learning/artificial intelligence-based elements trained on available data to address therapeutic areas or therapies for which insufficient information on mechanisms is available. Depending on the disease, its mechanisms, and the therapy, virtual patient models can be implemented at a fairly high level of abstraction, with molecular models representing cells, cell types, or organs relevant to the clinical question, interacting not only with each other but also the environment. In the future, "virtual patient/in-silico self" models may not only become a central element of our health care system, reducing otherwise unavoidable mistakes and unnecessary costs, but also act as "guardian angels" accompanying us through life to protect us against dangers and to help us to deal intelligently with our own health and wellness.
NASA Astrophysics Data System (ADS)
Kumano, Teruhisa
As is well known, two of the fundamental processes that give rise to voltage collapse in power systems are the on-load tap changers of transformers and the dynamic characteristics of loads such as induction machines. It is well established that, of these two, the former drives a slower collapse while the latter drives a faster one. However, in realistic situations, the load level of each induction machine is not uniform, and it is to be expected that only a part of the loads collapses first, followed by the collapse of each load that did not go unstable during the preceding collapses. In such situations the overall equivalent collapse behavior viewed from the bulk transmission level becomes somewhat different from the simple collapse driven by one aggregated induction machine. This paper studies the process of cascaded voltage collapse among many induction machines by time simulation, where the load distribution on a feeder line is modeled by several hundred induction machines and static impedance loads. It is shown that in some cases voltage collapse really does cascade among induction machines, where the macroscopic load dynamics viewed from the upper voltage level produce a slower collapse than expected from the aggregated load model. Also shown are the effects of induction machine protection, which likewise slows the collapse.
Predicting Mouse Liver Microsomal Stability with “Pruned” Machine Learning Models and Public Data
Perryman, Alexander L.; Stratton, Thomas P.; Ekins, Sean; Freundlich, Joel S.
2015-01-01
Purpose: Mouse efficacy studies are a critical hurdle to advance translational research of potential therapeutic compounds for many diseases. Although mouse liver microsomal (MLM) stability studies are not a perfect surrogate for in vivo studies of metabolic clearance, they are the initial model system used to assess metabolic stability. Consequently, we explored the development of machine learning models that can enhance the probability of identifying compounds possessing MLM stability. Methods: Published assays on MLM half-life values were identified in PubChem, reformatted, and curated to create a training set with 894 unique small molecules. These data were used to construct machine learning models assessed with internal cross-validation, external tests with a published set of antitubercular compounds, and independent validation with an additional diverse set of 571 compounds (PubChem data on percent metabolism). Results: "Pruning" out the moderately unstable/moderately stable compounds from the training set produced models with superior predictive power. Bayesian models displayed the best predictive power for identifying compounds with a half-life ≥1 hour. Conclusions: Our results suggest the pruning strategy may be of general benefit to improve test set enrichment and provide machine learning models with enhanced predictive value for the MLM stability of small organic molecules. This study represents the most exhaustive study to date of using machine learning approaches with MLM data from public sources. PMID:26415647
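The pruning idea can be sketched as dropping the ambiguous middle band of the training data before fitting; here GaussianNB stands in for the Bayesian models used in the study, and the descriptors, half-lives, and thresholds are synthetic assumptions.

```python
# Sketch: pruning moderately stable/unstable compounds before training.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(894, 20))             # stand-in molecular descriptors
half_life = np.clip(X[:, 0] * 30 + 60 + rng.normal(scale=20, size=894), 1, None)
y = (half_life >= 60).astype(int)          # stable = half-life >= 1 h

keep = (half_life < 40) | (half_life > 80)  # prune the ambiguous middle band
for name, Xt, yt in [("full", X, y), ("pruned", X[keep], y[keep])]:
    acc = cross_val_score(GaussianNB(), Xt, yt, cv=5).mean()
    print(f"{name:6s} training set: CV accuracy {acc:.3f}")
```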
Calculation of skiving cutter blade
NASA Astrophysics Data System (ADS)
Xu, Lei; Lao, Qicheng; Shang, Zhiyi
2018-05-01
The gear skiving method is a gear machining technology with high efficiency and high precision. A method for calculating the blade of a skiving cutter for machining an involute gear is proposed. Based on the principle of gear meshing and the kinematic relationship between the machined flank and the skiving cutter, a mathematical model of skiving for machining an internal gear is built, and the gear tooth surface is obtained by solving the meshing equation. The mathematical model of the cutter's blade curve is obtained by intersecting a properly chosen rake face with the cutter tooth surface. Simulation analysis of the skiving process verifies the feasibility and correctness of the skiving cutter blade design.
Design and finite element analysis of micro punch CNC machine modeling for medical devices
NASA Astrophysics Data System (ADS)
Pranoto, Sigiet Haryo; Mahardika, Muslim
2018-03-01
Research on micromanufacturing has been conducted. Miniaturization and weight reduction of various industrial products continue to be developed, and machines with high accuracy and good machining quality are increasingly needed. This research covers the design and simulation of a micro punch CNC machine with a pneumatic system, using Abaqus. The article concerns the modeling and simulation of punching a titanium miniplate of 500 µm thickness at a pressure of 0.6 MPa. The study analyzes von Mises stress, safety factor, and displacement while the machine bears the punching load. The results give a punching reaction force of 0.5 MPa at the punch tip and a maximum displacement of 3.237 × 10-1 mm. The safety factor is over 12, which is considered safe for the manufacturing process.
Solving the Cauchy-Riemann equations on parallel computers
NASA Technical Reports Server (NTRS)
Fatoohi, Raad A.; Grosch, Chester E.
1987-01-01
Discussed is the implementation of a single algorithm on three parallel-vector computers. The algorithm is a relaxation scheme for the solution of the Cauchy-Riemann equations, a set of coupled first-order partial differential equations. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, an SIMD machine with 16K bit-serial processors; the FLEX/32, an MIMD machine with 20 processors; and the CRAY/2, an MIMD machine with four vector processors. The machine architectures are briefly described. The implementation of the algorithm is discussed in relation to these architectures, and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Conclusions are presented.
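Because solutions of the Cauchy-Riemann system are harmonic conjugates, one way to see what "relaxation" means here is Jacobi iteration on the equivalent Laplace problem. The serial sketch below is a generic illustration, not the paper's coupled first-order scheme or its MPP/FLEX/32/CRAY-2 implementations.

```python
# Sketch: serial Jacobi relaxation toward Laplace's equation on a square grid.
import numpy as np

n = 64
u = np.zeros((n, n))
u[0, :] = 1.0                              # boundary condition on one edge
for sweep in range(2000):
    # each interior point relaxes toward the average of its four neighbours
    interior = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
    change = np.abs(interior - u[1:-1, 1:-1]).max()
    u[1:-1, 1:-1] = interior
    if change < 1e-6:
        break
print(f"converged after {sweep + 1} sweeps, max change {change:.2e}")
```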
Reversibility in Quantum Models of Stochastic Processes
NASA Astrophysics Data System (ADS)
Gier, David; Crutchfield, James; Mahoney, John; James, Ryan
Natural phenomena such as time series of neural firing, orientation of layers in crystal stacking and successive measurements in spin-systems are inherently probabilistic. The provably minimal classical models of such stochastic processes are ɛ-machines, which consist of internal states, transition probabilities between states and output values. The topological properties of the ɛ-machine for a given process characterize the structure, memory and patterns of that process. However ɛ-machines are often not ideal because their statistical complexity (Cμ) is demonstrably greater than the excess entropy (E) of the processes they represent. Quantum models (q-machines) of the same processes can do better in that their statistical complexity (Cq) obeys the relation Cμ ≥ Cq ≥ E. q-machines can be constructed to consider longer lengths of strings, resulting in greater compression. With code-words of sufficiently long length, the statistical complexity becomes time-symmetric - a feature apparently novel to this quantum representation. This result has ramifications for compression of classical information in quantum computing and quantum communication technology.
Accuracy comparison among different machine learning techniques for detecting malicious codes
NASA Astrophysics Data System (ADS)
Narang, Komal
2016-03-01
In this paper, a machine learning based model for malware detection is proposed. It can detect newly released malware, i.e., zero-day attacks, by analyzing operation codes on the Android operating system. The accuracies of Naïve Bayes, Support Vector Machine (SVM), and Neural Network classifiers for detecting malicious code are compared for the proposed model. In the experiment, 400 benign files, 100 system files, and 500 malicious files were used to construct the model. The model yields its best accuracy, 88.9%, when a neural network is used as the classifier, achieving 95% sensitivity and 82.8% specificity.
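The three-classifier comparison can be sketched by treating opcode sequences as text and cross-validating Naive Bayes, an SVM, and a small neural network over n-gram counts; the opcode strings and labels below are toy placeholders, not the paper's Android dataset.

```python
# Sketch: comparing classifiers on opcode n-gram features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

samples = ["mov add mov ret", "push call pop ret", "xor jmp xor jmp",
           "mov mov add ret", "xor xor jmp call", "push pop call ret"] * 20
labels = [0, 0, 1, 0, 1, 0] * 20           # 1 = malicious (toy labels)

for name, clf in [("Naive Bayes", MultinomialNB()),
                  ("SVM", SVC()),
                  ("Neural net", MLPClassifier(max_iter=2000, random_state=0))]:
    pipe = make_pipeline(CountVectorizer(ngram_range=(1, 2)), clf)
    acc = cross_val_score(pipe, samples, labels, cv=5).mean()
    print(f"{name:12s} CV accuracy {acc:.3f}")
```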
Uses of the Westrup brush machine
Jill Barbour
2002-01-01
The Westrup brush machine can be used as the first step in the seed conditioning process. Even though there are various sizes of the machine, only the laboratory model (LA-H) is described. The machine is designed to separate seed from pods or flowers, de-wing tree seed, remove appendages or hairs from seed, split twin seed, de-lint cotton seed, scarify hard coated...
A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM
NASA Astrophysics Data System (ADS)
Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan
2018-03-01
In order to make up for the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on three DGA ratios and a particle swarm optimization (PSO)-optimized support vector machine (SVM) is proposed. The standard SVM is extended to a nonlinear, multi-class SVM; a PSO-optimized SVM multi-classification model is established; and transformer fault diagnosis is conducted in combination with the cross-validation principle. The fault diagnosis results show that the average accuracy of the proposed method is better than that of the standard support vector machine and the genetic algorithm support vector machine, proving that the proposed method can effectively improve the accuracy of transformer fault diagnosis.
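A bare-bones version of the PSO-SVM idea - particles searching over (C, gamma) scored by cross-validated accuracy - can be sketched as follows; the swarm constants and synthetic three-class data are illustrative assumptions rather than the paper's DGA ratio features.

```python
# Sketch: particle swarm optimization of SVM hyper-parameters.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=3, n_informative=3,
                           n_redundant=0, n_classes=3, random_state=0)
rng = np.random.default_rng(0)

def fitness(p):                            # p = (log10 C, log10 gamma)
    clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
    return cross_val_score(clf, X, y, cv=5).mean()

pos = rng.uniform(-3, 3, size=(10, 2))     # 10 particles in log space
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()
for it in range(20):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()
print(f"best C=10^{gbest[0]:.2f}, gamma=10^{gbest[1]:.2f}, "
      f"CV accuracy {pbest_f.max():.3f}")
```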
Modeling the Car Crash Crisis Management System Using HiLA
NASA Astrophysics Data System (ADS)
Hölzl, Matthias; Knapp, Alexander; Zhang, Gefei
An aspect-oriented modeling approach to the Car Crash Crisis Management System (CCCMS) using the High-Level Aspect (HiLA) language is described. HiLA is a language for expressing aspects for UML static structures and UML state machines. In particular, HiLA supports both a static graph transformational and a dynamic approach of applying aspects. Furthermore, it facilitates methodologically turning use case descriptions into state machines: for each main success scenario, a base state machine is developed; all extensions to this main success scenario are covered by aspects. Overall, the static structure of the CCCMS is modeled in 43 classes, the main success scenarios in 13 base machines, the use case extensions in 47 static and 31 dynamic aspects, most of which are instantiations of simple aspect templates.
UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces
NASA Technical Reports Server (NTRS)
Shiffman, Smadar; Degani, Asaf; Heymann, Michael
2004-01-01
In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.
Narula, Sukrit; Shameer, Khader; Salem Omar, Alaa Mabrouk; Dudley, Joel T; Sengupta, Partho P
2016-11-29
Machine-learning models may aid cardiac phenotypic recognition by using features of cardiac tissue deformation. This study investigated the diagnostic value of a machine-learning framework that incorporates speckle-tracking echocardiographic data for automated discrimination of hypertrophic cardiomyopathy (HCM) from the physiological hypertrophy seen in athletes (ATH). Expert-annotated speckle-tracking echocardiographic datasets obtained from 77 ATH and 62 HCM patients were used for developing an automated system. An ensemble machine-learning model with 3 different machine-learning algorithms (support vector machines, random forests, and artificial neural networks) was developed, and a majority voting method was used for conclusive predictions with further K-fold cross-validation. Feature selection using an information gain (IG) algorithm revealed that volume was the best predictor for differentiating between HCM and ATH (IG = 0.24), followed by mid-left ventricular segmental (IG = 0.134) and average longitudinal strain (IG = 0.131). The ensemble machine-learning model showed increased sensitivity and specificity compared with the early-to-late diastolic transmitral velocity ratio (p < 0.01), average early diastolic tissue velocity (e') (p < 0.01), and strain (p = 0.04). Because ATH were younger, adjusted analysis was undertaken in younger HCM patients and compared with ATH with left ventricular wall thickness >13 mm. In this subgroup analysis, the automated model continued to show equal sensitivity, but increased specificity relative to the early-to-late diastolic transmitral velocity ratio, e', and strain. Our results suggested that machine-learning algorithms can assist in the discrimination of physiological versus pathological patterns of hypertrophic remodeling. This effort represents a step toward the development of a real-time, machine-learning-based system for automated interpretation of echocardiographic images, which may help novice readers with limited experience. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
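The majority-voting ensemble described above maps directly onto a hard-voting classifier over SVM, random forest, and neural network members; the synthetic features below stand in for the speckle-tracking measurements.

```python
# Sketch: a majority-voting ensemble with K-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# 139 synthetic subjects stand in for 77 ATH + 62 HCM patients
X, y = make_classification(n_samples=139, n_features=20, random_state=0)
ensemble = VotingClassifier(
    estimators=[("svm", SVC()),
                ("rf", RandomForestClassifier(random_state=0)),
                ("ann", MLPClassifier(max_iter=2000, random_state=0))],
    voting="hard")                          # majority vote across members
print("K-fold CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```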
Producibility and Serviceability of Kevlar-49 Structures Made on Hot Layup Tools
1975-05-01
The report describes changes for a typical airframe composite part and establishes improved machining practices for Kevlar-49; it also demonstrates the low-cost aspects of using Hot Layup Tools (HLT) to fabricate composite structures. Keywords: Composite Materials; Inlet Fairing; Helicopters; Hot Layup Tools (HLT); Kevlar-49.
Flow Instability Tests for a Particle Bed Reactor Nuclear Thermal Rocket Fuel Element
1993-05-01
The software required DOS 2.0 with GWBASIC or higher (DOS 5.0 was installed on the machine). Since the source code was written in BASIC, it was easy to make modifications. Approved for public release; distribution unlimited. (339 pages)
Automated Virtual Machine Introspection for Host-Based Intrusion Detection
2009-03-01
Boxes represent the code and data sections of each process in memory, with arrows representing hooks planted by malware to jump to the malware code... a useful indication of intrusion, it is also susceptible to mimicry and concurrency attacks [Pro03, Wat07]. Additionally, most research abstracts away... sequence of system calls that accomplishes his or her intent [WS02]. This "mimicry attack" takes advantage of the fact that many HIDS discard the pa
Belekar, Vilas; Lingineni, Karthik; Garg, Prabha
2015-01-01
The breast cancer resistance protein (BCRP) is an important transporter, and its inhibitors play an important role in cancer treatment by improving the oral bioavailability as well as the blood-brain barrier (BBB) permeability of anticancer drugs. In this work, computational models were developed to classify compounds as BCRP inhibitors or non-inhibitors. Various machine learning approaches, such as support vector machine (SVM), k-nearest neighbor (k-NN), and artificial neural network (ANN), were used to develop the models. The Matthews correlation coefficients (MCC) of the models developed using ANN, k-NN and SVM are 0.67, 0.71 and 0.77, and the prediction accuracies are 85.2%, 88.3% and 90.8%, respectively. The developed models were tested with a test set of 99 compounds and further validated with an external set of 98 compounds. Distribution plot analysis and various machine learning models were also developed based on drug-likeness descriptors. The applicability domain was used to check the prediction reliability for new molecules.
Inverse Problems in Geodynamics Using Machine Learning Algorithms
NASA Astrophysics Data System (ADS)
Shahnas, M. H.; Yuen, D. A.; Pysklywec, R. N.
2018-01-01
During the past few decades numerical studies have been widely employed to explore the style of circulation and mixing in the mantle of Earth and other planets. However, in geodynamical studies there are many uncertain properties from mineral physics, geochemistry, and petrology in these numerical models. Machine learning, as a computational statistics-related technique and a subfield of artificial intelligence, has rapidly emerged in many fields of science and engineering. We focus here on the application of supervised machine learning (SML) algorithms to predictions of mantle flow processes. Specifically, we emphasize estimating mantle properties by employing machine learning techniques to solve an inverse problem. Using snapshots of numerical convection models as training samples, we enable machine learning models to determine the magnitude of the spin transition-induced density anomalies that can cause flow stagnation at mid-mantle depths. Employing support vector machine algorithms, we show that SML techniques can successfully predict the magnitude of mantle density anomalies and can also be used in characterizing mantle flow patterns. The technique can be extended to more complex geodynamic problems in mantle dynamics by employing deep learning algorithms to put constraints on properties such as viscosity, elastic parameters, and the nature of thermal and chemical anomalies.
NASA Astrophysics Data System (ADS)
Okokpujie, Imhade Princess; Ikumapayi, Omolayo M.; Okonkwo, Ugochukwu C.; Salawu, Enesi Y.; Afolalu, Sunday A.; Dirisu, Joseph O.; Nwoke, Obinna N.; Ajayi, Oluseyi O.
2017-12-01
In recent machining operations, tool life is one of the most demanding considerations in the production process, especially in the automotive industry. The aim of this paper is to study tool wear on HSS in end milling of aluminium 6061 alloy. The experiments were carried out to investigate tool wear against the machining parameters and to develop a mathematical model using response surface methodology. The machining parameters selected for the experiment are spindle speed (N), feed rate (f), axial depth of cut (a), and radial depth of cut (r). The experiment was designed using a central composite design (CCD) in which 31 samples were run on a SIEG 3/10/0010 CNC end milling machine. After each experiment the cutting tool was measured using a scanning electron microscope (SEM). The optimum machining parameter combination of spindle speed 2500 rpm, feed rate 200 mm/min, axial depth of cut 20 mm, and radial depth of cut 1.0 mm was found to achieve the minimum tool wear of 0.213 mm. The mathematical model developed predicted the tool wear with 99.7% accuracy, which is within the acceptable range for tool wear prediction.
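Response surface methodology fits a second-order polynomial in the four machining parameters; a sketch with scikit-learn follows, where the 31-run design matrix and wear response are random placeholders rather than the paper's central composite design data.

```python
# Sketch: second-order response surface for tool wear over four parameters.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
# columns: spindle speed N, feed rate f, axial depth a, radial depth r (assumed ranges)
X = rng.uniform([1000, 100, 5, 0.5], [3000, 300, 25, 1.5], size=(31, 4))
wear = 0.2 + 1e-5 * X[:, 0] + 1e-4 * X[:, 1] + rng.normal(0, 0.01, 31)  # toy response

rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, wear)
print("R^2 on design points:", rsm.score(X, wear))
print("predicted wear at the reported optimum settings:",
      rsm.predict([[2500, 200, 20, 1.0]])[0])
```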
NASA Astrophysics Data System (ADS)
Nesvold, E. R.; Erasmus, N.; Greenberg, A.; van Heerden, E.; Galache, J. L.; Dahlstrom, E.; Marchis, F.
2017-02-01
We present a machine learning model that can predict which asteroid deflection technology would be most effective, given the likely population of impactors. Our model can help policy and funding agencies prioritize technology development.
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
This document is intended to help education and training institutions deliver the Machine Tool Advanced Skills Technology (MAST) curriculum to a variety of individuals and organizations. MAST consists of industry-specific skill standards and model curricula for 15 occupational specialty areas within the U.S. machine tool and metals-related…
Vane Pump Casing Machining of Dumpling Machine Based on CAD/CAM
NASA Astrophysics Data System (ADS)
Huang, Yusen; Li, Shilong; Li, Chengcheng; Yang, Zhen
The automatic dumpling forming machine, also called a dumpling machine, makes dumplings through mechanical motions. This paper adopts a stuffing delivery mechanism featuring an improved, specially-designed vane pump casing, which contributes to the formation of dumplings. Its 3D modeling in Pro/E software, machining process planning, milling path optimization, simulation based on UG, and compilation of the post-processing program are introduced and verified. The results indicate that the adoption of CAD/CAM offers firms the potential to pursue new innovative strategies.
Characterizing the (Perceived) Newsworthiness of Health Science Articles: A Data-Driven Approach.
Zhang, Ye; Willis, Erin; Paul, Michael J; Elhadad, Noémie; Wallace, Byron C
2016-09-22
Health science findings are primarily disseminated through manuscript publications. Information subsidies are used to communicate newsworthy findings to journalists in an effort to earn mass media coverage and further disseminate health science research to mass audiences. Journal editors and news journalists then select which news stories receive coverage and thus public attention. This study aims to identify attributes of published health science articles that correlate with (1) journal editor issuance of press releases and (2) mainstream media coverage. We constructed four novel datasets to identify factors that correlate with press release issuance and media coverage. These corpora include thousands of published articles, subsets of which received press release or mainstream media coverage. We used statistical machine learning methods to identify correlations between words in the science abstracts and press release issuance and media coverage. Further, we used a topic modeling-based machine learning approach to uncover latent topics predictive of the perceived newsworthiness of science articles. Both press release issuance for, and media coverage of, health science articles are predictable from corresponding journal article content. For the former task, we achieved average areas under the curve (AUCs) of 0.666 (SD 0.019) and 0.882 (SD 0.018) on two separate datasets, comprising 3024 and 10,760 articles, respectively. For the latter task, models realized mean AUCs of 0.591 (SD 0.044) and 0.783 (SD 0.022) on two datasets, in this case containing 422 and 28,910 pairs, respectively. We report the most-predictive words and topics for press release issuance and news coverage. We have presented a novel data-driven characterization of content that renders health science "newsworthy." The analysis provides new insights into the news coverage selection process. For example, it appears epidemiological papers concerning common behaviors (eg, alcohol consumption) tend to receive media attention.
Rosen's (M,R) system in Unified Modelling Language.
Zhang, Ling; Williams, Richard A; Gatherer, Derek
2016-01-01
Robert Rosen's (M,R) system is an abstract biological network architecture that is allegedly non-computable on a Turing machine. If (M,R) is truly non-computable, there are serious implications for the modelling of large biological networks in computer software. A body of work has now accumulated addressing Rosen's claim concerning (M,R) by attempting to instantiate it in various software systems. However, a conclusive refutation has remained elusive, principally since none of the attempts to date have unambiguously avoided the critique that they have altered the properties of (M,R) in the coding process, producing merely approximate simulations of (M,R) rather than true computational models. In this paper, we use the Unified Modelling Language (UML), a diagrammatic notation standard, to express (M,R) as a system of objects having attributes, functions and relations. We believe that this instantiates (M,R) in such a way that none of the original properties of the system are corrupted in the process. Crucially, we demonstrate that (M,R) as classically represented in the relational biology literature is implicitly a UML communication diagram. Furthermore, since UML is formally compatible with object-oriented computing languages, instantiation of (M,R) in UML strongly implies its computability in object-oriented coding languages. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
A Novel Hyperspectral Microscopic Imaging System for Evaluating Fresh Degree of Pork
Xu, Yi; Chen, Quansheng; Liu, Yan; Sun, Xin; Huang, Qiping; Ouyang, Qin; Zhao, Jiewen
2018-01-01
Abstract This study proposes a rapid microscopic examination method for pork freshness evaluation using a self-assembled hyperspectral microscopic imaging (HMI) system together with feature extraction algorithms and pattern recognition methods. Pork samples were stored for 0 to 5 days, and sample freshness was divided into three levels determined by total volatile basic nitrogen (TVB-N) content. Hyperspectral microscopic images of the samples were acquired by the HMI system and processed in the following steps for further analysis. First, characteristic hyperspectral microscopic images were extracted using principal component analysis (PCA), and texture features were then selected based on the gray level co-occurrence matrix (GLCM). Next, the dimensionality of the feature data was reduced by Fisher discriminant analysis (FDA) for building the classification models. Finally, compared with the linear discriminant analysis (LDA) and support vector machine (SVM) models, the back propagation artificial neural network (BP-ANN) model achieved the best freshness classification, with 100% accuracy on the extracted data. The results confirm that the fabricated HMI system combined with multivariate algorithms is able to evaluate the freshness of pork accurately at the microscopic level, which plays an important role in animal food quality control. PMID:29805285
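The reported chain (PCA → GLCM texture features → FDA → classifier) maps onto widely available tools; the sketch below is a hedged approximation, not the paper's implementation. `cubes` and `labels` are synthetic stand-ins for the hyperspectral image cubes and TVB-N-derived freshness classes, and scikit-learn's MLPClassifier stands in for the BP-ANN.

```python
# Hedged sketch of the PCA -> GLCM -> FDA -> classifier chain; stand-in data.
import numpy as np
from sklearn.decomposition import PCA
from skimage.feature import graycomatrix, graycoprops
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier  # stand-in for BP-ANN

def glcm_features(cube: np.ndarray) -> np.ndarray:
    # PCA across the spectral axis; the PC1 image serves as the
    # "characteristic" hyperspectral microscopic image.
    rows, cols, bands = cube.shape
    pc1 = PCA(n_components=1).fit_transform(cube.reshape(-1, bands))
    pc1 = pc1.reshape(rows, cols)
    img = np.uint8(255 * (pc1 - pc1.min()) / (np.ptp(pc1) + 1e-9))
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2], levels=256)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
cubes = [rng.random((64, 64, 40)) for _ in range(12)]  # stand-in image cubes
labels = np.repeat([0, 1, 2], 4)                       # three freshness levels

X = np.stack([glcm_features(c) for c in cubes])
X_fda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, labels)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X_fda, labels)
```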
NASA Astrophysics Data System (ADS)
Brereton, Margot Felicity
A series of short engineering exercises and design projects was created to help students learn to apply abstract knowledge to physical experiences with hardware. The exercises involved designing machines from kits of materials and dissecting and analyzing familiar household products. Students worked in teams, bringing their knowledge of engineering fundamentals to bear during the activities. Videotape analysis was used to identify and characterize the ways in which hardware contributed to learning fundamental concepts. Structural and qualitative analyses of the videotaped activities were undertaken. Structural analysis involved counting references to theory and hardware and measuring the extent to which the two were interleaved in activity. The analysis found much more discussion linking fundamental concepts to hardware in some activities than in others, and showed that the interleaving of references to theory and hardware in activity is observable and quantifiable. Qualitative analysis was used to investigate the dialog linking concepts and hardware. Students were found to advance their designs and their understanding of engineering fundamentals through a negotiation process in which they pitted abstract concepts against hardware behavior. Through this process, students sorted out theoretical assumptions and causal relations. In addition, they discovered design assumptions, functional connections, and physical embodiments of abstract concepts in hardware, developing a repertoire of familiar hardware components and machines. Hardware was found to be integral to learning, affecting both the course of inquiry and the dynamics of group interaction. Several case studies are presented to illustrate the processes at work. The research illustrates the importance of working across the boundary between abstractions and experiences with hardware in order to learn engineering and the physical sciences. The research findings are: (a) the negotiation process by which students discover fundamental concepts in hardware (and three central causes of negotiation breakdown); (b) a characterization of the ways that material systems contribute to learning activities (the seven roles of hardware in learning); (c) the characteristics of activities that support discovering fundamental concepts in hardware (plus several engineering exercises); and (d) a research methodology to examine how students learn in practice.
Machine learning in cardiovascular medicine: are we there yet?
Shameer, Khader; Johnson, Kipp W; Glicksberg, Benjamin S; Dudley, Joel T; Sengupta, Partho P
2018-01-19
Artificial intelligence (AI) broadly refers to analytical algorithms that iteratively learn from data, allowing computers to find hidden insights without being explicitly programmed where to look. These include a family of operations encompassing machine learning, cognitive learning, deep learning and reinforcement learning-based methods that can be used to integrate and interpret complex biomedical and healthcare data in scenarios where traditional statistical methods may not be able to perform. In this review article, we discuss the basics of machine learning algorithms and what potential data sources exist; evaluate the need for machine learning; and examine the potential limitations and challenges of implementing machine learning in the context of cardiovascular medicine. The most promising avenues for AI in medicine are the development of automated risk prediction algorithms that can be used to guide clinical care; the use of unsupervised learning techniques to more precisely phenotype complex disease; and the implementation of reinforcement learning algorithms to intelligently augment healthcare providers. The utility of a machine learning-based predictive model will depend on factors including data heterogeneity, data depth, data breadth, nature of the modelling task, choice of machine learning and feature selection algorithms, and orthogonal evidence. A critical understanding of the strengths and limitations of the various methods and of the tasks amenable to machine learning is vital. By leveraging the growing corpus of big data in medicine, we detail pathways by which machine learning may facilitate optimal development of patient-specific models for improving diagnoses, intervention and outcome in cardiovascular medicine. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Method of Individual Forecasting of Technical State of Logging Machines
NASA Astrophysics Data System (ADS)
Kozlov, V. G.; Gulevsky, V. A.; Skrypnikov, A. V.; Logoyda, V. S.; Menzhulova, A. S.
2018-03-01
Development of a model that evaluates the possibility of failure requires knowledge of the regularities with which the technical-condition parameters of machines in use change. Studying these regularities requires the development of stochastic models that take into account the physical essence of the processes of destruction of the machines' structural elements, the technology of their production and degradation, and the stochastic properties of the technical-state parameters as well as the conditions and modes of operation.
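The abstract does not give the paper's concrete model, but the general class it describes (a stochastic model of a drifting condition parameter with a failure threshold) can be illustrated with a simple Wiener-process degradation sketch; the drift, noise and threshold values below are assumptions for illustration only.

```python
# Illustrative only: Wiener-process degradation of a technical-condition
# parameter, with failure declared when the parameter crosses a threshold.
import numpy as np

rng = np.random.default_rng(0)
drift, sigma = 0.05, 0.2        # assumed degradation rate and noise level
threshold, horizon = 10.0, 400  # assumed failure limit and operating hours

def failure_probability(n_paths: int = 10_000) -> float:
    # Each row is one machine's simulated parameter trajectory over time.
    steps = drift + sigma * rng.standard_normal((n_paths, horizon))
    paths = np.cumsum(steps, axis=1)
    return float(np.mean(paths.max(axis=1) >= threshold))

print(f"P(failure within {horizon} h) ~ {failure_probability():.3f}")
```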
Temperature Measurement and Numerical Prediction in Machining Inconel 718
Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar
2017-01-01
Thermal issues are critical when machining Ni-based superalloy components designed for high temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials result in elevated temperatures around the cutting area. This elevated temperature could lead to machining-induced damage such as phase changes and residual stresses, resulting in reduced service life of the component. Measurement of temperature during machining is crucial in order to control the cutting process and avoid workpiece damage. On the other hand, the development of predictive tools based on numerical models helps in the definition of machining processes and the estimation of difficult-to-measure parameters such as the penetration of the heated layer. However, the validation of numerical models strongly depends on the accurate measurement of physical parameters such as temperature, which ensures proper calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was based on a fiber-optic two-color pyrometer developed for localized temperature measurements in turning of Inconel 718. The sensor is capable of measuring temperature in the range of 250 to 1200 °C. Temperature evolution was recorded in a lathe at different feed rates and cutting speeds. The measurements were used to calibrate a simplified numerical model for prediction of temperature fields during turning. PMID:28665312
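The general principle behind a two-color (ratio) pyrometer can be shown with a short worked sketch: under Wien's approximation and a grey-body assumption, the ratio of spectral radiances at two wavelengths depends only on temperature, so emissivity cancels. The wavelengths and intensity ratio below are assumed for illustration; the paper's actual sensor calibration is not given in the abstract.

```python
# Hedged sketch of the two-color pyrometry principle (not the device itself).
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def ratio_temperature(i1: float, i2: float, lam1: float, lam2: float) -> float:
    """Temperature (K) from intensities i1, i2 at wavelengths lam1, lam2 (m),
    via Wien's law with equal emissivity at both wavelengths."""
    ratio = i1 / i2
    return C2 * (1 / lam2 - 1 / lam1) / (math.log(ratio) - 5 * math.log(lam2 / lam1))

# Assumed wavelength pair and a synthetic intensity ratio for ~900 degC:
lam1, lam2 = 1.3e-6, 1.55e-6
T = 1173.0  # K
r = (lam2 / lam1) ** 5 * math.exp(C2 / T * (1 / lam2 - 1 / lam1))
print(ratio_temperature(r, 1.0, lam1, lam2) - 273.15)  # recovers ~900 degC
```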
A Review of Current Machine Learning Methods Used for Cancer Recurrence Modeling and Prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemphill, Geralyn M.
Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type have become a necessity in cancer research. A major challenge in cancer management is the classification of patients into appropriate risk groups for better treatment and follow-up. Such risk assessment is critically important in order to optimize the patient's health and the use of medical resources, as well as to avoid cancer recurrence. This paper focuses on the application of machine learning methods for predicting the likelihood of a recurrence of cancer. It is not meant to be an extensive review of the literature on the subject of machine learning techniques for cancer recurrence modeling. Other recent papers have performed such a review, and I rely heavily on the results and outcomes from these papers. The electronic databases used for this review include PubMed, Google, and Google Scholar. Query terms used include "cancer recurrence modeling", "cancer recurrence and machine learning", "cancer recurrence modeling and machine learning", and "machine learning for cancer recurrence and prediction". The most recent and most applicable papers on the topic of this review have been included in the references. The paper also includes a list of modeling and classification methods used to predict cancer recurrence.
NASA Astrophysics Data System (ADS)
Yu, Jianbo
2017-01-01
This study proposes an adaptive-learning-based method for machine fault detection and health degradation monitoring. The kernel of the proposed method is an "evolving" model that uses an unsupervised online learning scheme, in which an adaptive hidden Markov model (AHMM) is used for online learning of the dynamic health changes of machines over their full life. A statistical index is developed for recognizing new health states in the machines. Those new health states are then described online by adding new hidden states to the AHMM. Furthermore, the health degradation of machines is quantified online by an AHMM-based health index (HI) that measures the similarity between two density distributions describing the historic and current health states, respectively. When necessary, the proposed method characterizes the distinct operating modes of the machine, and it can learn both abrupt and gradual health changes online. The method overcomes some drawbacks of HIs (e.g., relatively low comprehensibility and applicability) based on fixed monitoring models constructed in an offline phase. Results from its application in a bearing life test reveal that the proposed method is effective in online detection and adaptive assessment of machine health degradation. This study provides a useful guide for developing a condition-based maintenance (CBM) system that uses an online learning method without considerable human intervention.
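The density-similarity idea behind the health index can be illustrated with a much simpler stand-in than the paper's AHMM: fit Gaussians to a baseline feature window and a current window, and score degradation by a symmetric KL divergence mapped to [0, 1]. The feature values below are synthetic, and this is an assumption-laden simplification, not the authors' HI.

```python
# Simplified stand-in for an AHMM-based health index: distribution similarity
# between baseline (healthy) and current feature windows via symmetric KL.
import numpy as np

def gaussian_kl(mu0, var0, mu1, var1):
    # KL( N(mu0, var0) || N(mu1, var1) ) for univariate Gaussians.
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1)

def health_index(baseline: np.ndarray, current: np.ndarray) -> float:
    m0, v0 = baseline.mean(), baseline.var() + 1e-12
    m1, v1 = current.mean(), current.var() + 1e-12
    d = gaussian_kl(m0, v0, m1, v1) + gaussian_kl(m1, v1, m0, v0)
    return float(np.exp(-d))  # 1 = identical distributions, -> 0 as they diverge

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, 2000)   # baseline vibration feature, synthetic
degraded = rng.normal(0.8, 1.5, 2000)  # shifted, noisier machine state
print(health_index(healthy, healthy[:1000]), health_index(healthy, degraded))
```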
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar
With increasing penetration of solar and wind energy in the total energy supply mix, the pressing need for accurate energy forecasting has become well recognized. Here we report the development of a machine-learning-based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that in addition to the parameters to be predicted (such as solar irradiance and power), including additional atmospheric state parameters that collectively define weather situations as machine learning input provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of each individual model has substantial dependence on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results than conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
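The blending idea can be sketched as follows: feed the individual model forecasts plus atmospheric state variables to a regressor, which then learns situation-dependent weights, and compare against the equal-weight ensemble. All data below are synthetic (with one model's error deliberately made weather-dependent, mirroring the functional-ANOVA finding); the regressor choice is an assumption, not the paper's method.

```python
# Hedged sketch of situation-dependent model blending on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
truth = rng.uniform(0, 1000, n)                         # observed irradiance, W/m^2
weather = rng.normal(size=(n, 4))                       # cloud cover, humidity, ...
forecasts = truth[:, None] + rng.normal(0, 50, (n, 3))  # three meteorological models
# Model 0 degrades in one weather regime (situation-dependent error):
forecasts[:, 0] += np.where(weather[:, 0] > 0, rng.normal(0, 150, n), 0.0)

X = np.hstack([forecasts, weather])  # forecasts + atmospheric state as input
X_tr, X_te, y_tr, y_te = train_test_split(X, truth, random_state=0)
blend = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

mae_blend = np.mean(np.abs(blend.predict(X_te) - y_te))
mae_equal = np.mean(np.abs(X_te[:, :3].mean(axis=1) - y_te))
print(mae_blend, mae_equal)  # the learned blend should beat equal weighting
```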
Exploration of Machine Learning Approaches to Predict Pavement Performance
DOT National Transportation Integrated Search
2018-03-23
Machine learning (ML) techniques were used to model and predict pavement condition index (PCI) for various pavement types using a variety of input variables. The primary objective of this research was to develop and assess PCI predictive models for t...
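The abstract is truncated and does not list the report's input variables, so the sketch below is only a generic illustration of the kind of PCI regression described: a random forest fit on assumed, illustrative features (pavement age, traffic loading, climate zone) with synthetic data.

```python
# Generic sketch of PCI regression; feature set and data are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
age = rng.uniform(0, 30, n)          # pavement age, years (illustrative)
esal = rng.uniform(1e4, 1e6, n)      # cumulative traffic loading (illustrative)
freeze = rng.integers(0, 2, n)       # freeze / no-freeze climate zone
X = np.column_stack([age, esal, freeze])
# Synthetic PCI: declines with age, traffic and freeze exposure, plus noise.
pci = np.clip(100 - 2.2 * age - 2e-5 * esal - 5 * freeze + rng.normal(0, 5, n), 0, 100)

model = RandomForestRegressor(n_estimators=200, random_state=0)
print(cross_val_score(model, X, pci, cv=5, scoring="r2").mean())
```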