A Framework to Guide the Assessment of Human-Machine Systems.
Stowers, Kimberly; Oglesby, James; Sonesh, Shirley; Leyva, Kevin; Iwig, Chelsea; Salas, Eduardo
2017-03-01
We have developed a framework for guiding measurement in human-machine systems. The assessment of safety and performance in human-machine systems often relies on direct measurement, such as tracking reaction time and accidents. However, safety and performance emerge from the combination of several variables. The assessment of precursors to safety and performance is thus an important part of predicting and improving outcomes in human-machine systems. As part of an in-depth literature analysis involving peer-reviewed, empirical articles, we located and classified variables important to human-machine systems, giving a snapshot of the state of the science on human-machine system safety and performance. Using this information, we created a framework of safety and performance in human-machine systems. This framework details several inputs and processes that collectively influence safety and performance. Inputs are divided into human, machine, and environmental inputs. Processes are divided into attitudes, behaviors, and cognitive variables. Each class of inputs influences the processes and, subsequently, the outcomes that emerge in human-machine systems. This framework offers a useful starting point for understanding the current state of the science and measuring many of the complex variables relating to safety and performance in human-machine systems. It can be applied to the design, development, and implementation of automated machines in spaceflight, military, and health care settings. We present a hypothetical example of how the framework can be used to aid in project success.
Probabilistic machine learning and artificial intelligence.
Ghahramani, Zoubin
2015-05-28
How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.
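As a minimal sketch of the probabilistic framework this review describes (illustrative only, not code from the review), the following represents uncertainty about a coin's heads-probability on a discrete grid and updates it with Bayes' rule; all numbers are invented.

```python
import numpy as np

# Grid of candidate values for the coin's heads-probability theta.
theta = np.linspace(0.0, 1.0, 101)
prior = np.ones_like(theta) / theta.size  # uniform prior: maximal uncertainty

# Observed experience: 7 heads out of 10 flips.
heads, flips = 7, 10
likelihood = theta**heads * (1.0 - theta)**(flips - heads)

# Bayes' rule: posterior is proportional to likelihood times prior.
posterior = likelihood * prior
posterior /= posterior.sum()

# The posterior summarizes what the data taught us about theta.
print("posterior mean:", (theta * posterior).sum())   # about 0.67
print("MAP estimate:", theta[np.argmax(posterior)])   # about 0.70
```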
FSM-F: Finite State Machine Based Framework for Denial of Service and Intrusion Detection in MANET.
N Ahmed, Malik; Abdullah, Abdul Hanan; Kaiwartya, Omprakash
2016-01-01
Due to continuous advancements in wireless communication, in terms of both quality of communication and affordability of the technology, the application area of Mobile Adhoc Networks (MANETs) is growing significantly, particularly in military and disaster management. Considering the sensitivity of these application areas, security, in terms of detection of Denial of Service (DoS) attacks and intrusions, has become a prime concern for research and development in the area. The security systems suggested in the past have a state recognition problem, where the system is unable to accurately identify the actual state of the network nodes due to the absence of a clear definition of node states. In this context, this paper proposes a framework based on a Finite State Machine (FSM) for denial of service and intrusion detection in MANETs. In particular, an Intrusion Detection system for the Adhoc On-demand Distance Vector (ID-AODV) protocol is presented based on a finite state machine. Packet dropping and sequence number attacks are closely investigated, and detection systems for both types of attacks are designed. The major functional modules of ID-AODV include a network monitoring system, a finite state machine, and an attack detection model. Simulations are carried out in the network simulator NS-2 to evaluate the performance of the proposed framework. A comparative performance evaluation is also carried out against the state-of-the-art techniques RIDAN and AODV. The performance evaluations attest to the benefits of the proposed framework in terms of providing better security against denial of service and intrusion attacks.
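The abstract does not specify ID-AODV's internal states, so the following is only a generic sketch of the FSM idea under assumed states and thresholds: a per-neighbor watchdog that moves between hypothetical NORMAL, SUSPECT, and MALICIOUS states as it observes packet forwarding and dropping.

```python
from enum import Enum

class State(Enum):
    NORMAL = 1
    SUSPECT = 2
    MALICIOUS = 3

class DropWatchdog:
    """Toy finite state machine tracking one neighbor's forwarding behavior."""
    def __init__(self, suspect_after=3, malicious_after=6):
        self.state = State.NORMAL
        self.drops = 0
        self.suspect_after = suspect_after
        self.malicious_after = malicious_after

    def observe(self, event):
        if event == "forwarded":
            self.drops = max(0, self.drops - 1)   # good behavior decays suspicion
        elif event == "dropped":
            self.drops += 1
        # Transitions are driven by the running drop count;
        # MALICIOUS is sticky unless suspicion fully decays below the threshold.
        if self.drops >= self.malicious_after:
            self.state = State.MALICIOUS
        elif self.drops >= self.suspect_after:
            self.state = State.SUSPECT
        elif self.state is not State.MALICIOUS:
            self.state = State.NORMAL
        return self.state

wd = DropWatchdog()
for ev in ["dropped"] * 4 + ["forwarded"] + ["dropped"] * 3:
    print(ev, "->", wd.observe(ev).name)
```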
A machine learning-based framework to identify type 2 diabetes through electronic health records
Zheng, Tao; Xie, Wei; Xu, Liling; He, Xiaoying; Zhang, Ya; You, Mingrong; Yang, Gong; Chen, You
2016-01-01
Objective: To discover diverse genotype-phenotype associations affiliated with Type 2 Diabetes Mellitus (T2DM) via genome-wide association studies (GWAS) and phenome-wide association studies (PheWAS), more cases (T2DM subjects) and controls (subjects without T2DM) need to be identified (e.g., via Electronic Health Records (EHR)). However, existing expert-based identification algorithms often suffer from a low recall rate and could miss a large number of valuable samples under conservative filtering standards. The goal of this work is to develop a semi-automated framework based on machine learning, as a pilot study, to liberalize filtering criteria and improve the recall rate while keeping a low false positive rate. Materials and methods: We propose a data-informed framework for identifying subjects with and without T2DM from EHR via feature engineering and machine learning. We evaluate and contrast the identification performance of widely used machine learning models within our framework, including k-Nearest-Neighbors, Naïve Bayes, Decision Tree, Random Forest, Support Vector Machine and Logistic Regression. Our framework was evaluated on 300 patient samples (161 cases, 60 controls and 79 unconfirmed subjects), randomly selected from a cohort of 23,281 diabetes-related patients retrieved from a regional distributed EHR repository covering 2012 to 2014. Results: We apply the top-performing machine learning algorithms to the engineered features. We benchmark and contrast the accuracy, precision, AUC, sensitivity and specificity of the classification models against the state-of-the-art expert algorithm for identification of T2DM subjects. Our results indicate that the framework achieved high identification performance (∼0.98 average AUC), much higher than the state-of-the-art algorithm (0.71 AUC). Discussion: Expert algorithm-based identification of T2DM subjects from EHR is often hampered by high missing rates due to conservative selection criteria. Our framework leverages machine learning and feature engineering to loosen such selection criteria and achieve a high identification rate of cases and controls. Conclusions: Our proposed framework demonstrates a more accurate and efficient approach for identifying subjects with and without T2DM from EHR. PMID:27919371
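The paper's engineered EHR features are not reproduced in the abstract; as a stand-in, this sketch benchmarks the same six model families on synthetic data with scikit-learn and reports AUC, mirroring the comparison described above.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for engineered EHR features (cases vs. controls).
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "kNN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:20s} AUC = {auc:.3f}")
```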
NASA Astrophysics Data System (ADS)
Lin, Y.; Zhang, W. J.
2005-02-01
This paper presents an approach to human-machine interface design for control room operators of nuclear power plants. The first step in designing an interface for a particular application is to determine the information content that needs to be displayed. The design methodology for this step is called the interface design framework (called framework). Several frameworks have been proposed for applications at varying levels, including process plants. However, none is based on the design and manufacture of the plant system for which the interface is designed. This paper presents an interface design framework which originates from design theory and methodology for general technical systems. Specifically, the framework is based on a set of core concepts of a function-behavior-state model originally proposed by the artificial intelligence research community and widely applied in the design research community. Benefits of this new framework include the provision of a model-based fault diagnosis facility, and the seamless integration of the design (manufacture, maintenance) of plants and the design of human-machine interfaces. The missing linkage between design and operation of a plant was one of the causes of the Three Mile Island nuclear reactor incident. A simulated plant system is presented to explain how to apply this framework in designing an interface. The resulting human-machine interface is discussed; specifically, several fault diagnosis examples are elaborated to demonstrate how this interface could support operators' fault diagnosis in an unanticipated situation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Yinan; Shi Handuo; Xiong Zhaoxi
We present a unified universal quantum cloning machine, which combines several different existing universal cloning machines, including the asymmetric case. In this unified framework, the identical pure states are projected equally into each copy, initially constituted by the input and one half of the maximally entangled states. We show explicitly that the output states of those universal cloning machines are the same. One important feature of this unified cloning machine is that the cloning process is always the symmetric projection, which dramatically reduces the difficulties of implementation. We also find that this unified cloning machine can be directly modified to the general asymmetric case. Besides the global fidelity and the single-copy fidelity, we also present all possible arbitrary-copy fidelities.
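For context (background knowledge, not a claim of this abstract): the benchmark such machines are commonly measured against is the optimal universal symmetric N-to-M qubit cloning fidelity of Gisin and Massar,

```latex
F_{N \to M} \;=\; \frac{MN + M + N}{M(N + 2)},
\qquad
F_{1 \to 2} \;=\; \frac{5}{6}.
```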
A State Cyber Hub Operations Framework
2016-06-01
to communicate and sense or interact with their internal states or the external environment. Machine Learning: A type of artificial intelligence that... artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages. Patching: A piece... formalizing a proof of concept for cyber initiatives and developed frameworks for operationalizing the data and intelligence produced across state
A Biological Signal-Based Stress Monitoring Framework for Children Using Wearable Devices.
Choi, Yerim; Jeon, Yu-Mi; Wang, Lin; Kim, Kwanho
2017-08-23
The safety of children has always been an important issue, and several studies have been conducted to determine the stress state of a child in order to ensure that safety. Audio signals and biological signals, including heart rate, are known to be effective for stress state detection. However, collecting these data requires specialized equipment, which is not appropriate for the constant monitoring of children, and advanced data analysis is required for accurate detection. In this regard, we propose a stress state detection framework which utilizes both audio signals and heart rate collected from wearable devices, and adopts machine learning methods for the detection. Experiments using real-world data were conducted to compare detection performance across various machine learning methods and noise levels of the audio signal. Adopting the proposed framework in the real world will contribute to the enhancement of child safety.
Aspect-Oriented Monitoring of C Programs
NASA Technical Reports Server (NTRS)
Havelund, Klaus; VanWyk, Eric
2008-01-01
The paper presents current work on extending ASPECTC with state machines, resulting in a framework for aspect-oriented monitoring of C programs. Such a framework can be used for testing purposes, or it can be part of a fault protection strategy. The long term goal is to explore the synergy between the fields of runtime verification, focused on program monitoring, and aspect-oriented programming, focused on more general program development issues. The work is inspired by the observation that most work in this direction has been done for JAVA, partly due to the lack of easily accessible extensible compiler frameworks for C. The work is performed using the SILVER extensible attribute grammar compiler framework, in which C has been defined as a host language. Our work consists of extending C with ASPECTC, and subsequently to extend ASPECTC with state machines.
Modeling Geomagnetic Variations using a Machine Learning Framework
NASA Astrophysics Data System (ADS)
Cheung, C. M. M.; Handmer, C.; Kosar, B.; Gerules, G.; Poduval, B.; Mackintosh, G.; Munoz-Jaramillo, A.; Bobra, M.; Hernandez, T.; McGranaghan, R. M.
2017-12-01
We present a framework for data-driven modeling of Heliophysics time series data. The Solar Terrestrial Interaction Neural net Generator (STING) is an open source python module built on top of state-of-the-art statistical learning frameworks (traditional machine learning methods as well as deep learning). To showcase the capability of STING, we deploy it for the problem of predicting the temporal variation of geomagnetic fields. The data used includes solar wind measurements from the OMNI database and geomagnetic field data taken by magnetometers at US Geological Survey observatories. We examine the predictive capability of different machine learning techniques (recurrent neural networks, support vector machines) for a range of forecasting times (minutes to 12 hours). STING is designed to be extensible to other types of data. We show how STING can be used on large sets of data from different sensors/observatories and adapted to tackle other problems in Heliophysics.
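STING itself is not reproduced here; the sketch below only illustrates the underlying supervised-learning setup on synthetic data: lagged driver samples (standing in for OMNI solar-wind measurements) mapped to a future response value with a support vector regressor, one of the model families the abstract mentions. The lag and horizon choices are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "solar wind" driver and a lag-dependent "geomagnetic" response.
t = np.arange(2000)
driver = np.sin(0.05 * t) + 0.3 * rng.standard_normal(t.size)
response = np.roll(driver, 10) + 0.1 * rng.standard_normal(t.size)

# Lagged feature vectors: the last 24 driver samples predict the
# response `horizon` steps ahead.
lags, horizon = 24, 6
X = np.stack([driver[i - lags:i] for i in range(lags, t.size - horizon)])
y = response[lags + horizon:]

# No shuffling: respect temporal ordering when splitting a time series.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)
model = SVR(C=10.0).fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))
```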
ERIC Educational Resources Information Center
Gorman, Nathan; Parker, Ronald; Lurie, Charles; Maples, Thomas
2005-01-01
Secondary vocational-technical education programs in Mississippi are faced with many challenges resulting from sweeping educational reforms at the national and state levels. Schools and teachers are increasingly being held accountable for providing true learning activities to every student in the classroom. This accountability is measured through…
Metal-organic frameworks with dynamic interlocked components
NASA Astrophysics Data System (ADS)
Vukotic, V. Nicholas; Harris, Kristopher J.; Zhu, Kelong; Schurko, Robert W.; Loeb, Stephen J.
2012-06-01
The dynamics of mechanically interlocked molecules such as rotaxanes and catenanes have been studied in solution as examples of rudimentary molecular switches and machines, but in this medium, the molecules are randomly dispersed and their motion incoherent. As a strategy for achieving a higher level of molecular organization, we have constructed a metal-organic framework material using a [2]rotaxane as the organic linker and binuclear Cu(II) units as the nodes. Activation of the as-synthesized material creates a void space inside the rigid framework that allows the soft macrocyclic ring of the [2]rotaxane to rotate rapidly, unimpeded by neighbouring molecular components. Variable-temperature 13C and 2H solid-state NMR experiments are used to characterize the nature and rate of the dynamic processes occurring inside this unique material. These results provide a blueprint for the future creation of solid-state molecular switches and molecular machines based on mechanically interlocked molecules.
Casey, M
1996-08-15
Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attractor structure of such systems is given. This knowledge effectively predicts activation space dynamics, which allows one to understand RNN computation dynamics in spite of complexity in activation dynamics. This theory provides a theoretical framework for understanding finite state machine (FSM) extraction techniques and can be used to improve training methods for RNNs performing FSM computations. This provides an example of a successful approach to understanding a general class of complex systems that has not been explicitly designed, e.g., systems that have evolved or learned their internal structure.
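This theory underlies FSM extraction techniques that quantize an RNN's hidden-state space and read transitions off the clusters. The sketch below applies that recipe to hand-made two-attractor "hidden states" for a parity computation rather than to a trained RNN; the attractor positions and noise level are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from collections import defaultdict

rng = np.random.default_rng(1)

# Stand-in for RNN hidden states while processing a binary string:
# a parity-tracking network hovers near one of two attractors.
attractors = np.array([[0.9, 0.1], [0.1, 0.9]])
bits = rng.integers(0, 2, 200)
parity, states = 0, []
for b in bits:
    parity ^= b
    states.append(attractors[parity] + 0.05 * rng.standard_normal(2))
states = np.array(states)

# Step 1: quantize hidden states into discrete FSM states.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(states)

# Step 2: count (state, input) -> next-state transitions.
transitions = defaultdict(lambda: defaultdict(int))
for i in range(len(bits) - 1):
    transitions[(labels[i], bits[i + 1])][labels[i + 1]] += 1

# Deterministic counts recover the two-state parity machine.
for key, nxt in sorted(transitions.items()):
    print(key, "->", dict(nxt))
```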
Classifying Black Hole States with Machine Learning
NASA Astrophysics Data System (ADS)
Huppenkothen, Daniela
2018-01-01
Galactic black hole binaries are known to go through different states with apparent signatures in both X-ray light curves and spectra, leading to important implications for accretion physics as well as our knowledge of General Relativity. Existing frameworks of classification are usually based on human interpretation of low-dimensional representations of the data, and generally only apply to fairly small data sets. Machine learning, in contrast, allows for rapid classification of large, high-dimensional data sets. In this talk, I will report on advances made in classification of states observed in Black Hole X-ray Binaries, focusing on the two sources GRS 1915+105 and Cygnus X-1, and show both the successes and limitations of using machine learning to derive physical constraints on these systems.
NASA Astrophysics Data System (ADS)
Zhu, Meng-Zheng; Ye, Liu
2015-04-01
An efficient scheme is proposed to implement a quantum cloning machine in separate cavities based on a hybrid interaction between electron-spin systems placed in the cavities and an optical coherent pulse. The coefficient of the output state for the present cloning machine is just the direct product of two trigonometric functions, which ensures that different types of quantum cloning machine can be achieved readily in the same framework by appropriately adjusting the rotated angles. The present scheme can implement optimal one-to-two symmetric (asymmetric) universal quantum cloning, optimal symmetric (asymmetric) phase-covariant cloning, optimal symmetric (asymmetric) real-state cloning, optimal one-to-three symmetric economical real-state cloning, and optimal symmetric cloning of qubits given by an arbitrary axisymmetric distribution. In addition, photon loss of the qubus beams during the transmission and decoherence effects caused by such a photon loss are investigated.
Runtime Verification of C Programs
NASA Technical Reports Server (NTRS)
Havelund, Klaus
2008-01-01
We present in this paper a framework, RMOR, for monitoring the execution of C programs against state machines, expressed in a textual (nongraphical) format in files separate from the program. The state machine language has been inspired by a graphical state machine language RCAT recently developed at the Jet Propulsion Laboratory, as an alternative to using Linear Temporal Logic (LTL) for requirements capture. Transitions between states are labeled with abstract event names and Boolean expressions over such. The abstract events are connected to code fragments using an aspect-oriented pointcut language similar to ASPECTJ's or ASPECTC's pointcut language. The system is implemented in the C analysis and transformation package CIL, and is programmed in OCAML, the implementation language of CIL. The work is closely related to the notion of stateful aspects within aspect-oriented programming, where pointcut languages are extended with temporal assertions over the execution trace.
Word embeddings quantify 100 years of gender and ethnic stereotypes.
Garg, Nikhil; Schiebinger, Londa; Jurafsky, Dan; Zou, James
2018-04-17
Word embeddings are a powerful machine-learning framework that represents each English word by a vector. The geometric relationships between these vectors capture meaningful semantic relationships between the corresponding words. In this paper, we develop a framework to demonstrate how the temporal dynamics of the embedding help to quantify changes in stereotypes and attitudes toward women and ethnic minorities in the 20th and 21st centuries in the United States. We integrate word embeddings trained on 100 y of text data with the US Census to show that changes in the embedding track closely with demographic and occupation shifts over time. The embedding captures societal shifts, e.g., the women's movement in the 1960s and Asian immigration into the United States, and also illuminates how specific adjectives and occupations became more closely associated with certain populations over time. Our framework for temporal analysis of word embeddings opens up a fruitful intersection between machine learning and quantitative social science.
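A minimal sketch of the kind of association measurement such a framework performs, using tiny hand-made vectors in place of embeddings trained on a century of text; the bias score here is cosine similarity to "she" minus similarity to "he", and all vectors are invented.

```python
import numpy as np

def cos(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 4-d "embeddings"; real studies use vectors trained on decades of text.
vec = {
    "he":       np.array([ 1.0, 0.1, 0.0, 0.2]),
    "she":      np.array([-1.0, 0.1, 0.0, 0.2]),
    "nurse":    np.array([-0.7, 0.5, 0.1, 0.3]),
    "engineer": np.array([ 0.8, 0.4, 0.2, 0.1]),
}

def gender_bias(word):
    """Positive = closer to 'she', negative = closer to 'he'."""
    return cos(vec[word], vec["she"]) - cos(vec[word], vec["he"])

for occupation in ("nurse", "engineer"):
    print(occupation, round(gender_bias(occupation), 3))
```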
A general observatory control software framework design for existing small and mid-size telescopes
NASA Astrophysics Data System (ADS)
Ge, Liang; Lu, Xiao-Meng; Jiang, Xiao-Jun
2015-07-01
A general framework for observatory control software would help to improve the efficiency of observation and operation of telescopes, and would also be advantageous for remote and joint observations. We describe a general framework for observatory control software, which considers principles of flexibility and inheritance to meet the expectations from observers and technical personnel. This framework includes observation scheduling, device control and data storage. The design is based on a finite state machine that controls the whole process.
Electrochemically addressable trisradical rotaxanes organized within a metal–organic framework
McGonigal, Paul R.; Deria, Pravas; Hod, Idan; ...
2015-08-17
The organization of trisradical rotaxanes within the channels of a Zr 6-based metal–organic framework (NU-1000) has been achieved postsynthetically by solvent-assisted ligand incorporation. Robust ZrIV–carboxylate bonds are forged between the Zr clusters of NU-1000 and carboxylic acid groups of rotaxane precursors (semirotaxanes) as part of this building block replacement strategy. Ultraviolet–visible–near-infrared (UV-Vis-NIR), electron paramagnetic resonance (EPR), and 1H nuclear magnetic resonance (NMR) spectroscopies all confirm the capture of redox-active rotaxanes within the mesoscale hexagonal channels of NU-1000. Cyclic voltammetry measurements performed on electroactive thin films of the resulting material indicate that redox-active viologen subunits located on the rotaxane components canmore » be accessed electrochemically in the solid state. In contradistinction to previous methods, this strategy for the incorporation of mechanically interlocked molecules within porous materials circumvents the need for de novo synthesis of a metal–organic framework, making it a particularly convenient approach for the design and creation of solid-state molecular switches and machines. In conclusion, the results presented here provide proof-of-concept for the application of postsynthetic transformations in the integration of dynamic molecular machines with robust porous frameworks.« less
Active Learning Using Hint Information.
Li, Chun-Liang; Ferng, Chun-Sung; Lin, Hsuan-Tien
2015-08-01
The abundance of real-world data and limited labeling budget calls for active learning, an important learning paradigm for reducing human labeling efforts. Many recently developed active learning algorithms consider both uncertainty and representativeness when making querying decisions. However, exploiting representativeness with uncertainty concurrently usually requires tackling sophisticated and challenging learning tasks, such as clustering. In this letter, we propose a new active learning framework, called hinted sampling, which takes both uncertainty and representativeness into account in a simpler way. We design a novel active learning algorithm within the hinted sampling framework with an extended support vector machine. Experimental results validate that the novel active learning algorithm can result in a better and more stable performance than that achieved by state-of-the-art algorithms. We also show that the hinted sampling framework allows improving another active learning algorithm designed from the transductive support vector machine.
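Hinted sampling itself is not reproduced here; for contrast, the sketch below shows the plain uncertainty-sampling baseline with an SVM, querying at each round the unlabeled point closest to the decision boundary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=5, random_state=0)

labeled = list(rng.choice(len(X), size=10, replace=False))  # seed labels
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(20):
    clf = SVC(kernel="linear").fit(X[labeled], y[labeled])
    # Uncertainty = distance to the separating hyperplane.
    margins = np.abs(clf.decision_function(X[pool]))
    query = pool.pop(int(np.argmin(margins)))   # most uncertain point
    labeled.append(query)                        # oracle reveals its label

clf = SVC(kernel="linear").fit(X[labeled], y[labeled])
print("accuracy after 20 queries:", clf.score(X, y))
```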
Application of Machine Learning to Rotorcraft Health Monitoring
NASA Technical Reports Server (NTRS)
Cody, Tyler; Dempsey, Paula J.
2017-01-01
Machine learning is a powerful tool for data exploration and model building with large data sets. This project aimed to use machine learning techniques to explore the inherent structure of data from rotorcraft gear tests and the relationships between features and damage states, and to build a system for predicting gear health for future rotorcraft transmission applications. Classical machine learning techniques are difficult, if not irresponsible, to apply to time series data because many make the assumption of independence between samples. To overcome this, Hidden Markov Models were used to create a binary classifier for identifying scuffing transitions, and Recurrent Neural Networks were used to leverage long-distance relationships in predicting discrete damage states. When combined in a workflow, where the binary classifier acted as a filter for the fatigue monitor, the system was able to demonstrate accuracy in damage state prediction and scuffing identification. The time-dependent nature of the data restricted data exploration to collecting and analyzing data from the model selection process. The limited amount of available data yielded little useful information, and the division of training and testing sets tended to heavily influence the scores of the models across combinations of features and hyper-parameters. This work built a framework for tracking scuffing and fatigue on streaming data and demonstrates that machine learning has much to offer rotorcraft health monitoring by using Bayesian learning and deep learning methods to capture the time-dependent nature of the data. Suggested future work is to implement the framework developed in this project using a larger variety of data sets to test the generalization capabilities of the models and allow for further data exploration.
ERIC Educational Resources Information Center
Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.
This document, which is intended for use by community and junior colleges throughout Mississippi, contains curriculum frameworks for the course sequences in the machine tool operation/machine tool and tool and die making technology programs cluster. Presented in the introductory section are a framework of courses and programs, description of the…
NASA Astrophysics Data System (ADS)
Yu, Jianbo
2017-01-01
This study proposes an adaptive-learning-based method for machine fault detection and health degradation monitoring. The kernel of the proposed method is an "evolving" model that uses an unsupervised online learning scheme, in which an adaptive hidden Markov model (AHMM) is used to learn online the dynamic health changes of machines over their full life. A statistical index is developed for recognizing new health states in the machines. Those new health states are then described online by adding new hidden states to the AHMM. Furthermore, the health degradation of machines is quantified online by an AHMM-based health index (HI) that measures the similarity between two density distributions describing the historic and current health states, respectively. When necessary, the proposed method characterizes the distinct operating modes of the machine and can learn online both abrupt and gradual health changes. Our method overcomes some drawbacks of HIs (e.g., relatively low comprehensibility and applicability) based on fixed monitoring models constructed in the offline phase. Results from its application in a bearing life test reveal that the proposed method is effective for online detection and adaptive assessment of machine health degradation. This study provides a useful guide for developing a condition-based maintenance (CBM) system that uses an online learning method without considerable human intervention.
SAINT: A combined simulation language for modeling man-machine systems
NASA Technical Reports Server (NTRS)
Seifert, D. J.
1979-01-01
SAINT (Systems Analysis of Integrated Networks of Tasks) is a network modeling and simulation technique for the design and analysis of complex man-machine systems. SAINT provides the conceptual framework for representing systems that consist of discrete task elements, continuous state variables, and interactions between them. It also provides a mechanism for combining human performance models and dynamic system behaviors in a single modeling structure. The SAINT technique is described and applications of SAINT are discussed.
Automatic Training of Rat Cyborgs for Navigation.
Yu, Yipeng; Wu, Zhaohui; Xu, Kedi; Gong, Yongyue; Zheng, Nenggan; Zheng, Xiaoxiang; Pan, Gang
2016-01-01
A rat cyborg system refers to a biological rat implanted with microelectrodes in its brain, via which the outer electrical stimuli can be delivered into the brain in vivo to control its behaviors. Rat cyborgs have various applications in emergency, such as search and rescue in disasters. Prior to a rat cyborg becoming controllable, a lot of effort is required to train it to adapt to the electrical stimuli. In this paper, we build a vision-based automatic training system for rat cyborgs to replace the time-consuming manual training procedure. A hierarchical framework is proposed to facilitate the colearning between rats and machines. In the framework, the behavioral states of a rat cyborg are visually sensed by a camera, a parameterized state machine is employed to model the training action transitions triggered by rat's behavioral states, and an adaptive adjustment policy is developed to adaptively adjust the stimulation intensity. The experimental results of three rat cyborgs prove the effectiveness of our system. To the best of our knowledge, this study is the first to tackle automatic training of animal cyborgs.
Hello World Deep Learning in Medical Imaging.
Lakhani, Paras; Gray, Daniel L; Pett, Carl R; Nagy, Paul; Shih, George
2018-05-03
Machine learning, and notably deep learning, has recently gained popularity in medical imaging, having achieved state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries that simplify their use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.
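In that spirit, a minimal sketch of a deep network for two-class image classification, assuming Keras/TensorFlow and random stand-in data shaped like 64×64 grayscale images; the architecture is illustrative, not the tutorial's exact network.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Stand-in data: 100 grayscale 64x64 "images" with binary labels.
X = np.random.rand(100, 64, 64, 1).astype("float32")
y = np.random.randint(0, 2, 100)

model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability of the positive class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=16, validation_split=0.2)
```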
NASA Technical Reports Server (NTRS)
Fields, Chris
1989-01-01
Continuous dynamical systems intuitively seem capable of more complex behavior than discrete systems. If analyzed in the framework of the traditional theory of computation, a continuous dynamical system with countably many quasistable states has at least the computational power of a universal Turing machine. Such an analysis assumes, however, the classical notion of measurement. If measurement is viewed nonclassically, a continuous dynamical system cannot, even in principle, exhibit behavior that cannot be simulated by a universal Turing machine.
Arbitrary norm support vector machines.
Huang, Kaizhu; Zheng, Danian; King, Irwin; Lyu, Michael R
2009-02-01
Support vector machines (SVMs) are state-of-the-art classifiers. Typically, the L2-norm or L1-norm is adopted as a regularization term in SVMs, while other norm-based SVMs, for example, the L0-norm SVM or even the L∞-norm SVM, are rarely seen in the literature. The major reason is that the L0-norm describes a discontinuous and nonconvex term, leading to a combinatorially NP-hard optimization problem. In this letter, motivated by Bayesian learning, we propose a novel framework that can implement arbitrary norm-based SVMs in polynomial time. One significant feature of this framework is that only a sequence of sequential minimal optimization problems needs to be solved, thus making it practical in many real applications. The proposed framework is important in the sense that Bayesian priors can be efficiently plugged into most learning methods without knowing their explicit form. Hence, this builds a connection between Bayesian learning and kernel machines. We derive the theoretical framework, demonstrate how our approach works on the L0-norm SVM as a typical example, and perform a series of experiments to validate its advantages. Experimental results on nine benchmark data sets are very encouraging. The implemented L0-norm is competitive with or even better than the standard L2-norm SVM in terms of accuracy, but with a reduced number of support vectors (9.46% of the number on average). When compared with another sparse model, the relevance vector machine, our proposed algorithm also demonstrates better sparse properties, with a training speed over seven times faster.
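The letter's polynomial-time scheme for arbitrary norms is not reproduced here; the sketch below only illustrates the norm-sparsity trade-off the abstract discusses, contrasting L2- and L1-penalized linear SVMs in scikit-learn and counting near-zero weights.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Many uninformative features: sparser norms should zero most weights.
X, y = make_classification(n_samples=200, n_features=50,
                           n_informative=5, random_state=0)

for penalty in ("l2", "l1"):
    clf = LinearSVC(penalty=penalty, dual=False, C=0.5, max_iter=5000)
    clf.fit(X, y)
    nonzero = int(np.sum(np.abs(clf.coef_) > 1e-6))
    print(f"{penalty}-norm SVM: accuracy={clf.score(X, y):.3f}, "
          f"nonzero weights={nonzero}/50")
```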
Machine Learning Methods for Attack Detection in the Smart Grid.
Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent
2016-08-01
Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with performances higher than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.
NASA Astrophysics Data System (ADS)
Furfaro, R.; Linares, R.; Gaylor, D.; Jah, M.; Walls, R.
2016-09-01
In this paper, we present an end-to-end approach that employs machine learning techniques and Ontology-based Bayesian Networks (BN) to characterize the behavior of resident space objects. State-of-the-art machine learning architectures (e.g., Extreme Learning Machines, Convolutional Deep Networks) are trained on physical models to learn the Resident Space Object (RSO) features in the vectorized energy and momentum states and parameters. The mapping from measurements to vectorized energy and momentum states and parameters enables behavior characterization via clustering in the feature space and subsequent RSO classification. Additionally, Space Object Behavioral Ontologies (SOBO) are employed to define and capture the domain knowledge base (KB), and BNs are constructed from the SOBO in a semi-automatic fashion to execute probabilistic reasoning over conclusions drawn from trained classifiers and/or directly from processed data. Such an approach enables integrating machine learning classifiers and probabilistic reasoning to support higher-level decision making for space domain awareness applications. The innovation here is to use these methods (which have enjoyed great success in other domains) in synergy, so that they enable a "from data to discovery" paradigm by facilitating the linkage and fusion of large and disparate sources of information via a Big Data Science and Analytics framework.
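As one concrete example of the model families mentioned, a minimal Extreme Learning Machine in numpy: a fixed random hidden layer followed by a least-squares readout, trained on synthetic data (nothing here is the paper's actual pipeline).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for feature vectors and binary class labels.
X = rng.standard_normal((500, 8))
y = (X[:, 0] * X[:, 1] > 0).astype(float)    # nonlinear target

# Extreme Learning Machine: random fixed hidden layer + linear readout.
n_hidden = 200
W = rng.standard_normal((8, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                        # random feature expansion

# Only the output weights are trained, by regularized least squares.
ridge = 1e-2 * np.eye(n_hidden)
beta = np.linalg.solve(H.T @ H + ridge, H.T @ y)

pred = (H @ beta > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```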
State Event Models for the Formal Analysis of Human-Machine Interactions
NASA Technical Reports Server (NTRS)
Combefis, Sebastien; Giannakopoulou, Dimitra; Pecheur, Charles
2014-01-01
The work described in this paper was motivated by our experience with applying a framework for formal analysis of human-machine interactions (HMI) to a realistic model of an autopilot. The framework is built around a formally defined conformance relation called "full-control" between an actual system and the mental model according to which the system is operated. Systems are well designed if they can be described by relatively simple, full-control mental models for their human operators. For this reason, our framework supports automated generation of minimal full-control mental models for HMI systems, where both the system and the mental models are described as labelled transition systems (LTS). The autopilot that we analysed was developed in the NASA Ames HMI prototyping tool ADEPT. In this paper, we describe how we extended the models that our HMI analysis framework handles to allow adequate representation of ADEPT models. We then provide a property-preserving reduction from these extended models to LTSs, to enable application of our LTS-based formal analysis algorithms. Finally, we briefly discuss the analyses we were able to perform on the autopilot model with our extended framework.
On the Conditioning of Machine-Learning-Assisted Turbulence Modeling
NASA Astrophysics Data System (ADS)
Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng
2017-11-01
Recently, several researchers have demonstrated that machine learning techniques can be used to improve the RANS-modeled Reynolds stress by training on available databases of high-fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework for modeling Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated to demonstrate the predictive capability of the proposed framework. The satisfactory prediction of the mean velocity field for both flows demonstrates the suitability of the proposed framework for machine-learning-assisted turbulence modeling. By demonstrating this improvement in the predicted mean flow field, the proposed stability-oriented framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the demand for predictive turbulence models in real applications.
NASA Astrophysics Data System (ADS)
Vogelsberg, Cortnie Sue
Amphidynamic crystals are an extremely promising platform for the development of artificial molecular machines and stimuli-responsive materials. In analogy to skeletal muscle, their function will rely upon the collective operation of many densely packed molecular machines (i.e. actin-bound myosin) that are self-assembled in a highly organized anisotropic medium. By choosing lattice-forming elements and moving "parts" with specific functionalities, individual molecular machines may be synthesized and self-assembled in order to carry out desirable functions. In recent years, efforts in the design of amphidynamic materials based on molecular gyroscopes and compasses have shown that a certain amount of free volume is essential to facilitate internal rotation and reorientation within a crystal. In order to further establish structure/function relationships to advance the development of increasingly complex molecular machinery, molecular rotors and a molecular "spinning" top were synthesized and incorporated into a variety of solid-state architectures with different degrees of periodicity, dimensionality, and free volume. Specifically, lamellar molecular crystals, hierarchically ordered periodic mesoporous organosilicas, and metal-organic frameworks were targeted for the development of solid-state molecular machines. Using an array of solid-state nuclear magnetic resonance spectroscopy techniques, the dynamic properties of these novel molecular machine assemblies were determined and correlated with their corresponding structural features. It was found that architecture type has a profound influence on functional dynamics. The study of layered molecular crystals, composed of either molecular rotors or "spinning" tops, probed functional dynamics within dense, highly organized environments. From their study, it was discovered that: 1) crystallographically distinct sites may be utilized to differentiate machine function, 2) halogen bonding interactions are sufficiently strong to direct an assembly of molecular machines, 3) the relative flexibility of the crystal environment proximate to a dynamic component may have a significant effect on its function, and, 4) molecular machines, which possess both solid-state photochemical reactivity and dynamics may show complex reaction kinetics if the correlation time of the dynamic process and the lifetime of the excited state occur on the same time scale and the dynamic moiety inherently participates as a reaction intermediate. The study of periodic mesoporous organosilica with hierarchical order probed molecular dynamics within 2D layers of molecular rotors, organized in only one dimension and with ca. 50% exposed to the mesopore free volume. From their study, it was discovered that: 1) molecular rotors, which comprise the layers of the mesopore walls, form a 2D rotational glass, 2) rotator dynamics within the 2D rotational glass undergo a transition to a 2D rotational fluid, and, 3) a 2D rotational glass transition may be exploited to develop hyper-sensitive thermally activated molecular machines. The study of a metal-organic framework assembled from molecular rotors probed dynamics in a periodic three-dimensional free-volume environment, without the presence of close contacts. From the study of this solid-state material, it was determined that: 1) the intrinsic electronic barrier is one of the few factors, which may affect functional dynamics in a true free-volume environment, and, 2) molecular machines with dynamic barriers <
A system framework of inter-enterprise machining quality control based on fractal theory
NASA Astrophysics Data System (ADS)
Zhao, Liping; Qin, Yongtao; Yao, Yiyong; Yan, Peng
2014-03-01
In order to meet the quality control requirements of dynamic and complicated product machining processes among enterprises, a system framework of inter-enterprise machining quality control based on fractal theory was proposed. In this system framework, the fractal-specific characteristics of the inter-enterprise machining quality control function were analysed, and the model of inter-enterprise machining quality control was constructed from the nature of fractal structures. Furthermore, the goal-driven strategy of inter-enterprise quality control and the dynamic organisation strategy of inter-enterprise quality improvement were constructed through characteristic analysis of this model. In addition, the architecture of inter-enterprise machining quality control based on fractal theory was established by means of Web services. Finally, a case study was presented. The results showed that the proposed method is feasible, and can provide guidance for quality control and support for product reliability in inter-enterprise machining processes.
Machine Learning-based discovery of closures for reduced models of dynamical systems
NASA Astrophysics Data System (ADS)
Pan, Shaowu; Duraisamy, Karthik
2017-11-01
Despite the successful application of machine learning (ML) in fields such as image processing and speech recognition, only a few attempts have been made toward employing ML to represent the dynamics of complex physical systems. Previous attempts mostly focus on parameter calibration or data-driven augmentation of existing models. In this work we present an ML framework to discover closure terms in reduced models of dynamical systems and provide insights into potential problems associated with data-driven modeling. Based on exact closure models for linear systems, we propose a general linear closure framework from the viewpoint of optimization. The framework is based on a trapezoidal approximation of the convolution term. Hyperparameters that need to be determined include the temporal length of the memory effect, the number of sampling points, and the dimensions of the hidden states. To circumvent the explicit specification of the memory effect, a general framework inspired by neural networks is also proposed. We conduct both a priori and a posteriori evaluations of the resulting model on a number of nonlinear dynamical systems. This work was supported in part by AFOSR under the project "LES Modeling of Non-local effects using Statistical Coarse-graining" with Dr. Jean-Luc Cambier as the technical monitor.
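To make the trapezoidal idea concrete, the sketch below approximates a memory (convolution) closure term over a finite memory window, with an assumed exponential kernel and an assumed resolved-state history; nothing here is the paper's actual model.

```python
import numpy as np

dt, T = 0.01, 5.0
t = np.arange(0.0, T, dt)
u = np.sin(2.0 * t)                        # assumed resolved-state history
K = lambda tau: np.exp(-3.0 * tau)         # assumed exponential memory kernel
memory = 1.0                               # truncated temporal length of memory
n_mem = int(memory / dt)

def trapezoid(y, dx):
    """Trapezoidal rule on uniformly spaced samples."""
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def closure(i):
    """Approximate the convolution term: integral of K(t_i - s) u(s) ds
    over the memory window [t_i - memory, t_i]."""
    j0 = max(0, i - n_mem)
    s = t[j0:i + 1]
    return trapezoid(K(t[i] - s) * u[j0:i + 1], dt)

print("closure term at t = 4:", closure(int(4.0 / dt)))
```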
2011-03-01
functions of the vignette editor include visualizing the state of the UAS team, creating T&E scenarios, monitoring the UAS team performance, and... These behaviors are then executed by the robot sequentially (Figure 2). A state machine mission editor allows mission builders to use behaviors from the... include control, robotics, distributed applications, multimedia applications, databases, design patterns, and software engineering. Mr. Lenzi is the
ERIC Educational Resources Information Center
Mu, Jin; Stegmann, Karsten; Mayfield, Elijah; Rose, Carolyn; Fischer, Frank
2012-01-01
Research related to online discussions frequently faces the problem of analyzing huge corpora. Natural Language Processing (NLP) technologies may allow automating this analysis. However, the state-of-the-art in machine learning and text mining approaches yields models that do not transfer well between corpora related to different topics. Also,…
Nonlinear machine learning and design of reconfigurable digital colloids.
Long, Andrew W; Phillips, Carolyn L; Jankowski, Eric; Ferguson, Andrew L
2016-09-14
Digital colloids, a cluster of freely rotating "halo" particles tethered to the surface of a central particle, were recently proposed as ultra-high density memory elements for information storage. Rational design of these digital colloids for memory storage applications requires a quantitative understanding of the thermodynamic and kinetic stability of the configurational states within which information is stored. We apply nonlinear machine learning to Brownian dynamics simulations of these digital colloids to extract the low-dimensional intrinsic manifold governing digital colloid morphology, thermodynamics, and kinetics. By modulating the relative size ratio between halo particles and central particles, we investigate the size-dependent configurational stability and transition kinetics for the 2-state tetrahedral (N = 4) and 30-state octahedral (N = 6) digital colloids. We demonstrate the use of this framework to guide the rational design of a memory storage element to hold a block of text that trades off the competing design criteria of memory addressability and volatility.
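The abstract does not name the specific nonlinear learning method; as a generic stand-in, this sketch embeds high-dimensional configuration descriptors into a two-dimensional intrinsic manifold with scikit-learn's spectral embedding, on synthetic two-state data where all names and numbers are invented.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)

# Synthetic stand-in for per-frame colloid configuration descriptors:
# two metastable configurational states blurred by thermal noise.
state_a = rng.standard_normal((300, 20)) * 0.3 + 1.0
state_b = rng.standard_normal((300, 20)) * 0.3 - 1.0
X = np.vstack([state_a, state_b])

# Learn a 2-d intrinsic manifold from the 20-d descriptors.
embedding = SpectralEmbedding(n_components=2, n_neighbors=15)
Z = embedding.fit_transform(X)

# The leading coordinate should separate the two configurational basins.
print("mean coord for state A:", Z[:300, 0].mean())
print("mean coord for state B:", Z[300:, 0].mean())
```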
ControlShell: A real-time software framework
NASA Technical Reports Server (NTRS)
Schneider, Stanley A.; Chen, Vincent W.; Pardo-Castellote, Gerardo
1994-01-01
The ControlShell system is a programming environment that enables the development and implementation of complex real-time software. It includes many building tools for complex systems, such as a graphical finite state machine (FSM) tool to provide strategic control. ControlShell has a component-based design, providing interface definitions and mechanisms for building real-time code modules along with providing basic data management. Some of the system-building tools incorporated in ControlShell are a graphical data flow editor, a component data requirement editor, and a state-machine editor. It also includes a distributed data flow package, an execution configuration manager, a matrix package, and an object database and dynamic binding facility. This paper presents an overview of ControlShell's architecture and examines the functions of several of its tools.
How do robots take two parts apart
NASA Technical Reports Server (NTRS)
Bajcsy, Ruzena K.; Tsikos, Constantine J.
1989-01-01
This research is a natural progression of efforts that began with the introduction of a new research paradigm in machine perception, called Active Perception. There it was stated that Active Perception is a problem of intelligent control strategies applied to data acquisition processes, which depend on the current state of the data interpretation, including recognition. The disassembly/assembly problem is treated as an Active Perception problem, and a method for autonomous disassembly based on this framework is presented.
Automatic speech recognition using a predictive echo state network classifier.
Skowronski, Mark D; Harris, John G
2007-04-01
We have combined an echo state network (ESN) with a competitive state machine framework to create a classification engine called the predictive ESN classifier. We derive the expressions for training the predictive ESN classifier and show that the model was significantly more noise robust than a hidden Markov model in noisy speech classification experiments, by 8 ± 1 dB signal-to-noise ratio. The simple training algorithm and noise robustness of the predictive ESN classifier make it an attractive classification engine for automatic speech recognition.
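A minimal numpy echo state network sketch showing the core mechanism: a fixed random reservoir with spectral radius below one, whose states feed the only trained part, a linear readout (the competitive state machine layer of the paper is not reproduced; the one-step-ahead prediction task is an assumption).

```python
import numpy as np

rng = np.random.default_rng(0)

# Input: a noisy scalar signal; task: predict it one step ahead.
u = np.sin(0.2 * np.arange(1000)) + 0.05 * rng.standard_normal(1000)

n_res = 100
W_in = 0.5 * rng.standard_normal((n_res, 1))
W = rng.standard_normal((n_res, n_res))
# Rescale so the spectral radius is below 1 (echo state property).
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# Run the reservoir; its weights stay fixed forever.
x = np.zeros(n_res)
states = []
for ut in u[:-1]:
    x = np.tanh(W_in[:, 0] * ut + W @ x)
    states.append(x.copy())
H = np.array(states[100:])           # drop the initial washout transient
y = u[101:]                          # one-step-ahead targets

# Only the linear readout is trained, by least squares.
W_out = np.linalg.lstsq(H, y, rcond=None)[0]
print("readout MSE:", np.mean((H @ W_out - y) ** 2))
```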
Design Methodology for Automated Construction Machines
1987-12-11
…are discussed along with the design of a pair of machines which automate framework installation. Preliminary analysis and testing indicate that these
Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction
Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon
2016-01-01
Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients’ psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable personalized psychiatric emergency care, a web-of-objects-based service framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web objects of sensor observations and psychiatric rating scores are used to assess the dweller’s mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study. PMID:27608023
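As a sketch of the HMM decoding step such a system relies on, the following Viterbi routine recovers the most likely hidden state sequence from discrete observations. The three states and all probabilities below are invented for illustration and are not the paper's fitted parameters.

```python
import numpy as np

# Hypothetical 3-state model (stable / at-risk / emergency); parameters illustrative only.
log_pi = np.log([0.80, 0.15, 0.05])
log_A = np.log([[0.90, 0.08, 0.02],
                [0.20, 0.70, 0.10],
                [0.05, 0.25, 0.70]])
log_B = np.log([[0.7, 0.2, 0.1],      # P(observation symbol | state)
                [0.3, 0.5, 0.2],
                [0.1, 0.3, 0.6]])

def viterbi(obs):
    """Most likely hidden state sequence for a list of observation symbols."""
    V = log_pi + log_B[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = V[:, None] + log_A            # scores[i, j]: best path ending in i, then i -> j
        back.append(scores.argmax(axis=0))
        V = scores.max(axis=0) + log_B[:, o]
    path = [int(V.argmax())]
    for bp in reversed(back):                  # backtrack through stored argmaxes
        path.append(int(bp[path[-1]]))
    return path[::-1]

print(viterbi([0, 1, 1, 2, 2]))
```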
Reasoning about real-time systems with temporal interval logic constraints on multi-state automata
NASA Technical Reports Server (NTRS)
Gabrielian, Armen
1991-01-01
Models of real-time systems using a single paradigm often turn out to be inadequate, whether the paradigm is based on states, rules, event sequences, or logic. A model-based approach to reasoning about real-time systems is presented in which a temporal interval logic called TIL is employed to define constraints on a new type of high-level automata. The combination, called hierarchical multi-state (HMS) machines, can be used to formally model a real-time system, a dynamic set of requirements, the environment, heuristic knowledge about planning-related problem solving, and the computational states of the reasoning mechanism. In this framework, mathematical techniques were developed for: (1) proving the correctness of a representation; (2) planning of concurrent tasks to achieve goals; and (3) scheduling of plans to satisfy complex temporal constraints. HMS machines allow reasoning about a real-time system from a model of how truth arises instead of merely depending on what is true in a system.
EEG-based emotion recognition in music listening.
Lin, Yuan-Pin; Wang, Chi-Hong; Jung, Tzyy-Ping; Wu, Tien-Lin; Jeng, Shyh-Kang; Duann, Jeng-Ren; Chen, Jyh-Horng
2010-07-01
Ongoing brain activity can be recorded as electroencephalograph (EEG) to discover the links between emotional states and brain activity. This study applied machine-learning algorithms to categorize EEG dynamics according to subject self-reported emotional states during music listening. A framework was proposed to optimize EEG-based emotion recognition by systematically 1) seeking emotion-specific EEG features and 2) exploring the efficacy of the classifiers. Support vector machine was employed to classify four emotional states (joy, anger, sadness, and pleasure) and obtained an averaged classification accuracy of 82.29% +/- 3.06% across 26 subjects. Further, this study identified 30 subject-independent features that were most relevant to emotional processing across subjects and explored the feasibility of using fewer electrodes to characterize the EEG dynamics during music listening. The identified features were primarily derived from electrodes placed near the frontal and the parietal lobes, consistent with many of the findings in the literature. This study might lead to a practical system for noninvasive assessment of the emotional states in practical or clinical applications.
Predicting disease progression from short biomarker series using expert advice algorithm
NASA Astrophysics Data System (ADS)
Morino, Kai; Hirata, Yoshito; Tomioka, Ryota; Kashima, Hisashi; Yamanishi, Kenji; Hayashi, Norihiro; Egawa, Shin; Aihara, Kazuyuki
2015-05-01
Well-trained clinicians may be able to provide diagnosis and prognosis from very short biomarker series using information and experience gained from previous patients. Although mathematical methods can potentially help clinicians to predict the progression of diseases, there is no method so far that estimates the patient state from very short time-series of a biomarker for making diagnosis and/or prognosis by employing the information of previous patients. Here, we propose a mathematical framework for integrating other patients' datasets to infer and predict the state of the disease in the current patient based on their short history. We extend a machine-learning framework of "prediction with expert advice" to deal with unstable dynamics. We construct this mathematical framework by combining expert advice with a mathematical model of prostate cancer. Our model predicted well the individual biomarker series of patients with prostate cancer that are used as clinical samples.
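The classical exponential-weights forecaster underlying "prediction with expert advice" is compact enough to sketch. Here each "expert" stands in for a model fitted to a previous patient; the learning rate, the squared loss, and all data are assumptions, and the paper's extension to unstable dynamics is not reproduced.

```python
import numpy as np

def hedge_predict(expert_preds, outcomes, eta=1.0):
    """Aggregate expert forecasts with exponentially decaying weights on past losses.

    expert_preds: (T, K) predictions of K experts (e.g. K previous patients' models),
    outcomes:     (T,) observed biomarker values; both hypothetical here.
    """
    T, K = expert_preds.shape
    log_w = np.zeros(K)                         # log-weights avoid numerical underflow
    preds = np.empty(T)
    for t in range(T):
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        preds[t] = w @ expert_preds[t]          # weighted committee forecast
        log_w -= eta * (expert_preds[t] - outcomes[t]) ** 2   # penalize squared loss
    return preds
```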
Automated Analysis of Stateflow Models
NASA Technical Reports Server (NTRS)
Bourbouh, Hamza; Garoche, Pierre-Loic; Garion, Christophe; Gurfinkel, Arie; Kahsaia, Temesghen; Thirioux, Xavier
2017-01-01
Stateflow is a widely used modeling framework for embedded and cyber-physical systems where control software interacts with physical processes. In this work, we present a fully automated safety verification technique for Stateflow models. Our approach is twofold: (i) we faithfully compile Stateflow models into hierarchical state machines, and (ii) we use an automated logic-based verification engine to decide the validity of safety properties. The starting point of our approach is a denotational semantics of Stateflow. We propose a compilation process using continuation-passing style (CPS) denotational semantics. Our compilation technique preserves the structural and modal behavior of the system. The overall approach is implemented as an open-source toolbox that can be integrated into the existing MathWorks Simulink/Stateflow modeling framework. We present preliminary experimental evaluations that illustrate the effectiveness of our approach in code generation and safety verification of industrial-scale Stateflow models.
A Novel Extreme Learning Control Framework of Unmanned Surface Vehicles.
Wang, Ning; Sun, Jing-Chao; Er, Meng Joo; Liu, Yan-Cheng
2016-05-01
In this paper, an extreme learning control (ELC) framework using the single-hidden-layer feedforward network (SLFN) with random hidden nodes is proposed for tracking an unmanned surface vehicle suffering from unknown dynamics and external disturbances. By combining tracking errors with their derivatives, an error surface and transformed states are defined to encapsulate unknown dynamics and disturbances into a lumped vector field of transformed states. The lumped nonlinearity is further identified accurately by an extreme-learning-machine-based SLFN approximator, which requires neither a priori system knowledge nor tuning of input weights. Only the output weights of the SLFN need to be updated, by adaptive projection-based laws derived from the Lyapunov approach. Moreover, an error compensator is incorporated to suppress approximation residuals, thereby contributing to the robustness and global asymptotic stability of the closed-loop ELC system. Simulation studies and comprehensive comparisons demonstrate that the ELC framework achieves high accuracy in both tracking and approximation.
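The defining trait of an extreme learning machine, random untuned hidden weights with only the output layer solved for, fits in a few lines. The batch least-squares fit below is a stand-in for the paper's adaptive projection-based update laws, and the toy target function is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_fit(X, Y, n_hidden=50, ridge=1e-8):
    """Extreme learning machine: random SLFN hidden layer, least-squares output weights."""
    W_in = rng.standard_normal((X.shape[1], n_hidden))   # random input weights, never tuned
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W_in + b)                            # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
    return W_in, b, beta

def elm_predict(X, W_in, b, beta):
    return np.tanh(X @ W_in + b) @ beta

# Toy usage: approximate an unknown nonlinearity, standing in for the lumped vector field.
X = rng.uniform(-1, 1, (200, 2))
Y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
params = elm_fit(X, Y)
print(np.mean((elm_predict(X, *params) - Y) ** 2))       # training MSE
```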
Predicting disease progression from short biomarker series using expert advice algorithm.
Morino, Kai; Hirata, Yoshito; Tomioka, Ryota; Kashima, Hisashi; Yamanishi, Kenji; Hayashi, Norihiro; Egawa, Shin; Aihara, Kazuyuki
2015-05-20
Well-trained clinicians may be able to provide diagnosis and prognosis from very short biomarker series using information and experience gained from previous patients. Although mathematical methods can potentially help clinicians to predict the progression of diseases, there is no method so far that estimates the patient state from very short time-series of a biomarker for making diagnosis and/or prognosis by employing the information of previous patients. Here, we propose a mathematical framework for integrating other patients' datasets to infer and predict the state of the disease in the current patient based on their short history. We extend a machine-learning framework of "prediction with expert advice" to deal with unstable dynamics. We construct this mathematical framework by combining expert advice with a mathematical model of prostate cancer. Our model predicted well the individual biomarker series of patients with prostate cancer that are used as clinical samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolb, Brian; Lentz, Levi C.; Kolpak, Alexie M.
2017-04-26
Modern ab initio methods have rapidly increased our understanding of solid state materials properties, chemical reactions, and the quantum interactions between atoms. However, poor scaling often renders direct ab initio calculations intractable for large or complex systems. There are two obvious avenues through which to remedy this problem: (i) develop new, less expensive methods to calculate system properties, or (ii) make existing methods faster. This paper describes an open source framework designed to pursue both of these avenues. PROPhet (short for PROPerty Prophet) utilizes machine learning techniques to find complex, non-linear mappings between sets of material or system properties. The result is a single code capable of learning analytical potentials, non-linear density functionals, and other structure-property or property-property relationships. These capabilities enable highly accurate mesoscopic simulations, facilitate computation of expensive properties, and enable the development of predictive models for systematic materials design and optimization. Here, this work explores the coupling of machine learning to ab initio methods through means both familiar (e.g., the creation of various potentials and energy functionals) and less familiar (e.g., the creation of density functionals for arbitrary properties), serving both to demonstrate PROPhet’s ability to create exciting post-processing analysis tools and to open the door to improving ab initio methods themselves with these powerful machine learning techniques.
NASA Astrophysics Data System (ADS)
Salehi, Hadi; Das, Saptarshi; Chakrabartty, Shantanu; Biswas, Subir; Burgueño, Rigoberto
2017-04-01
This study proposes a novel strategy for damage identification in aircraft structures. The strategy was evaluated based on the simulation of binary data generated from self-powered wireless sensors employing a pulse switching architecture. The energy-aware pulse switching communication protocol uses single pulses instead of multi-bit packets for information delivery, resulting in discrete binary data. A system employing this energy-efficient technology must deal with time-delayed binary data due to the management of power budgets for sensing and communication. This paper presents an intelligent machine-learning framework based on a combination of low-rank matrix decomposition and pattern recognition (PR) methods. Further, data fusion is employed as part of the machine-learning framework to take into account the effect of data time delay on its interpretation. Simulated time-delayed binary data from self-powered sensors were used to determine damage indicator variables. The performance and accuracy of the damage detection strategy were examined and tested for the case of an aircraft horizontal stabilizer. Damage states were simulated on a finite element model by reducing stiffness in a region of the stabilizer's skin. The proposed strategy shows satisfactory performance in identifying the presence and location of damage, even with noisy and incomplete data. It is concluded that PR is a promising machine-learning algorithm for damage detection with time-delayed binary data from novel self-powered wireless sensors.
Tackling the x-ray cargo inspection challenge using machine learning
NASA Astrophysics Data System (ADS)
Jaccard, Nicolas; Rogers, Thomas W.; Morton, Edward J.; Griffin, Lewis D.
2016-05-01
The current infrastructure for non-intrusive inspection of cargo containers cannot accommodate exploding commerce volumes and increasingly stringent regulations. There is a pressing need to develop methods to automate parts of the inspection workflow, enabling expert operators to focus on a manageable number of high-risk images. To tackle this challenge, we developed a modular framework for automated X-ray cargo image inspection. Employing state-of-the-art machine learning approaches, including deep learning, we demonstrate high performance for empty container verification and specific threat detection. This work constitutes a significant step towards the partial automation of X-ray cargo image inspection.
A regularization approach to hydrofacies delineation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohlberg, Brendt; Tartakovsky, Daniel
2009-01-01
We consider an inverse problem of identifying complex internal structures of composite (geological) materials from sparse measurements of system parameters and system states. Two conceptual frameworks for identifying internal boundaries between constitutive materials in a composite are considered. A sequential approach relies on support vector machines, nearest neighbor classifiers, or geostatistics to reconstruct boundaries from measurements of system parameters and then uses system-state data to refine the reconstruction. A joint approach inverts the two data sets simultaneously by employing a regularization approach.
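As a sketch of the first stage of such a sequential approach, the nearest-neighbor variant is the simplest: classify every point of a grid by majority vote of its nearest sparse measurements. The synthetic two-material geometry below is invented, and the refinement from system-state data is not shown.

```python
import numpy as np

def knn_facies(x_query, x_obs, labels, k=5):
    """Assign a facies label to x_query by majority vote of the k nearest measurements."""
    d = np.linalg.norm(x_obs - x_query, axis=1)
    nearest = labels[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

# Sparse synthetic "measurements" of a two-material composite (hypothetical geometry).
rng = np.random.default_rng(2)
x_obs = rng.uniform(0, 1, (40, 2))
labels = (x_obs[:, 0] + 0.3 * np.sin(6 * x_obs[:, 1]) > 0.5).astype(int)

# Reconstruct the internal boundary on a grid from the sparse samples.
grid = np.array([[i / 49, j / 49] for i in range(50) for j in range(50)])
field = np.array([knn_facies(g, x_obs, labels) for g in grid]).reshape(50, 50)
```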
Language Acquisition and Machine Learning.
1986-02-01
machine learning and examine its implications for computational models of language acquisition. As a framework for understanding this research, the authors propose four component tasks involved in learning from experience: aggregation, clustering, characterization, and storage. They then consider four common problems studied by machine learning researchers (learning from examples, heuristic learning, conceptual clustering, and learning macro-operators), describing each in terms of their framework. After this, they turn to the problem of grammar
Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun
2017-02-06
In this paper, we propose a three-dimensional design and evaluation framework and process, based on a probabilistic motion synthesis algorithm and a biomechanical analysis system, for the design of the Smith machine and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human-machine-environment model as well as a squat motion synthesis system and biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between a human body and machine or the ground are modeled as joints with constraints at contact points. Next, we generated Smith squat motion using the motion synthesis program based on a Gaussian process regression algorithm with a set of given values for independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and squat motion. We validated the model and algorithm through physical experiments measuring the electromyography (EMG) signals, ground forces, and squat motions as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines.
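Gaussian process regression, the engine behind such a motion synthesis step, reduces to two linear solves once a kernel is chosen. The RBF kernel, its length scale, and the toy input/output pair below are all assumptions; the paper's actual independent variables and motion representation are not reproduced.

```python
import numpy as np

def rbf(A, B, length=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def gp_regress(X_train, y_train, X_test, noise=1e-4):
    """Gaussian process regression: posterior mean and variance at test inputs."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf(X_test, X_train)
    mean = K_s @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum('ij,ji->i', K_s, np.linalg.solve(K, K_s.T))  # prior k(x,x)=1
    return mean, var

# Toy stand-in: predict a motion coordinate from one normalized design variable.
X = np.linspace(0, 1, 20)[:, None]
y = np.sin(2 * np.pi * X[:, 0])              # synthetic joint-angle curve
mu, var = gp_regress(X, y, np.array([[0.37]]))
print(mu, var)
```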
Machine-learned and codified synthesis parameters of oxide materials
NASA Astrophysics Data System (ADS)
Kim, Edward; Huang, Kevin; Tomala, Alex; Matthews, Sara; Strubell, Emma; Saunders, Adam; McCallum, Andrew; Olivetti, Elsa
2017-09-01
Predictive materials design has rapidly accelerated in recent years with the advent of large-scale resources, such as materials structure and property databases generated by ab initio computations. In the absence of analogous ab initio frameworks for materials synthesis, high-throughput and machine learning techniques have recently been harnessed to generate synthesis strategies for select materials of interest. Still, a community-accessible, autonomously-compiled synthesis planning resource which spans across materials systems has not yet been developed. In this work, we present a collection of aggregated synthesis parameters computed using the text contained within over 640,000 journal articles using state-of-the-art natural language processing and machine learning techniques. We provide a dataset of synthesis parameters, compiled autonomously across 30 different oxide systems, in a format optimized for planning novel syntheses of materials.
NASA Astrophysics Data System (ADS)
Yu, Jianbo
2015-12-01
Prognostics is an effective route to zero-downtime performance, maximum productivity and proactive maintenance of machines. Prognostics aims to assess and predict the time evolution of machine health degradation so that machine failures can be predicted and prevented. A novel prognostics system is developed based on the data-model-fusion scheme using the Bayesian inference-based self-organizing map (SOM) and an integration of logistic regression (LR) and high-order particle filtering (HOPF). In this prognostics system, a baseline SOM is constructed to model the data distribution space of a healthy machine under the assumption that predictable fault patterns are not available. A Bayesian inference-based probability (BIP) derived from the baseline SOM is developed as a quantitative indication of machine health degradation. BIP is capable of offering a failure probability for the monitored machine, which has an intuitive interpretation related to the health degradation state. Based on historic BIPs, the constructed LR and its modeling noise constitute a high-order Markov process (HOMP) to describe machine health propagation. HOPF is used to solve the HOMP estimation to predict the evolution of the machine health in the form of a probability density function (PDF). An on-line model update scheme is developed to quickly adapt the Markov process to changes in machine health dynamics. The experimental results on a bearing test-bed illustrate the potential applications of the proposed system as an effective and simple tool for machine health prognostics.
ClearTK 2.0: Design Patterns for Machine Learning in UIMA
Bethard, Steven; Ogren, Philip; Becker, Lee
2014-01-01
ClearTK adds machine learning functionality to the UIMA framework, providing wrappers to popular machine learning libraries, a rich feature extraction library that works across different classifiers, and utilities for applying and evaluating machine learning models. Since its inception in 2008, ClearTK has evolved in response to feedback from developers and the community. This evolution has followed a number of important design principles including: conceptually simple annotator interfaces, readable pipeline descriptions, minimal collection readers, type system agnostic code, modules organized for ease of import, and assisting user comprehension of the complex UIMA framework. PMID:29104966
ClearTK 2.0: Design Patterns for Machine Learning in UIMA.
Bethard, Steven; Ogren, Philip; Becker, Lee
2014-05-01
ClearTK adds machine learning functionality to the UIMA framework, providing wrappers to popular machine learning libraries, a rich feature extraction library that works across different classifiers, and utilities for applying and evaluating machine learning models. Since its inception in 2008, ClearTK has evolved in response to feedback from developers and the community. This evolution has followed a number of important design principles including: conceptually simple annotator interfaces, readable pipeline descriptions, minimal collection readers, type system agnostic code, modules organized for ease of import, and assisting user comprehension of the complex UIMA framework.
Developing and Validating Practical Eye Metrics for the Sense-Assess-Augment Framework
2015-09-29
Sense-Assess-Augment (SAA) Framework. To better close the loop between the human and machine teammates, AFRL’s Human Performance Wing and Human… Sense-Assess-Augment (SAA) framework, which is designed to sense a suite of physiological signals from the operator, use these signals to assess the… to use psychophysiological measures to improve human-machine teamwork (such as Biocybernetics or Augmented Cognition), the AFRL-SAA research program
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alexander J.
There is a lack of state-of-the-art HPC simulation tools for simulating general quantum computing. Furthermore, there are no real software tools that integrate current quantum computers into existing classical HPC workflows. This product, the Quantum Virtual Machine (QVM), solves this problem by providing an extensible framework for pluggable virtual, or physical, quantum processing units (QPUs). It enables the execution of low-level quantum assembly codes and returns the results of such executions.
Integrating robotic action with biologic perception: A brain-machine symbiosis theory
NASA Astrophysics Data System (ADS)
Mahmoudi, Babak
In patients with motor disability, the natural cyclic flow of information between the brain and the external environment is disrupted by their limb impairment. Brain-Machine Interfaces (BMIs) aim to provide new communication channels between the brain and environment by directly translating the brain's internal states into actions. To enable the user in a wide range of daily-life activities, the challenge is designing neural decoders that autonomously adapt to different tasks, environments, and changes in the pattern of neural activity. In this dissertation, a novel decoding framework for BMIs is developed in which a computational agent autonomously learns how to translate neural states into action by maximizing a measure of the goal shared between user and agent. Since the agent and the brain share the same goal, a symbiotic relationship between them evolves; this decoding paradigm is therefore called a Brain-Machine Symbiosis (BMS) framework. A decoding agent was implemented within the BMS framework based on the Actor-Critic method of Reinforcement Learning. The role of the Actor as a neural decoder was to find a mapping between the neural representation of motor states in the primary motor cortex (MI) and robot actions in order to solve reaching tasks. The Actor learned the optimal control policy using an evaluative feedback that was estimated by the Critic directly from the user's neural activity in the Nucleus Accumbens (NAcc). Through a series of computational neuroscience studies in a cohort of rats, it was demonstrated that NAcc could provide a useful evaluative feedback by predicting the increase or decrease in the probability of earning reward based on environmental conditions. Using a closed-loop BMI simulator, it was demonstrated that the Actor-Critic decoding architecture was able to adapt to different tasks as well as to changes in the pattern of neural activity. The custom design of a dual micro-wire array enabled simultaneous implantation in MI and NAcc for the development of a full closed-loop system. The Actor-Critic decoding architecture was able to solve the brain-controlled reaching task using a robotic arm by capturing the interdependency between the simultaneous action representation in MI and reward expectation in NAcc.
Decision Manifold Approximation for Physics-Based Simulations
NASA Technical Reports Server (NTRS)
Wong, Jay Ming; Samareh, Jamshid A.
2016-01-01
With the recent surge of success in big-data-driven deep learning problems, many of these frameworks focus on architecture design and the use of massive databases. However, in some scenarios massive sets of data may be difficult, and in some cases infeasible, to acquire. In this paper we discuss a trajectory-based framework that quickly learns the underlying decision manifold of binary simulation classifications while judiciously selecting exploratory target states to minimize the number of required simulations. Furthermore, we draw particular attention to the simulation prediction application, idealized to the case where failures in simulations can be predicted and avoided, providing machine intelligence to novice analysts. We demonstrate this framework in various forms of simulations and discuss its efficacy.
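The core loop of such judicious sampling can be illustrated with a crude stand-in: run the expensive simulation only at candidate states where the current labels disagree locally and coverage is thin. The toy "simulation", the scoring rule, and all budgets below are assumptions, not the paper's trajectory-based method.

```python
import numpy as np

rng = np.random.default_rng(3)

def run_simulation(x):
    """Stand-in binary simulation outcome (success/failure); hypothetical physics."""
    return int(x[0] ** 2 + 2 * x[1] ** 2 < 1.0)

X = list(rng.uniform(-2, 2, (5, 2)))           # small seed set of simulated states
y = [run_simulation(x) for x in X]
pool = rng.uniform(-2, 2, (500, 2))            # candidate exploratory target states

for _ in range(30):                            # 30 extra simulations instead of 500
    D = np.linalg.norm(pool[:, None, :] - np.array(X)[None, :, :], axis=2)
    two = np.argsort(D, axis=1)[:, :2]         # two nearest labeled states per candidate
    disagree = np.array(y)[two[:, 0]] != np.array(y)[two[:, 1]]
    score = disagree * D.min(axis=1)           # near the decision manifold AND unexplored
    pick = int(score.argmax())
    X.append(pool[pick])
    y.append(run_simulation(pool[pick]))       # labeled set now traces the boundary
```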
Machinability of IPS Empress 2 framework ceramic.
Schmidt, C; Weigl, P
2000-01-01
Using ceramic materials for the automatic production of ceramic dentures by CAD/CAM is a challenge, because many technological, medical, and optical demands must be considered. The IPS Empress 2 framework ceramic meets most of them. This study shows the possibilities for machining this ceramic with economical parameters. The long lifetime required of ceramic dentures demands a ductile machined surface to avoid the well-known subsurface damage of brittle materials caused by machining. Slow and rapid damage propagation begins at break-outs and cracks, and limits lifetime significantly. Therefore, ductile machined surfaces are an important requirement when machining dental ceramics. The machining tests were performed with various parameters such as tool grain size and feed speed. Denture ceramics were machined by jig grinding on a 5-axis CNC milling machine (Maho HGF 500) with a high-speed spindle of up to 120,000 rpm. The results of the wear test indicate low tool wear: with one tool, eight occlusal surfaces can be machined, including roughing and finishing. One occlusal surface takes about 60 min of machining time. Recommended parameters for roughing are middle diamond grain size (D107), cutting speed v(c) = 4.7 m/s, feed speed v(ft) = 1000 mm/min, depth of cut a(e) = 0.06 mm, width of contact a(p) = 0.8 mm; and for finishing, ultra-fine diamond grain size (D46), cutting speed v(c) = 4.7 m/s, feed speed v(ft) = 100 mm/min, depth of cut a(e) = 0.02 mm, width of contact a(p) = 0.8 mm. The results of the machining tests give a reference for using IPS Empress(R) 2 framework ceramic in CAD/CAM systems. Copyright 2000 John Wiley & Sons, Inc.
Ikushima, Koujiro; Arimura, Hidetaka; Jin, Ze; Yabu-Uchi, Hidetake; Kuwazuru, Jumpei; Shioyama, Yoshiyuki; Sasaki, Tomonari; Honda, Hiroshi; Sasaki, Masayuki
2017-01-01
We have proposed a computer-assisted framework for machine-learning-based delineation of gross tumor volumes (GTVs) following an optimum contour selection (OCS) method. The key idea of the proposed framework was to feed image features around GTV contours (determined based on the knowledge of radiation oncologists) into a machine-learning classifier during the training step, after which the classifier produces the 'degree of GTV' for each voxel in the testing step. Initial GTV regions were extracted using a support vector machine (SVM) that learned the image features inside and outside each tumor region (determined by radiation oncologists). The leave-one-out-by-patient test was employed for the training and testing steps of the proposed framework. The final GTV regions were determined using the OCS method, which can select a globally optimal object contour based on multiple active delineations with an LSM around the GTV. The efficacy of the proposed framework was evaluated in 14 lung cancer cases [solid: 6, ground-glass opacity (GGO): 4, mixed GGO: 4] using the 3D Dice similarity coefficient (DSC), which denotes the degree of region similarity between the GTVs contoured by radiation oncologists and those determined using the proposed framework. The proposed framework achieved an average DSC of 0.777 for the 14 cases, whereas the OCS-based framework produced an average DSC of 0.507. The average DSCs for GGO and mixed GGO obtained by the proposed framework were 0.763 and 0.701, respectively. The proposed framework can be employed as a tool to assist radiation oncologists in delineating various GTV regions. © The Author 2016. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
The value of prior knowledge in machine learning of complex network systems.
Ferranti, Dana; Krane, David; Craft, David
2017-11-15
Our overall goal is to develop machine-learning approaches, based on genomics and other relevant accessible information, for use in predicting how a patient will respond to a given proposed drug or treatment. Given the complexity of this problem, we begin by developing, testing and analyzing learning methods using data from simulated systems, which allows us access to a known ground truth. We examine the benefits of using prior system knowledge and investigate how learning accuracy depends on various system parameters as well as the amount of training data available. The simulations are based on Boolean networks (directed graphs with 0/1 node states and logical node update rules), which are the simplest computational systems that can mimic the dynamic behavior of cellular systems. Boolean networks can be generated and simulated at scale, have complex yet cyclical dynamics, and as such provide a useful framework for developing machine-learning algorithms for modular and hierarchical networks such as biological systems in general and cancer in particular. We demonstrate that utilizing prior knowledge (in the form of network connectivity information), without detailed state equations, greatly increases the power of machine-learning algorithms to predict network steady-state node values ('phenotypes') and perturbation responses ('drug effects'). Links to codes and datasets here: https://gray.mgh.harvard.edu/people-directory/71-david-craft-phd. dcraft@broadinstitute.org. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
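A Boolean network of the kind described is easy to simulate directly; iterating the synchronous update until a state repeats yields the attractor (a fixed point or a cycle) whose node values play the role of phenotypes. The three-node wiring below is invented for illustration.

```python
# A 3-node Boolean network with hypothetical wiring; each rule reads the previous state.
rules = {
    'A': lambda s: s['B'] and not s['C'],
    'B': lambda s: s['A'] or s['C'],
    'C': lambda s: not s['A'],
}

def step(state):
    """Synchronous update: all nodes apply their logical rule to the same old state."""
    return {node: int(rule(state)) for node, rule in rules.items()}

def run_to_attractor(state, max_steps=100):
    """Iterate updates until a previously seen state recurs; return the attractor."""
    seen = []
    while state not in seen and len(seen) < max_steps:
        seen.append(state)
        state = step(state)
    return seen[seen.index(state):]            # a steady state if the cycle has length 1

print(run_to_attractor({'A': 1, 'B': 0, 'C': 1}))
```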
Pyo, Sujin; Lee, Jaewook; Cha, Mincheol; Jang, Huisu
2017-01-01
The prediction of trends in stock and index prices is an important issue for market participants. Investors set trading and fiscal strategies based on such trends, and considerable research across academic fields has sought to forecast financial markets. This study predicts the trends of the Korea Composite Stock Price Index 200 (KOSPI 200) prices using nonparametric machine learning models: an artificial neural network and support vector machines with polynomial and radial basis function kernels. In addition, this study examines controversial issues and tests hypotheses about them. Our results are inconsistent with those of precedent research, which is generally considered to have high prediction performance. Moreover, Google Trends proved not to be an effective factor in predicting the KOSPI 200 index prices in our frameworks. Furthermore, the ensemble methods did not improve the accuracy of the prediction.
Pyo, Sujin; Lee, Jaewook; Cha, Mincheol
2017-01-01
The prediction of trends in stock and index prices is an important issue for market participants. Investors set trading and fiscal strategies based on such trends, and considerable research across academic fields has sought to forecast financial markets. This study predicts the trends of the Korea Composite Stock Price Index 200 (KOSPI 200) prices using nonparametric machine learning models: an artificial neural network and support vector machines with polynomial and radial basis function kernels. In addition, this study examines controversial issues and tests hypotheses about them. Our results are inconsistent with those of precedent research, which is generally considered to have high prediction performance. Moreover, Google Trends proved not to be an effective factor in predicting the KOSPI 200 index prices in our frameworks. Furthermore, the ensemble methods did not improve the accuracy of the prediction. PMID:29136004
Quantum ensembles of quantum classifiers.
Schuld, Maria; Petruccione, Francesco
2018-02-09
Quantum machine learning is witnessing an increasing number of quantum algorithms for data-driven decision making, a problem with potential applications ranging from automated image recognition to medical diagnosis. Many of those algorithms are implementations of quantum classifiers, or models for the classification of data inputs with a quantum computer. Following the success of collective decision making with ensembles in classical machine learning, this paper introduces the concept of quantum ensembles of quantum classifiers. Creating the ensemble corresponds to a state preparation routine, after which the quantum classifiers are evaluated in parallel and their combined decision is accessed by a single-qubit measurement. This framework naturally allows for exponentially large ensembles in which, similar to Bayesian learning, the individual classifiers do not have to be trained. As an example, we analyse an exponentially large quantum ensemble in which each classifier is weighted according to its performance in classifying the training data, leading to new results for quantum as well as classical machine learning.
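The accuracy-weighted ensemble analysed in the paper has a direct classical analogue that can be sketched without any quantum machinery: draw a large family of untrained classifiers, weight each by its training accuracy, and take the weighted collective decision. The linear classifier family and all data below are invented; the quantum version would hold these weights in superposition and read the decision from a single-qubit measurement.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data and an (exponentially large, here truncated) family of untrained classifiers.
X = rng.standard_normal((100, 2))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
thetas = rng.standard_normal((4096, 2))          # h_theta(x) = sign(theta . x)

preds = np.sign(X @ thetas.T)                    # all classifiers evaluated in parallel
acc = (preds == y[:, None]).mean(axis=0)         # per-classifier training accuracy
w = acc / acc.sum()                              # accuracy-proportional weights

def ensemble_predict(x_new):
    """Weighted collective decision of the whole ensemble."""
    return np.sign(np.sign(thetas @ x_new) @ w)

print(ensemble_predict(np.array([1.0, 0.0])))
```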
ERIC Educational Resources Information Center
Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.
This document, which reflects Mississippi's statutory requirement that instructional programs be based on core curricula and performance-based assessment, contains outlines of the instructional units required in local instructional management plans and daily lesson plans for machine tool operation/machine shop I and II. Presented first are a…
A 3D Human-Machine Integrated Design and Analysis Framework for Squat Exercises with a Smith Machine
Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun
2017-01-01
In this paper, we propose a three-dimensional design and evaluation framework and process, based on a probabilistic motion synthesis algorithm and a biomechanical analysis system, for the design of the Smith machine and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human–machine–environment model as well as a squat motion synthesis system and biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between a human body and machine or the ground are modeled as joints with constraints at contact points. Next, we generated Smith squat motion using the motion synthesis program based on a Gaussian process regression algorithm with a set of given values for independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and squat motion. We validated the model and algorithm through physical experiments measuring the electromyography (EMG) signals, ground forces, and squat motions as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines. PMID:28178184
Gorban, A N; Mirkes, E M; Zinovyev, A
2016-12-01
Most machine learning approaches have stemmed from applying the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1).
Practical Framework: Implementing OEE Method in Manufacturing Process Environment
NASA Astrophysics Data System (ADS)
Maideen, N. C.; Sahudin, S.; Mohd Yahya, N. H.; Norliawati, A. O.
2016-02-01
A manufacturing process environment requires reliable machinery in order to satisfy market demand. Ideally, a reliable machine is expected to operate and produce a quality product at its maximum designed capability. However, for various reasons, a machine is usually unable to achieve the desired performance. Since this performance shortfall affects the productivity of the system, a measurement technique should be applied. Overall Equipment Effectiveness (OEE) is a good method to measure the performance of a machine, and the reliable result produced by OEE can then be used to propose a suitable corrective action. Many published papers discuss the purpose and benefits of OEE, covering the what and why factors; however, the how factor, especially the implementation of OEE in a manufacturing process environment, has not yet been addressed. Thus, this paper presents a practical framework for implementing OEE, and a case study is discussed to explain each proposed step in detail. The proposed framework is beneficial to engineers, especially beginners, who want to start measuring machine performance and later improve it.
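The OEE metric itself is a simple product of three ratios, OEE = Availability x Performance x Quality, which a short function makes concrete. The shift figures below are hypothetical, and the ~85% "world-class" benchmark in the comment is a commonly cited rule of thumb rather than a claim from this paper.

```python
def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    """Overall Equipment Effectiveness = Availability x Performance x Quality."""
    run_time = planned_time - downtime
    availability = run_time / planned_time                  # uptime fraction
    performance = ideal_cycle_time * total_count / run_time # speed vs. design speed
    quality = good_count / total_count                      # first-pass yield
    return availability * performance * quality

# Hypothetical shift: 480 min planned, 47 min down, 1.0 min/part ideal, 400 made, 388 good.
print(oee(480, 47, 1.0, 400, 388))   # ~0.81, versus a commonly cited ~0.85 benchmark
```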
Mimicking Nonequilibrium Steady States with Time-Periodic Driving
NASA Astrophysics Data System (ADS)
Raz, O.; Subaşı, Y.; Jarzynski, C.
2016-04-01
Under static conditions, a system satisfying detailed balance generically relaxes to an equilibrium state in which there are no currents. To generate persistent currents, either detailed balance must be broken or the system must be driven in a time-dependent manner. A stationary system that violates detailed balance evolves to a nonequilibrium steady state (NESS) characterized by fixed currents. Conversely, a system that satisfies instantaneous detailed balance but is driven by the time-periodic variation of external parameters—also known as a stochastic pump (SP)—reaches a periodic state with nonvanishing currents. In both cases, these currents are maintained at the cost of entropy production. Are these two paradigmatic scenarios effectively equivalent? For discrete-state systems, we establish a mapping between nonequilibrium stationary states and stochastic pumps. Given a NESS characterized by a particular set of stationary probabilities, currents, and entropy production rates, we show how to construct a SP with exactly the same (time-averaged) values. The mapping works in the opposite direction as well. These results establish a proof of principle: They show that stochastic pumps are able to mimic the behavior of nonequilibrium steady states, and vice versa, within the theoretical framework of discrete-state stochastic thermodynamics. Nonequilibrium steady states and stochastic pumps are often used to model, respectively, biomolecular motors driven by chemical reactions and artificial molecular machines steered by the variation of external, macroscopic parameters. Our results loosely suggest that anything a biomolecular machine can do, an artificial molecular machine can do equally well. We illustrate this principle by showing that kinetic proofreading, a NESS mechanism that explains the low error rates in biochemical reactions, can be effectively mimicked by a constrained periodic driving.
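The discrete-state bookkeeping this mapping rests on, stationary probabilities, currents, and entropy production of a master equation, fits in a few lines. The three-state rates below are illustrative only; any cycle that breaks detailed balance will show nonzero stationary currents.

```python
import numpy as np

# Rate matrix of a 3-state cycle that breaks detailed balance (rates illustrative).
k = np.array([[0.0, 2.0, 1.0],     # k[i, j]: jump rate from state i to state j
              [1.0, 0.0, 2.0],
              [2.0, 1.0, 0.0]])
W = k.T - np.diag(k.sum(axis=1))   # master-equation generator: dp/dt = W @ p

vals, vecs = np.linalg.eig(W)      # stationary distribution = null vector of W
p = np.real(vecs[:, np.argmin(np.abs(vals))])
p = p / p.sum()

flux = p[:, None] * k              # flux[i, j] = p_i * k_ij
J = flux - flux.T                  # steady-state probability currents
mask = (k > 0) & (k.T > 0)         # only pairs with rates in both directions
sigma = 0.5 * np.sum(J[mask] * np.log(flux[mask] / flux.T[mask]))  # entropy production rate
print(p, J[0, 1], sigma)           # nonzero current and sigma > 0 signal a NESS
```

A stochastic pump mimicking this NESS would, per the paper, reproduce the same time-averaged probabilities, currents, and entropy production via periodic parameter variation.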
The Security of Machine Learning
2008-04-24
Machine learning has become a fundamental tool for computer security, since it can rapidly evolve to changing and complex situations. That… adaptability is also a vulnerability: attackers can exploit machine learning systems. We present a taxonomy identifying and analyzing attacks against machine… We use our framework to survey and analyze the literature of attacks against machine learning systems. We also illustrate our taxonomy by showing
Application of Fuzzy TOPSIS for evaluating machining techniques using sustainability metrics
NASA Astrophysics Data System (ADS)
Digalwar, Abhijeet K.
2018-04-01
Sustainable processes and techniques have received increasing attention over the last few decades due to rising concerns over the environment, an improved focus on productivity, and increasingly stringent environmental and occupational health and safety norms. The present work analyzes the research on sustainable machining techniques and identifies the techniques and parameters on which the sustainability of a process is evaluated. Based on this analysis, these parameters are adopted as criteria to evaluate different sustainable machining techniques, namely Cryogenic Machining, Dry Machining, Minimum Quantity Lubrication (MQL) and High Pressure Jet Assisted Machining (HPJAM), using a fuzzy TOPSIS framework. In order to facilitate easy arithmetic, the linguistic variables represented by fuzzy numbers are transformed into crisp numbers based on graded mean representation. Cryogenic machining was found to be the best alternative sustainable technique under the fuzzy TOPSIS framework adopted. The paper provides a method to deal with multi-criteria decision-making problems in a complex and linguistic environment.
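Once the fuzzy linguistic scores are defuzzified into crisp numbers, the TOPSIS ranking itself is mechanical: normalize, weight, and rank by closeness to the ideal solution. The criteria, weights, and scores below are invented placeholders, not the paper's data.

```python
import numpy as np

def topsis(M, weights, benefit):
    """Rank alternatives (rows of M) against criteria (columns) by closeness to the ideal."""
    R = M / np.linalg.norm(M, axis=0)              # vector-normalize each criterion
    V = R * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - worst, axis=1)
    return d_minus / (d_plus + d_minus)            # higher = closer to the ideal

# Hypothetical crisp scores (after graded-mean defuzzification) for 4 techniques on
# 3 criteria: tool life (benefit), surface quality (benefit), energy use (cost).
M = np.array([[8.0, 9.0, 6.0],    # cryogenic machining
              [5.0, 6.0, 3.0],    # dry machining
              [7.0, 7.0, 4.0],    # MQL
              [6.0, 8.0, 7.0]])   # HPJAM
print(topsis(M, np.array([0.4, 0.4, 0.2]), np.array([True, True, False])))
```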
Design of reinforcement welding machine within steel framework for marine engineering
NASA Astrophysics Data System (ADS)
Wang, Gang; Wu, Jin
2017-04-01
In this project, a design scheme in which a reinforcement welding machine is added within the steel framework is proposed, based on the double-side welding technology for box-beam structures in marine engineering. The design and development of the circuit and transmission mechanism for the new welding equipment were then completed, and a sample machine was built. Finally, a trial run was carried out. The main technical parameters of the equipment are: working stroke ≥1500 mm, welding speed 8~15 cm/min, and welding sheet thickness ≥20 mm.
Deep Restricted Kernel Machines Using Conjugate Feature Duality.
Suykens, Johan A K
2017-08-01
The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels, with a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.
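One ingredient the framework builds on, least squares SVM regression, reduces in the dual to a single linear system, which the sketch below solves with an RBF kernel. The kernel choice, hyperparameters, and toy data are assumptions; the conjugate feature duality and the coupling of levels into a deep RKM are not reproduced here.

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=0.5):
    """Least squares SVM regression: one linear system in bias b and duals alpha."""
    K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
    n = len(X)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                          # bias b, dual variables alpha

def lssvm_predict(X_new, X, b, alpha, sigma=0.5):
    K = np.exp(-((X_new[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
    return K @ alpha + b

X = np.linspace(-2, 2, 30)[:, None]                 # toy 1-D regression problem
b, alpha = lssvm_fit(X, np.sinc(X[:, 0]))
print(lssvm_predict(np.array([[0.5]]), X, b, alpha))
```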
Michaud, Claude; Baudier, François
2007-01-01
Conflicts of interest between the food industry and public decision makers have increasingly multiplied over the last few years, especially within the context of implementing the French National Nutrition Programme. This paper describes the rhetoric and the strategies developed by the private sector in order to counter the law's implementation and enforcement based on a concrete example, namely, the removal of vending machines from schools. After having evoked possibilities of developing new partnerships as suggested by national and international health authorities, it reaffirms the right and the duty of the State to regulate within the framework of a health promotion policy, an approach which integrates the necessary open democratic public debate between the different sectors.
A Symbiotic Framework for coupling Machine Learning and Geosciences in Prediction and Predictability
NASA Astrophysics Data System (ADS)
Ravela, S.
2017-12-01
In this presentation we review the two directions of a symbiotic relationship between machine learning and the geosciences in relation to prediction and predictability. In the first direction, we develop ensemble, information-theoretic and manifold learning frameworks to adaptively improve state and parameter estimates in nonlinear, high-dimensional, non-Gaussian problems, showing in particular that tractable variational approaches can be produced. We demonstrate these applications in the context of autonomous mapping of environmental coherent structures and other idealized problems. In the reverse direction, we show that data assimilation, particularly probabilistic approaches to filtering and smoothing, offers a novel and useful way to train neural networks, and serves as a better basis than gradient-based approaches when we must quantify uncertainty in association with nonlinear, chaotic processes. In many inference problems in the geosciences we seek to build reduced models to characterize local sensitivities, adjoints or other mechanisms that propagate innovations and errors. Here, the use of neural approaches for such propagation, trained using ensemble data assimilation, provides a novel framework. Through these two examples of inference problems in the earth sciences, we show not only that learning is useful for broadening existing methodology, but also, in reverse, that geophysical methodology can be used to influence paradigms in learning.
Abstract quantum computing machines and quantum computational logics
NASA Astrophysics Data System (ADS)
Chiara, Maria Luisa Dalla; Giuntini, Roberto; Sergioli, Giuseppe; Leporini, Roberto
2016-06-01
Classical and quantum parallelism are deeply different, although it is sometimes claimed that quantum Turing machines are nothing but special examples of classical probabilistic machines. We introduce the concepts of deterministic state machine, classical probabilistic state machine and quantum state machine. On this basis, we discuss the question: To what extent can quantum state machines be simulated by classical probabilistic state machines? Each state machine is devoted to a single task determined by its program. Real computers, however, behave differently, being able to solve different kinds of problems. This capacity can be modeled, in the quantum case, by the mathematical notion of abstract quantum computing machine, whose different programs determine different quantum state machines. The computations of abstract quantum computing machines can be linguistically described by the formulas of a particular form of quantum logic, termed quantum computational logic.
A multitasking finite state architecture for computer control of an electric powertrain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burba, J.C.
1984-01-01
Finite state techniques provide a common design language between the control engineer and the computer engineer for event-driven computer control systems. They simplify communication and provide a highly maintainable control system understandable by both. This paper describes the development of a control system for an electric vehicle powertrain utilizing finite state concepts. The basics of finite state automata are provided as a framework to discuss a unique multitasking software architecture developed for this application. The architecture employs conventional time-sliced techniques with task scheduling controlled by a finite state machine representation of the control strategy of the powertrain. The complexities of excitation variable sampling in this environment are also considered.
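The pattern of a finite state machine gating which tasks a time-sliced scheduler runs can be shown in miniature. The states, events, and task names below are hypothetical, not the actual powertrain control strategy.

```python
# Minimal sketch: an FSM selecting which control tasks run in each time slice.
TRANSITIONS = {
    ('IDLE',  'key_on'):   'READY',
    ('READY', 'accel'):    'DRIVE',
    ('DRIVE', 'brake'):    'REGEN',
    ('REGEN', 'stopped'):  'READY',
    ('DRIVE', 'overtemp'): 'FAULT',
}
TASKS = {   # tasks scheduled while in a given state
    'READY': ['sample_inputs'],
    'DRIVE': ['sample_inputs', 'torque_control'],
    'REGEN': ['sample_inputs', 'regen_control'],
    'FAULT': ['shutdown'],
}

state = 'IDLE'
for event in ['key_on', 'accel', 'brake', 'stopped']:
    state = TRANSITIONS.get((state, event), state)   # unknown events leave state unchanged
    for task in TASKS.get(state, []):
        print(state, '->', task)                     # a real system would dispatch the task
```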
Machine listening intelligence
NASA Astrophysics Data System (ADS)
Cella, C. E.
2017-05-01
This manifesto paper introduces machine listening intelligence, an integrated research framework for acoustic and musical signal modelling, based on signal processing, deep learning and computational musicology.
A Machine Learning Framework for Plan Payment Risk Adjustment.
Rose, Sherri
2016-12-01
To introduce cross-validation and a nonparametric machine learning framework for plan payment risk adjustment and then assess whether they have the potential to improve risk adjustment. 2011-2012 Truven MarketScan database. We compare the performance of multiple statistical approaches within a broad machine learning framework for estimating risk adjustment formulas. Total annual expenditure was predicted using age, sex, geography, inpatient diagnoses, and hierarchical condition category variables. The methods included regression, penalized regression, decision trees, neural networks, and an ensemble super learner, all in concert with screening algorithms that reduce the set of variables considered. The performance of these methods was compared based on cross-validated R². Our results indicate that a simplified risk adjustment formula selected via this nonparametric framework maintains much of the efficiency of a traditional larger formula. The ensemble approach also outperformed classical regression and all other algorithms studied. The implementation of cross-validated machine learning techniques provides novel insight into risk adjustment estimation, possibly allowing for a simplified formula, thereby reducing incentives for increased coding intensity as well as the ability of insurers to "game" the system with aggressive diagnostic upcoding. © Health Research and Educational Trust.
NASA Astrophysics Data System (ADS)
Sizov, Gennadi Y.
In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as, winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.
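The differential evolution core of such a design synthesis loop is compact; in the dissertation every candidate evaluation would be a finite element solve, which the sketch below replaces with a toy cost function. All control parameters (population size, F, CR) are conventional defaults, not the dissertation's settings.

```python
import numpy as np

rng = np.random.default_rng(5)

def differential_evolution(cost, bounds, pop=20, F=0.8, CR=0.9, gens=100):
    """DE/rand/1/bin minimizer; cost would wrap the electromagnetic FE evaluation."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    P = lo + rng.uniform(size=(pop, len(lo))) * (hi - lo)   # initial design population
    f = np.array([cost(x) for x in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = P[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # differential mutation
            cross = rng.uniform(size=len(lo)) < CR          # binomial crossover mask
            cross[rng.integers(len(lo))] = True             # ensure at least one gene crosses
            trial = np.where(cross, mutant, P[i])
            ft = cost(trial)
            if ft <= f[i]:                                  # greedy one-to-one selection
                P[i], f[i] = trial, ft
    return P[f.argmin()], f.min()

# Toy usage on a 3-variable quadratic standing in for a machine-design objective.
x, fx = differential_evolution(lambda x: (x**2).sum(), np.array([[-5.0, 5.0]] * 3))
print(x, fx)
```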
Heterogeneous data fusion for brain tumor classification.
Metsis, Vangelis; Huang, Heng; Andronesi, Ovidiu C; Makedon, Fillia; Tzika, Aria
2012-10-01
Current research in biomedical informatics involves analysis of multiple heterogeneous data sets. This includes patient demographics, clinical and pathology data, treatment history, patient outcomes as well as gene expression, DNA sequences and other information sources such as gene ontology. Analysis of these data sets could lead to better disease diagnosis, prognosis, treatment and drug discovery. In this report, we present a novel machine learning framework for brain tumor classification based on heterogeneous data fusion of metabolic and molecular datasets, including state-of-the-art high-resolution magic angle spinning (HRMAS) proton (1H) magnetic resonance spectroscopy and gene transcriptome profiling, obtained from intact brain tumor biopsies. Our experimental results show that our novel framework outperforms any analysis using individual dataset.
A New Mathematical Framework for Design Under Uncertainty
2016-05-05
blending multiple information sources via auto-regressive stochastic modeling. A computationally efficient machine learning framework is developed based on…sion and machine learning approaches; see Fig. 1. This will lead to a comprehensive description of system performance with less uncertainty than in the… Bayesian optimization of super-cavitating hydrofoils. The goal of this study is to demonstrate the capabilities of statistical learning and
NASA Astrophysics Data System (ADS)
Jafarzadegan, K.; Merwade, V.; Saksena, S.
2017-12-01
Using conventional hydrodynamic methods for floodplain mapping in large-scale and data-scarce regions is problematic due to the high cost of these methods, the lack of reliable data, and uncertainty propagation. In this study a new framework is proposed to generate 100-year floodplains for any gauged or ungauged watershed across the United States (U.S.). This framework uses Flood Insurance Rate Maps (FIRMs) and topographic, climatic and land use data, which are freely available for the entire U.S., for floodplain mapping. The framework consists of three components: a Random Forest classifier for watershed classification, a Probabilistic Threshold Binary Classifier (PTBC) for generating the floodplains, and a lookup table for linking the Random Forest classifier to the PTBC. The effectiveness and reliability of the proposed framework are tested on 145 watersheds from various geographical locations in the U.S. The validation results show that around 80 percent of the watersheds are predicted well, 14 percent have an acceptable fit, and less than five percent are predicted poorly compared to FIRMs. Another advantage of this framework is its ability to generate floodplains for all small rivers and tributaries. Due to its high accuracy and efficiency, this framework can be used as a preliminary decision-making tool to generate 100-year floodplain maps for data-scarce regions and for all tributaries where hydrodynamic methods are difficult to use.
Erraguntla, Madhav; Zapletal, Josef; Lawley, Mark
2017-12-01
The impact of infectious disease on human populations is a function of many factors including environmental conditions, vector dynamics, transmission mechanics, social and cultural behaviors, and public policy. A comprehensive framework for disease management must fully connect the complete disease lifecycle, including emergence from reservoir populations, zoonotic vector transmission, and impact on human societies. The Framework for Infectious Disease Analysis is a software environment and conceptual architecture for data integration, situational awareness, visualization, prediction, and intervention assessment. Framework for Infectious Disease Analysis automatically collects biosurveillance data using natural language processing, integrates structured and unstructured data from multiple sources, applies advanced machine learning, and uses multi-modeling for analyzing disease dynamics and testing interventions in complex, heterogeneous populations. In the illustrative case studies, natural language processing from social media, news feeds, and websites was used for information extraction, biosurveillance, and situation awareness. Classification machine learning algorithms (support vector machines, random forests, and boosting) were used for disease predictions.
Robust Fault Diagnosis in Electric Drives Using Machine Learning
2004-09-08
detection of fault conditions of the inverter. A machine learning framework is developed to systematically select torque-speed domain operation points...were used to generate various fault condition data for machine learning. The technique is viable for accurate, reliable and fast fault detection in electric drives.
Jiang, Xiaoqian; Aziz, Md Momin Al; Wang, Shuang; Mohammed, Noman
2018-01-01
Background: Machine learning is an effective data-driven tool that is being widely used to extract valuable patterns and insights from data. Specifically, predictive machine learning models are very important in health care for clinical data analysis. The machine learning algorithms that generate predictive models often require pooling data from different sources to discover statistical patterns or correlations among different attributes of the input data. The primary challenge is to fulfill one major objective: preserving the privacy of individuals while discovering knowledge from data. Objective: Our objective was to develop a hybrid cryptographic framework for performing regression analysis over distributed data in a secure and efficient way. Methods: Existing secure computation schemes are not suitable for processing the large-scale data that are used in cutting-edge machine learning applications. We designed, developed, and evaluated a hybrid cryptographic framework that can securely perform regression analysis, a fundamental machine learning algorithm, using somewhat homomorphic encryption and a newly introduced secure hardware component, Intel Software Guard Extensions (Intel SGX), to ensure both privacy and efficiency at the same time. Results: Experimental results demonstrate that our proposed method provides a better trade-off between security and efficiency than solely secure hardware-based methods. Moreover, there is no approximation error: the computed model parameters are identical to the plaintext results. Conclusions: To the best of our knowledge, no secure computation model using a hybrid cryptographic framework that leverages both somewhat homomorphic encryption and Intel SGX has been proposed or evaluated to date. Our proposed framework ensures data security and computational efficiency at the same time. PMID:29506966
Sadat, Md Nazmus; Jiang, Xiaoqian; Aziz, Md Momin Al; Wang, Shuang; Mohammed, Noman
2018-03-05
Machine learning is an effective data-driven tool that is being widely used to extract valuable patterns and insights from data. Specifically, predictive machine learning models are very important in health care for clinical data analysis. The machine learning algorithms that generate predictive models often require pooling data from different sources to discover statistical patterns or correlations among different attributes of the input data. The primary challenge is to fulfill one major objective: preserving the privacy of individuals while discovering knowledge from data. Our objective was to develop a hybrid cryptographic framework for performing regression analysis over distributed data in a secure and efficient way. Existing secure computation schemes are not suitable for processing the large-scale data that are used in cutting-edge machine learning applications. We designed, developed, and evaluated a hybrid cryptographic framework that can securely perform regression analysis, a fundamental machine learning algorithm, using somewhat homomorphic encryption and a newly introduced secure hardware component, Intel Software Guard Extensions (Intel SGX), to ensure both privacy and efficiency at the same time. Experimental results demonstrate that our proposed method provides a better trade-off between security and efficiency than solely secure hardware-based methods. Moreover, there is no approximation error: the computed model parameters are identical to the plaintext results. To the best of our knowledge, no secure computation model using a hybrid cryptographic framework that leverages both somewhat homomorphic encryption and Intel SGX has been proposed or evaluated to date. Our proposed framework ensures data security and computational efficiency at the same time. ©Md Nazmus Sadat, Xiaoqian Jiang, Md Momin Al Aziz, Shuang Wang, Noman Mohammed. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 05.03.2018.
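One way to see why such a framework can be free of approximation error is that ordinary least squares needs only the aggregated sufficient statistics X^T X and X^T y from each site. The plaintext NumPy sketch below illustrates that aggregation on synthetic data; the papers' actual contribution, exchanging these quantities under homomorphic encryption and Intel SGX, is omitted here.

```python
# Plaintext sketch of why distributed regression can be exact: each site
# contributes only its sufficient statistics X^T X and X^T y, which the
# cryptographic framework would exchange under encryption. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
true_beta = np.array([2.0, -1.0, 0.5])
sites = [rng.normal(size=(100, 3)) for _ in range(3)]
ys = [X @ true_beta + 0.01 * rng.normal(size=100) for X in sites]

# Aggregate sufficient statistics across sites.
XtX = sum(X.T @ X for X in sites)
Xty = sum(X.T @ y for X, y in zip(sites, ys))
beta_distributed = np.linalg.solve(XtX, Xty)

# Identical (up to floating point) to pooling the raw data.
X_all, y_all = np.vstack(sites), np.concatenate(ys)
beta_pooled, *_ = np.linalg.lstsq(X_all, y_all, rcond=None)
print(np.allclose(beta_distributed, beta_pooled))  # True
```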
Matsunaga, Yasuhiro
2018-01-01
Single-molecule experiments and molecular dynamics (MD) simulations are indispensable tools for investigating protein conformational dynamics. The former provide time-series data, such as donor-acceptor distances, whereas the latter give atomistic information, although this information is often biased by model parameters. Here, we devise a machine-learning method to combine the complementary information from the two approaches and construct a consistent model of conformational dynamics. It is applied to the folding dynamics of the formin-binding protein WW domain. MD simulations over 400 μs led to an initial Markov state model (MSM), which was then "refined" using single-molecule Förster resonance energy transfer (FRET) data through hidden Markov modeling. The refined or data-assimilated MSM reproduces the FRET data and features hairpin one in the transition-state ensemble, consistent with mutation experiments. The folding pathway in the data-assimilated MSM suggests interplay between hydrophobic contacts and turn formation. Our method provides a general framework for investigating conformational transitions in other proteins. PMID:29723137
Matsunaga, Yasuhiro; Sugita, Yuji
2018-05-03
Single-molecule experiments and molecular dynamics (MD) simulations are indispensable tools for investigating protein conformational dynamics. The former provide time-series data, such as donor-acceptor distances, whereas the latter give atomistic information, although this information is often biased by model parameters. Here, we devise a machine-learning method to combine the complementary information from the two approaches and construct a consistent model of conformational dynamics. It is applied to the folding dynamics of the formin-binding protein WW domain. MD simulations over 400 μs led to an initial Markov state model (MSM), which was then "refined" using single-molecule Förster resonance energy transfer (FRET) data through hidden Markov modeling. The refined or data-assimilated MSM reproduces the FRET data and features hairpin one in the transition-state ensemble, consistent with mutation experiments. The folding pathway in the data-assimilated MSM suggests interplay between hydrophobic contacts and turn formation. Our method provides a general framework for investigating conformational transitions in other proteins. © 2018, Matsunaga et al.
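The Markov state model step can be illustrated with a toy NumPy sketch that estimates a row-stochastic transition matrix from a discretized trajectory at a chosen lag time. The synthetic three-state trajectory stands in for the WW-domain simulation data, and the FRET-based hidden-Markov refinement is not reproduced.

```python
# Toy illustration of the MSM step: estimate a row-stochastic transition
# matrix from a discretized trajectory at a chosen lag time.
import numpy as np

def estimate_msm(dtraj, n_states, lag):
    counts = np.zeros((n_states, n_states))
    for t in range(len(dtraj) - lag):
        counts[dtraj[t], dtraj[t + lag]] += 1
    # Row-normalize transition counts into transition probabilities.
    return counts / counts.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
dtraj = rng.integers(0, 3, size=10_000)  # synthetic 3-state trajectory
T = estimate_msm(dtraj, n_states=3, lag=10)
print(T.round(3))
```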
MachineProse: an Ontological Framework for Scientific Assertions
Dinakarpandian, Deendayal; Lee, Yugyung; Vishwanath, Kartik; Lingambhotla, Rohini
2006-01-01
Objective: The idea of testing a hypothesis is central to the practice of biomedical research. However, the results of testing a hypothesis are published mainly in the form of prose articles. Encoding the results as scientific assertions that are both human and machine readable would greatly enhance the synergistic growth and dissemination of knowledge. Design: We have developed MachineProse (MP), an ontological framework for the concise specification of scientific assertions. MP is based on the idea of an assertion constituting a fundamental unit of knowledge. This is in contrast to current approaches that use discrete concept terms from domain ontologies for annotation, leaving assertions to be inferred only heuristically. Measurements: We use illustrative examples to highlight the advantages of MP over the use of the Medical Subject Headings (MeSH) system and keywords in indexing scientific articles. Results: We show how MP makes it possible to carry out semantic annotation of publications that is machine readable and allows for precise search capabilities. In addition, when used by itself, MP serves as a knowledge repository for emerging discoveries. A proof-of-concept prototype has been developed that demonstrates the feasibility and novel benefits of MP. As part of the MP framework, we have created an ontology of relationship types with about 100 terms optimized for the representation of scientific assertions. Conclusion: MachineProse is a novel semantic framework that we believe may be used to summarize research findings, annotate biomedical publications, and support sophisticated searches. PMID:16357355
Online learning control using adaptive critic designs with sparse kernel machines.
Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo
2013-05-01
In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent variants, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed, and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results on two nonlinear control problems, a continuous-action inverted pendulum and a ball-and-plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
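The approximately-linear-dependence (ALD) sparsification at the heart of this approach admits a compact sketch: a sample joins the kernel dictionary only if its squared distance to the span of the current dictionary exceeds a threshold. The RBF kernel width, the threshold nu, and the two-dimensional state samples below are illustrative assumptions.

```python
# Sketch of the ALD test used to sparsify a kernel dictionary.
import numpy as np

def rbf(a, b, sigma=1.0):
    return np.exp(-np.linalg.norm(a - b) ** 2 / (2 * sigma ** 2))

def build_dictionary(samples, nu=0.1):
    dictionary = [samples[0]]
    for x in samples[1:]:
        K = np.array([[rbf(d1, d2) for d2 in dictionary] for d1 in dictionary])
        k = np.array([rbf(d, x) for d in dictionary])
        a = np.linalg.solve(K + 1e-9 * np.eye(len(dictionary)), k)
        delta = rbf(x, x) - k @ a  # squared distance to span of dictionary
        if delta > nu:  # not approximately linearly dependent: keep the sample
            dictionary.append(x)
    return dictionary

rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(500, 2))  # e.g. pendulum angle/velocity
print("dictionary size:", len(build_dictionary(states)))
```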
Zhang, Jian-Hua; Xia, Jia-Jun; Garibaldi, Jonathan M; Groumpos, Petros P; Wang, Ru-Bin
2017-06-01
In human-machine (HM) hybrid control systems, the human operator and the machine cooperate to achieve the control objectives. To enhance overall HM system performance, the discrete manual control task-load of the operator must be dynamically allocated in accordance with the continuous-time fluctuation of the operator's psychophysiological functional status, the so-called operator functional state (OFS). The behavior of the HM system is hybrid in nature due to the co-existence of a discrete task-load (control) variable and a continuous operator performance (system output) variable. The Petri net is an effective tool for modeling discrete event systems, but for hybrid systems that also involve continuous dynamics, the Petri net model generally has to be extended. Instead of using different tools to represent the continuous and discrete components of a hybrid system, this paper proposes a fuzzy inference Petri net (FIPN) method to represent the HM hybrid system, comprising a Mamdani-type fuzzy model of the OFS and a logical switching controller, in a unified framework in which the task-load level is dynamically reallocated between the operator and the machine based on the model-predicted OFS. Furthermore, this paper uses a multi-model approach to predict operator performance based on three electroencephalographic (EEG) input variables (features) via the Wang-Mendel (WM) fuzzy modeling method. The membership function parameters of the fuzzy OFS model for each experimental participant were optimized using the artificial bee colony (ABC) evolutionary algorithm. Three performance indices, RMSE, MRE, and EPR, were computed to evaluate the overall modeling accuracy. Experimental data from six participants were analyzed. The results show that the proposed method (FIPN with adaptive task allocation) yields a lower breakdown rate (reduced from 14.8% to 3.27%) and higher human performance (increased from 90.30% to 91.99%). The simulation results of the FIPN-based adaptive HM (AHM) system on six experimental participants demonstrate that the FIPN framework provides an effective way to model and regulate/optimize the OFS in HM hybrid systems composed of a continuous-time OFS model and a discrete-event switching controller. Copyright © 2017 Elsevier B.V. All rights reserved.
Brain-machine interfacing control of whole-body humanoid motion
Bouyarmane, Karim; Vaillant, Joris; Sugimoto, Norikazu; Keith, François; Furukawa, Jun-ichiro; Morimoto, Jun
2014-01-01
We propose to tackle in this paper the problem of controlling whole-body humanoid robot behavior through non-invasive brain-machine interfacing (BMI), motivated by the prospect of mapping human motor control strategies to a human-like mechanical avatar. Our solution is based on an adequate reduction of the controllable dimensionality of high-DOF humanoid motion, in line with the state-of-the-art possibilities of non-invasive BMI technologies, leaving the complementary subspace of the motion to be planned and executed by an autonomous humanoid whole-body motion planning and control framework. The results are shown in a full physics-based simulation of a 36-degree-of-freedom humanoid motion controlled by a user through EEG-extracted brain signals generated with a motor imagery task. PMID:25140134
Applications of Machine Learning for Radiation Therapy.
Arimura, Hidetaka; Nakamoto, Takahiro
2016-01-01
Radiation therapy has advanced substantially in the form of image guided radiation therapy (IGRT) by taking advantage of image engineering technologies. Recently, novel frameworks based on image engineering as well as machine learning technologies have been studied to further refine radiation therapy. In this review paper, the authors introduce several applications of machine learning for radiation therapy. For example, a method to determine threshold values of the standardized uptake value (SUV) for estimation of the gross tumor volume (GTV) in positron emission tomography (PET) images, an approach to estimate multileaf collimator (MLC) position errors between treatment planning and radiation delivery, and prediction frameworks for esophageal stenosis and radiation pneumonitis risk after radiation therapy are described. Finally, the authors introduce seven issues that one should consider when applying machine learning models to radiation therapy.
An Evolutionary Machine Learning Framework for Big Data Sequence Mining
ERIC Educational Resources Information Center
Kamath, Uday Krishna
2014-01-01
Sequence classification is an important problem in many real-world applications. Unlike other machine learning data, there are no "explicit" features or signals in sequence data that can help traditional machine learning algorithms learn and predict from the data. Sequence data exhibits inter-relationships in the elements that are…
Fingelkurts, Andrew A; Fingelkurts, Alexander A; Neves, Carlos F H
2012-01-05
Instead of using low-level neurophysiology mimicking and exploratory programming methods commonly used in the machine consciousness field, the hierarchical operational architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual-theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made "machine" consciousness and "artificial" thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists to abstract and formalize principles of hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought. Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Benedetti, Marcello; Realpe-Gómez, John; Perdomo-Ortiz, Alejandro
2018-07-01
Machine learning has been presented as one of the key applications for near-term quantum technologies, given its high commercial value and wide range of applicability. In this work, we introduce the quantum-assisted Helmholtz machine: a hybrid quantum–classical framework with the potential of tackling high-dimensional real-world machine learning datasets on continuous variables. Instead of using quantum computers only to assist deep learning, as previous approaches have suggested, we use deep learning to extract a low-dimensional binary representation of data, suitable for processing on relatively small quantum computers. Then, the quantum hardware and deep learning architecture work together to train an unsupervised generative model. We demonstrate this concept using 1644 quantum bits of a D-Wave 2000Q quantum device to model a sub-sampled version of the MNIST handwritten digit dataset with 16 × 16 continuous valued pixels. Although we illustrate this concept on a quantum annealer, adaptations to other quantum platforms, such as ion-trap technologies or superconducting gate-model architectures, could be explored within this flexible framework.
A framework for porting the NeuroBayes machine learning algorithm to FPGAs
NASA Astrophysics Data System (ADS)
Baehr, S.; Sander, O.; Heck, M.; Feindt, M.; Becker, J.
2016-01-01
The NeuroBayes machine learning algorithm is deployed for online data reduction at the pixel detector of Belle II. In order to test, characterize, and easily adapt its implementation on FPGAs, a framework was developed. Within the framework, an HDL model, written in Python using MyHDL, is used for fast exploration of possible configurations. Using input data from physics simulations, figures of merit such as throughput, accuracy, and resource demand of the implementation are evaluated in a fast and flexible way. Functional validation is supported by unit tests and HDL simulation of chosen configurations.
Vulnerability detection using data-flow graphs and SMT solvers
2016-10-31
concerns. The framework is modular and pipelined to allow scalable analysis on distributed systems. Our vulnerability detection framework employs machine... Design: We designed the framework to be modular to enable flexible reuse and extendibility. In its current form, our framework performs the following
Ensemble positive unlabeled learning for disease gene identification.
Yang, Peng; Li, Xiaoli; Chua, Hon-Nian; Kwoh, Chee-Keong; Ng, See-Kiong
2014-01-01
An increasing number of genes have been experimentally confirmed in recent years as causative genes of various human diseases. This newly available knowledge can be exploited by machine learning methods to discover additional unknown genes that are likely to be associated with diseases. In particular, positive unlabeled learning (PU learning) methods, which require only a positive training set P (confirmed disease genes) and an unlabeled set U (unknown candidate genes) instead of a negative training set N, have been shown to be effective in uncovering new disease genes in this setting. However, using only a single source of data for prediction is susceptible to bias, due to incompleteness and noise in the genomic data, and a single machine learning predictor is prone to bias caused by the inherent limitations of individual methods. In this paper, we propose an effective PU learning framework that integrates multiple biological data sources and an ensemble of powerful machine learning classifiers for disease gene identification. Our proposed method integrates data from multiple biological sources for training PU learning classifiers. A novel ensemble-based PU learning method, EPU, is then used to integrate multiple PU learning classifiers to achieve accurate and robust disease gene predictions. Our evaluation experiments across six disease groups showed that EPU achieved significantly better results compared with various state-of-the-art prediction methods as well as ensemble learning classifiers. By integrating multiple biological data sources for training and the outputs of an ensemble of PU learning classifiers for prediction, we are able to minimize the potential bias and errors of individual data sources and machine learning algorithms, achieving more accurate and robust disease gene predictions. Our EPU method thus provides an effective framework for integrating additional biological and computational resources for better disease gene prediction.
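A generic PU-bagging sketch (not the authors' exact EPU algorithm) conveys the ensemble idea: repeatedly treat random subsamples of U as provisional negatives, train a base classifier against P, and average out-of-bag scores. The toy feature matrices below are synthetic stand-ins for the integrated biological data.

```python
# Generic PU-bagging sketch: average out-of-bag scores over an ensemble of
# classifiers trained on P versus random subsamples of U.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def pu_bagging_scores(X_pos, X_unlab, n_rounds=50, seed=0):
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(X_unlab))
    counts = np.zeros(len(X_unlab))
    for _ in range(n_rounds):
        idx = rng.choice(len(X_unlab), size=len(X_pos), replace=True)
        X = np.vstack([X_pos, X_unlab[idx]])
        y = np.r_[np.ones(len(X_pos)), np.zeros(len(X_pos))]
        clf = DecisionTreeClassifier(random_state=0).fit(X, y)
        oob = np.setdiff1d(np.arange(len(X_unlab)), idx)  # out-of-bag unlabeled
        scores[oob] += clf.predict_proba(X_unlab[oob])[:, 1]
        counts[oob] += 1
    return scores / np.maximum(counts, 1)

rng = np.random.default_rng(1)
X_pos = rng.normal(1.0, 1.0, size=(50, 5))     # confirmed disease genes (toy)
X_unlab = rng.normal(0.0, 1.0, size=(500, 5))  # unknown candidates (toy)
print(pu_bagging_scores(X_pos, X_unlab)[:5])
```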
Bidirectional RNN for Medical Event Detection in Electronic Health Records.
Jagannatha, Abhyuday N; Yu, Hong
2016-06-01
Sequence labeling for the extraction of medical events and their attributes from unstructured text in Electronic Health Record (EHR) notes is a key step towards semantic understanding of EHRs. It has important applications in health informatics, including pharmacovigilance and drug surveillance. The state-of-the-art supervised machine learning models in this domain are based on Conditional Random Fields (CRFs) with features calculated from fixed context windows. In this application, we explored recurrent neural network frameworks and show that they significantly outperformed the CRF models.
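A minimal bidirectional recurrent tagger of the kind compared against CRFs here can be sketched in PyTorch as follows; the vocabulary size, tag set, and layer widths are illustrative, and no EHR data are used.

```python
# Minimal bidirectional-LSTM sequence labeler; sizes are illustrative.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, n_tags, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)  # 2x for both directions

    def forward(self, tokens):           # tokens: (batch, seq_len) int64
        h, _ = self.lstm(self.emb(tokens))
        return self.out(h)               # (batch, seq_len, n_tags) logits

model = BiLSTMTagger(vocab_size=10_000, n_tags=9)
tokens = torch.randint(0, 10_000, (2, 30))
logits = model(tokens)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 9), torch.randint(0, 9, (60,)))
print(logits.shape, loss.item())
```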
1986-08-01
Craik and Lockhart describe the various levels of information processing involved in memory (8). The preliminary level ... A prevalent malfunction. Anesthesia and Analgesia, 1985, 64: 745-747. 8. Craik, F.I.M., Lockhart, R.S. Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 1972, 11: 671-684.
NASA Astrophysics Data System (ADS)
Nguyen, Minh Q.; Allebach, Jan P.
2015-01-01
In our previous work [1], we presented a block-based technique to analyze printed page uniformity both visually and metrically. The features learned from the models were then employed in a Support Vector Machine (SVM) framework to classify the pages into one of two categories: acceptable and unacceptable quality. In this paper, we introduce a set of tools for machine learning in the assessment of printed page uniformity. This work is primarily targeted at the printing industry, specifically the ubiquitous laser, electrophotographic printer. We use features that are well-correlated with the rankings of expert observers to develop a novel machine learning framework that allows one to achieve the minimum "false alarm" rate, subject to a chosen "miss" rate. Surprisingly, most of the research that has been conducted on machine learning does not consider this framework. During the process of developing a new product, test engineers will print hundreds of test pages, which can be scanned and then analyzed by an autonomous algorithm. Among these pages, most may be of acceptable quality. The objective is to find the ones that are not. These will provide critically important information to systems designers regarding issues that need to be addressed in improving the printer design. A "miss" is defined to be a page that is not of acceptable quality to an expert observer but that the prediction algorithm declares to be a "pass". Misses are a serious problem, since they represent problems that will not be seen by the systems designers. On the other hand, "false alarms" correspond to pages that an expert observer would declare to be of acceptable quality, but which are flagged by the prediction algorithm as "fails". In a typical printer testing and development scenario, such pages would be examined by an expert and found to be of acceptable quality after all. "False alarm" pages result in extra pages to be examined by expert observers, which increases labor cost. But "false alarms" are not nearly as catastrophic as "misses", which represent potentially serious problems that are never seen by the systems developers. This scenario motivates us to develop a machine learning framework that will achieve the minimum "false alarm" rate subject to a specified "miss" rate. In order to construct such a set of receiver operating characteristic (ROC) curves [2], we examine various tools for the prediction, ranging from an exhaustive search over the space of nonlinear discriminants to a Cost-Sensitive SVM framework [3]. We then compare the curves obtained from these methods. Our work shows promise for applying a standard framework to obtain a full ROC curve when tackling other machine learning problems in industry.
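The cost-sensitive SVM idea behind such ROC construction can be sketched by sweeping the class-weight ratio and keeping the operating point with the lowest false-alarm rate whose miss rate stays under a ceiling. The synthetic data, the weight grid, and the 5% ceiling below are assumptions for illustration; a real study would evaluate on held-out pages.

```python
# Sketch of the cost-sensitive SVM idea: sweep the class-weight ratio to
# trace ROC operating points and pick the minimum false-alarm point
# subject to a miss-rate ceiling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, weights=[0.8], random_state=0)
best = None
for w in [0.2, 0.5, 1, 2, 5, 10, 20]:
    clf = SVC(class_weight={1: w}).fit(X, y)   # class 1 = "fail" pages
    pred = clf.predict(X)
    miss = np.mean(pred[y == 1] == 0)          # fails declared "pass"
    false_alarm = np.mean(pred[y == 0] == 1)   # passes declared "fail"
    if miss <= 0.05 and (best is None or false_alarm < best[1]):
        best = (w, false_alarm)
print("chosen weight and false-alarm rate:", best)
```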
Component Framework for Loosely Coupled High Performance Integrated Plasma Simulations
NASA Astrophysics Data System (ADS)
Elwasif, W. R.; Bernholdt, D. E.; Shet, A. G.; Batchelor, D. B.; Foley, S.
2010-11-01
We present the design and implementation of a component-based simulation framework for the execution of coupled time-dependent plasma modeling codes. The Integrated Plasma Simulator (IPS) provides a flexible, lightweight component model that streamlines the integration of standalone codes into coupled simulations. Standalone codes are adapted to the IPS component interface specification using a thin wrapping layer implemented in the Python programming language. The framework provides services for inter-component method invocation; configuration, task, and data management; asynchronous event management; simulation monitoring; and checkpoint/restart capabilities. Services are invoked, as needed, by the computational components to coordinate the execution of different aspects of coupled simulations on Massively Parallel Processing (MPP) machines. A common plasma state layer serves as the foundation for inter-component, file-based data exchange. The IPS design principles, implementation details, and execution model will be presented, along with an overview of several use cases.
A New Object-Based Framework to Detect Shadows in High-Resolution Satellite Imagery Over Urban Areas
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.
2015-12-01
In this paper a new object-based framework to detect shadow areas in high resolution satellite images is proposed. To produce a shadow map at the pixel level, state-of-the-art supervised machine learning algorithms are employed. Automatic ground truth generation, based on Otsu thresholding of shadow and non-shadow indices, is used to train the classifiers. This is followed by segmenting the image scene to create image objects. To detect shadow objects, a majority vote over the pixel-based shadow detection results is applied. A GeoEye-1 multi-spectral image over an urban area in the city of Qom, Iran, is used in the experiments. The results show the superiority of the proposed method over traditional pixel-based approaches, both visually and quantitatively.
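The two-stage logic, pixel-level Otsu thresholding followed by an object-level majority vote, can be sketched as below. The random "shadow index" image and the block segmentation are simplified placeholders for the paper's spectral indices and segmentation algorithm.

```python
# Sketch: Otsu-thresholded shadow index at pixel level, then a per-segment
# majority vote. Index image and segment map are toy placeholders.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
shadow_index = rng.random((100, 100))          # stand-in for a real index image
segments = (np.arange(100 * 100) // 500).reshape(100, 100)  # toy segmentation

pixel_shadow = shadow_index < threshold_otsu(shadow_index)  # dark = shadow

object_shadow = np.zeros_like(pixel_shadow)
for seg_id in np.unique(segments):
    mask = segments == seg_id
    object_shadow[mask] = pixel_shadow[mask].mean() > 0.5   # majority vote
print("fraction of pixels labeled shadow after voting:", object_shadow.mean())
```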
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
This factsheet describes a project that developed and demonstrated a new manufacturing-informed design framework that utilizes advanced multi-scale, physics-based process modeling to dramatically improve manufacturing productivity and quality in machining operations while reducing the cost of machined components.
NASA Technical Reports Server (NTRS)
Burke, Gary R.; Taft, Stephanie
2004-01-01
State machines are commonly used to control sequential logic in FPGAs and ASICs. An errant state machine can cause considerable damage to the device it is controlling. For example, in space applications the FPGA might be controlling pyros, which, if fired at the wrong time, will cause a mission failure. Even a well designed state machine can be subject to random errors as a result of SEUs from the radiation environment in space. There are various ways to encode the states of a state machine, and the type of encoding makes a large difference in the susceptibility of the state machine to radiation. In this paper we compare four methods of state machine encoding and find which method gives the best fault tolerance, as well as determining the resources needed for each method.
Design and Implementation of Distributed Crawler System Based on Scrapy
NASA Astrophysics Data System (ADS)
Fan, Yuhao
2018-01-01
At present, some large-scale search engines at home and abroad only provide users with non-custom search services, and a single-machine web crawler cannot solve this difficult task. In this paper, through study of the original Scrapy framework, the framework is improved by combining Scrapy and Redis: a distributed crawler system based on the Scrapy framework is designed and implemented, and the Bloom filter algorithm is applied to the dupefilter module to reduce memory consumption. The movie information captured from Douban is stored in MongoDB, so that the data can be processed and analyzed. The results show that the distributed crawler system based on the Scrapy framework is more efficient and stable than a single-machine web crawler system.
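The dupefilter role of the Bloom filter can be illustrated with a simplified pure-Python version; a real Scrapy-Redis deployment would store the bit array in Redis and hook it into Scrapy's dupefilter interface, which this toy class does not do.

```python
# Simplified pure-Python Bloom filter illustrating the dupefilter idea.
import hashlib

class BloomFilter:
    def __init__(self, size=1 << 20, n_hashes=5):
        self.size, self.n_hashes = size, n_hashes
        self.bits = bytearray(size // 8)

    def _positions(self, url):
        # Derive n_hashes independent bit positions from salted SHA-256.
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{url}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, url):
        for p in self._positions(url):
            self.bits[p // 8] |= 1 << (p % 8)

    def seen(self, url):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(url))

bf = BloomFilter()
bf.add("https://movie.douban.com/subject/1292052/")
print(bf.seen("https://movie.douban.com/subject/1292052/"))  # True
print(bf.seen("https://movie.douban.com/subject/9999999/"))  # almost surely False
```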
NASA Astrophysics Data System (ADS)
Akhavan Niaki, Farbod
The objective of this research is, first, to investigate the applicability and advantages of statistical state estimation methods over deterministic methods for predicting tool wear in machining nickel-based superalloys, and, second, to study the effects of cutting tool wear on the quality of the part. Nickel-based superalloys are among those classes of materials known as hard-to-machine alloys. These materials exhibit a unique combination of retaining their strength at high temperature and high resistance to corrosion and creep. These characteristics make them ideal candidates for harsh environments such as the combustion chambers of gas turbines. However, the same characteristics that make nickel-based alloys suitable for aggressive conditions introduce difficulties when machining them. High strength and low thermal conductivity accelerate cutting tool wear and increase the possibility of in-process tool breakage. A blunt tool deteriorates the surface integrity and damages the quality of the machined part by inducing high tensile residual stresses, generating micro-cracks, altering the microstructure, or leaving a poor roughness profile behind. As a consequence, the expensive superalloy part would have to be scrapped. The current dominant solution in industry is to sacrifice productivity by replacing the tool in the early stages of its life, or to choose conservative cutting conditions in order to lower the wear rate and preserve workpiece quality. Thus, monitoring the state of the cutting tool and estimating its effects on part quality is a critical task for increasing productivity and profitability in machining superalloys. This work aims, first, to introduce a probabilistic framework for estimating tool wear in milling and turning of superalloys and, second, to study the detrimental effects of the functional state of the cutting tool, in terms of wear and wear rate, on part quality. In the milling operation, the mechanisms of tool failure were first identified and, based on the rapid catastrophic failure of the tool, a Bayesian inference method (Markov Chain Monte Carlo, MCMC) was used to calibrate the parameters of a mechanistic power-based tool wear model. The calibrated model was then used in the state-space probabilistic framework of a Kalman filter to estimate tool flank wear. Furthermore, an on-machine laser measuring system was utilized and fused into the Kalman filter to improve the estimation accuracy. In the turning operation, the behavior of progressive wear was investigated as well. Due to the nonlinear nature of wear in turning, an extended Kalman filter was designed for tracking progressive wear, and the results of the probabilistic method were compared with a deterministic technique, where a significant improvement (more than a 60% increase in estimation accuracy) was achieved. To fulfill the second objective of this research, understanding the underlying effects of wear on part quality in cutting nickel-based superalloys, a comprehensive study of surface roughness, dimensional integrity, and residual stress was conducted. The estimated results derived from the probabilistic filter were used to find the proper correlations between wear, surface roughness, and dimensional integrity, along with a finite element simulation for predicting the residual stress profile under sharp and worn cutting tool conditions.
The output of this research provides the essential information on condition monitoring of the tool and its effects on product quality. The low-cost Hall effect sensor used in this work to capture spindle power in the context of the stochastic filter can effectively estimate tool wear in both milling and turning operations, while the estimated wear can be used to generate knowledge of the state of workpiece surface integrity. Therefore the true functionality and efficiency of the tool in superalloy machining can be evaluated without additional high-cost sensing.
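The filtering idea can be boiled down to a scalar Kalman filter that predicts wear with a nominal wear-rate model and corrects it with noisy, power-derived wear measurements. All constants in the sketch below are hypothetical; the dissertation's mechanistic model and MCMC calibration are not reproduced.

```python
# Minimal scalar Kalman filter tracking flank wear with a hypothetical
# linear wear-rate model and noisy wear measurements. Constants are toy values.
import numpy as np

rng = np.random.default_rng(0)
wear_rate, q, r = 0.8e-3, 1e-8, 4e-4      # mm/step, process and meas. variances
true_wear = np.cumsum(np.full(200, wear_rate))
measured = true_wear + rng.normal(0, np.sqrt(r), size=200)

x, p = 0.0, 1.0                            # state estimate and its variance
estimates = []
for z in measured:
    x, p = x + wear_rate, p + q            # predict with the wear model
    k = p / (p + r)                        # Kalman gain
    x, p = x + k * (z - x), (1 - k) * p    # update with the measurement
    estimates.append(x)
print("final wear estimate vs truth:", estimates[-1], true_wear[-1])
```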
SVM2Motif—Reconstructing Overlapping DNA Sequence Motifs by Mimicking an SVM Predictor
Vidovic, Marina M. -C.; Görnitz, Nico; Müller, Klaus-Robert; Rätsch, Gunnar; Kloft, Marius
2015-01-01
Identifying discriminative motifs underlying the functionality and evolution of organisms is a major challenge in computational biology. Machine learning approaches such as support vector machines (SVMs) achieve state-of-the-art performances in genomic discrimination tasks, but—due to its black-box character—motifs underlying its decision function are largely unknown. As a remedy, positional oligomer importance matrices (POIMs) allow us to visualize the significance of position-specific subsequences. Although being a major step towards the explanation of trained SVM models, they suffer from the fact that their size grows exponentially in the length of the motif, which renders their manual inspection feasible only for comparably small motif sizes, typically k ≤ 5. In this work, we extend the work on positional oligomer importance matrices, by presenting a new machine-learning methodology, entitled motifPOIM, to extract the truly relevant motifs—regardless of their length and complexity—underlying the predictions of a trained SVM model. Our framework thereby considers the motifs as free parameters in a probabilistic model, a task which can be phrased as a non-convex optimization problem. The exponential dependence of the POIM size on the oligomer length poses a major numerical challenge, which we address by an efficient optimization framework that allows us to find possibly overlapping motifs consisting of up to hundreds of nucleotides. We demonstrate the efficacy of our approach on a synthetic data set as well as a real-world human splice site data set. PMID:26690911
Web Service Distributed Management Framework for Autonomic Server Virtualization
NASA Astrophysics Data System (ADS)
Solomon, Bogdan; Ionescu, Dan; Litoiu, Marin; Mihaescu, Mircea
Virtualization for the x86 platform has recently imposed itself as a new technology that can improve the usage of machines in data centers and decrease the cost and energy of running a high number of servers. Similar to virtualization, autonomic computing, and more specifically self-optimization, aims to improve server farm usage through provisioning and deprovisioning of instances as needed by the system. Autonomic systems are able to determine the optimal number of server machines - real or virtual - to use at a given time, and add or remove servers from a cluster in order to achieve optimal usage. While provisioning and deprovisioning of servers is very important, the way the autonomic system is built is also very important, as a robust and open framework is needed. One such management framework is the Web Service Distributed Management (WSDM) system, which is an open standard of the Organization for the Advancement of Structured Information Standards (OASIS). This paper presents an open framework built on top of the WSDM specification, which aims to provide self-optimization for application servers residing on virtual machines.
On the decomposition of synchronous state machines using sequence invariant state machines
NASA Technical Reports Server (NTRS)
Hebbalalu, K.; Whitaker, S.; Cameron, K.
1992-01-01
This paper presents a few techniques for the decomposition of synchronous state machines of medium to large size into smaller component machines. The methods are based on the nature of the transitions and sequences of states in the machine and on the number and variety of inputs to the machine. The results of the decomposition, and of using the Sequence Invariant State Machine (SISM) design technique to generate the component machines, include great ease and speed in the design and implementation processes. Furthermore, there is increased flexibility in making modifications to the original design, leading to negligible re-design time.
Diagnosis of delay-deadline failures in real time discrete event models.
Biswas, Santosh; Sarkar, Dipankar; Bhowal, Prodip; Mukhopadhyay, Siddhartha
2007-10-01
In this paper a method for fault detection and diagnosis (FDD) of real time systems has been developed. A modeling framework termed the real time discrete event system (RTDES) model is presented, and a mechanism for FDD of the same has been developed. The use of the RTDES framework for FDD is an extension of the works reported in the discrete event system (DES) literature, which are based on finite state machines (FSM). FDD of RTDES models is suited to real time systems because of its capability of representing timing faults leading to failures in terms of erroneous delays and deadlines, which FSM-based approaches cannot address. The concept of measurement restriction of variables is introduced for RTDES, and the consequent equivalence of states and indistinguishability of transitions have been characterized. Faults are modeled in terms of an unmeasurable condition variable in the state map. Diagnosability is defined and a procedure for constructing a diagnoser is provided. A checkable property of the diagnoser is shown to be a necessary and sufficient condition for diagnosability. The methodology is illustrated with an example of a hydraulic cylinder.
A general-purpose machine learning framework for predicting properties of inorganic materials
Ward, Logan; Agrawal, Ankit; Choudhary, Alok; ...
2016-08-26
A very active area of materials research is to devise methods that use machine learning to automatically extract predictive models from existing materials data. While prior examples have demonstrated successful models for some applications, many more applications exist where machine learning can make a strong impact. To enable faster development of machine-learning-based models for such applications, we have created a framework capable of being applied to a broad range of materials data. Our method works by using a chemically diverse list of attributes, which we demonstrate are suitable for describing a wide variety of properties, and a novel method for partitioning the data set into groups of similar materials to boost the predictive accuracy. In this manuscript, we demonstrate how this new method can be used to predict diverse properties of crystalline and amorphous materials, such as band gap energy and glass-forming ability.
A general-purpose machine learning framework for predicting properties of inorganic materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Logan; Agrawal, Ankit; Choudhary, Alok
A very active area of materials research is to devise methods that use machine learning to automatically extract predictive models from existing materials data. While prior examples have demonstrated successful models for some applications, many more applications exist where machine learning can make a strong impact. To enable faster development of machine-learning-based models for such applications, we have created a framework capable of being applied to a broad range of materials data. Our method works by using a chemically diverse list of attributes, which we demonstrate are suitable for describing a wide variety of properties, and a novel method for partitioning the data set into groups of similar materials to boost the predictive accuracy. In this manuscript, we demonstrate how this new method can be used to predict diverse properties of crystalline and amorphous materials, such as band gap energy and glass-forming ability.
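The general pipeline, composition-derived attributes feeding an ensemble regressor, can be sketched with scikit-learn. The three attributes and the synthetic "band gap" target below are hypothetical placeholders for the chemically diverse attribute set the paper describes.

```python
# Generic sketch of the attribute-based pipeline: composition-derived
# attributes feed an ensemble regressor for a property such as band gap.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# e.g. mean electronegativity, mean atomic radius, valence-electron count
X = rng.random((300, 3))
band_gap = 3.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0, 0.1, size=300)

model = RandomForestRegressor(n_estimators=300, random_state=0)
print("CV R^2:", cross_val_score(model, X, band_gap, cv=5).mean())
```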
Machine learning of network metrics in ATLAS Distributed Data Management
NASA Astrophysics Data System (ADS)
Lassnig, Mario; Toler, Wesley; Vamosi, Ralf; Bogado, Joaquin; ATLAS Collaboration
2017-10-01
The increasing volume of physics data poses a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously, many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from one of our ongoing automation efforts that focuses on network metrics. First, we describe our machine learning framework built atop the ATLAS Analytics Platform. This framework can automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ikushima, K; Arimura, H; Jin, Z
Purpose: In radiation treatment planning, delineation of the gross tumor volume (GTV) is very important, because the GTVs affect the accuracy of the radiation therapy procedure. To assist radiation oncologists in the delineation of GTV regions during treatment planning for lung cancer, we have proposed a machine-learning-based delineation framework for GTV regions of solid and ground glass opacity (GGO) lung tumors, followed by an optimum contour selection (OCS) method. Methods: Our basic idea was to feed voxel-based image features around GTV contours determined by radiation oncologists into a machine learning classifier in the training step, after which the classifier produced the degree of GTV for each voxel in the testing step. Ten data sets of planning CT and PET/CT images were selected for this study. A support vector machine (SVM), which learned voxel-based features, including the voxel value and the magnitude of the image gradient vector obtained from each voxel in the planning CT and PET/CT images, extracted initial GTV regions. The final GTV regions were determined using the OCS method, which is able to select a global optimum object contour based on multiple active delineations with a level set method around the GTV. To evaluate the results of the proposed framework for ten cases (solid: 6, GGO: 4), we used the three-dimensional Dice similarity coefficient (DSC), which denotes the degree of region similarity between the GTVs delineated by the radiation oncologists and by the proposed framework. Results: The proposed method achieved an average three-dimensional DSC of 0.81 for ten lung cancer patients, while a standardized-uptake-value-based method segmented GTV regions with a DSC of 0.43. The average DSCs for solid and GGO tumors obtained by the proposed framework were 0.84 and 0.76, respectively. Conclusion: The proposed framework with the support vector machine may be useful for assisting radiation oncologists in delineating solid and GGO lung tumors.
Fault-Tolerant Coding for State Machines
NASA Technical Reports Server (NTRS)
Naegle, Stephanie Taft; Burke, Gary; Newell, Michael
2008-01-01
Two reliable fault-tolerant coding schemes have been proposed for state machines that are used in field-programmable gate arrays and application-specific integrated circuits to implement sequential logic functions. The schemes apply to strings of bits in state registers, which are typically implemented in practice as assemblies of flip-flop circuits. If a single-event upset (SEU, a radiation-induced change in the bit in one flip-flop) occurs in a state register, the state machine that contains the register could go into an erroneous state or could hang, by which is meant that the machine could remain in undefined states indefinitely. The proposed fault-tolerant coding schemes are intended to prevent the state machine from going into an erroneous or hang state when an SEU occurs. To ensure reliability of the state machine, the coding scheme for bits in the state register must satisfy the following criteria: 1. All possible states are defined. 2. An SEU brings the state machine to a known state. 3. There is no possibility of a hang state. 4. No false state is entered. 5. An SEU exerts no effect on the state machine. Fault-tolerant coding schemes that have been commonly used include binary encoding and "one-hot" encoding. Binary encoding is the simplest state machine encoding and satisfies criteria 1 through 3 if all possible states are defined. Binary encoding is a binary count of the state machine number in sequence; for example, an eight-state machine can be encoded with a three-bit count. In one-hot encoding, N bits are used to represent N states: all except one of the bits in a string are 0, and the position of the 1 in the string represents the state. With proper circuit design, one-hot encoding can satisfy criteria 1 through 4. Unfortunately, the requirement to use N bits to represent N states makes one-hot coding inefficient.
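The difference between the two encodings under an SEU can be demonstrated exhaustively for a small machine: flip each bit of every legal state word and count how often the result is another legal state (an undetectable error). For the eight-state example, every binary-encoded flip lands on a legal state, while every one-hot flip produces an illegal word that detection logic can catch.

```python
# Compare binary and one-hot encodings under single-bit upsets: a flip that
# lands on another legal state is an undetectable error.
N = 8  # eight-state example

binary_states = {format(s, "03b") for s in range(N)}           # 3 bits, all used
one_hot_states = {"".join("1" if i == s else "0" for i in range(N))
                  for s in range(N)}                           # 8 bits, sparse

def undetectable_upsets(states):
    count = 0
    for word in states:
        for i in range(len(word)):
            flipped = word[:i] + ("0" if word[i] == "1" else "1") + word[i + 1:]
            if flipped in states:  # SEU lands on another legal state
                count += 1
    return count

print("binary :", undetectable_upsets(binary_states), "undetectable flips")   # 24
print("one-hot:", undetectable_upsets(one_hot_states), "undetectable flips")  # 0
```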
Dominguez Veiga, Jose Juan; O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E
2017-08-04
Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the isolation of a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problem do not possess the necessary technical background for this feature-set development. The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning, that is, by using a pretrained convolutional neural network (CNN) developed for machine vision purposes for an exercise classification task. The new method should simply require researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. We applied a CNN, an established machine vision technique, to the task of ED. TensorFlow, a high-level framework for machine learning, was used to facilitate infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals are used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), were collected. The ability of the proposed method to automatically classify the exercise being completed was assessed using this dataset. For comparative purposes, classification using the same dataset was also performed using the more conventional approach of feature extraction and classification using random forest classifiers. With the collected dataset and the proposed method, the different exercises could be recognized with a 95.89% (3827/3991) accuracy, which is competitive with current state-of-the-art techniques in ED. The high level of accuracy attained with the proposed approach indicates that the waveform morphologies in the time-series plots for each of the exercises are sufficiently distinct among the participants to allow the use of machine vision approaches. The use of high-level machine learning frameworks, coupled with the novel use of machine vision techniques instead of complex manually crafted features, may facilitate access to research in the HAR field for individuals without extensive digital signal processing or machine learning backgrounds. ©Jose Juan Dominguez Veiga, Martin O'Reilly, Darragh Whelan, Brian Caulfield, Tomas E Ward. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 04.08.2017.
O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E
2017-01-01
Background: Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the isolation of a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problem do not possess the necessary technical background for this feature-set development. Objective: The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning, that is, by using a pretrained convolutional neural network (CNN) developed for machine vision purposes for an exercise classification task. The new method should simply require researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. Methods: We applied a CNN, an established machine vision technique, to the task of ED. TensorFlow, a high-level framework for machine learning, was used to facilitate infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals are used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), were collected. The ability of the proposed method to automatically classify the exercise being completed was assessed using this dataset. For comparative purposes, classification using the same dataset was also performed using the more conventional approach of feature extraction and classification using random forest classifiers. Results: With the collected dataset and the proposed method, the different exercises could be recognized with a 95.89% (3827/3991) accuracy, which is competitive with current state-of-the-art techniques in ED. Conclusions: The high level of accuracy attained with the proposed approach indicates that the waveform morphologies in the time-series plots for each of the exercises are sufficiently distinct among the participants to allow the use of machine vision approaches. The use of high-level machine learning frameworks, coupled with the novel use of machine vision techniques instead of complex manually crafted features, may facilitate access to research in the HAR field for individuals without extensive digital signal processing or machine learning backgrounds. PMID:28778851
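The transfer-learning recipe can be sketched with Keras: load a pretrained backbone, freeze it, and train a small classification head on the signal-plot images. The papers retrain Inception; the sketch below uses InceptionV3 from tf.keras.applications as a stand-in, and the five-class head matches the five exercises.

```python
# Sketch of the transfer-learning recipe: a frozen pretrained CNN backbone
# plus a small trainable head for signal-plot images. Sizes are illustrative.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         input_shape=(299, 299, 3))
base.trainable = False  # keep the machine-vision features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 exercise classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(plot_image_dataset, ...)  # images of accelerometer/gyroscope plots
model.summary()
```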
Kawata, Yasuo; Arimura, Hidetaka; Ikushima, Koujirou; Jin, Ze; Morita, Kento; Tokunaga, Chiaki; Yabu-Uchi, Hidetake; Shioyama, Yoshiyuki; Sasaki, Tomonari; Honda, Hiroshi; Sasaki, Masayuki
2017-10-01
The aim of this study was to investigate the impact of pixel-based machine learning (ML) techniques, namely the fuzzy c-means clustering method (FCM), the artificial neural network (ANN), and the support vector machine (SVM), on an automated framework for delineation of gross tumor volume (GTV) regions of lung cancer for stereotactic body radiation therapy. The morphological and metabolic features for GTV regions, which were determined based on the knowledge of radiation oncologists, were fed on a pixel-by-pixel basis into the respective FCM, ANN, and SVM ML techniques. Then, the ML techniques were incorporated into the automated delineation framework of GTVs, followed by an optimum contour selection (OCS) method, which we proposed in a previous study. The three ML-based frameworks were evaluated for 16 lung cancer cases (six solid, four ground glass opacity (GGO), six part-solid GGO) with datasets of planning computed tomography (CT) and 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET)/CT images, using the three-dimensional Dice similarity coefficient (DSC). The DSC denotes the degree of region similarity between the GTVs contoured by radiation oncologists and those estimated using the automated framework. The FCM-based framework achieved the highest DSC of 0.79±0.06, whereas the DSCs of the ANN-based and SVM-based frameworks were 0.76±0.14 and 0.73±0.14, respectively. The FCM-based framework provided the highest segmentation accuracy and precision without a learning process (i.e., with the lowest calculation cost). Therefore, the FCM-based framework can be useful for the delineation of tumor regions in practical treatment planning. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
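Fuzzy c-means itself is compact enough to sketch in NumPy; the two-dimensional synthetic features below stand in for the study's CT/PET voxel features, and the membership updates are the standard FCM iterations.

```python
# Compact fuzzy c-means (FCM) sketch in NumPy on synthetic features.
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))         # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
U, centers = fcm(X)
print("cluster centers:\n", centers.round(2))
```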
Integrating Machine Learning into Space Operations
NASA Astrophysics Data System (ADS)
Kelly, K. G.
There are significant challenges in managing activities in space, which for the scope of this paper are primarily the identification of objects in orbit, maintaining accurate estimates of the orbits of those objects, detecting changes to those orbits, warning of possible collisions between objects, and detecting anomalous behavior. The challenges come from the large amounts of data to be processed, which are often incomplete and noisy, limitations on the ability to influence objects in space, and the overall strategic importance of space to national interests. The focus of this paper is on defining an approach to leverage the improved capabilities that are possible using state-of-the-art machine learning in a way that empowers operations personnel without sacrificing the security and mission assurance associated with manual operations performed by trained personnel. There has been significant research on the development of algorithms and techniques for applying machine learning in this domain, but deploying new techniques into such a mission critical domain is difficult and time consuming. Establishing a common framework could improve the efficiency with which new techniques are integrated into operations and the overall effectiveness in providing improvements.
Approach to design neural cryptography: a generalized architecture and a heuristic rule.
Mu, Nankun; Liao, Xiaofeng; Huang, Tingwen
2013-06-01
Neural cryptography, a type of public key exchange protocol, is widely considered an effective method for sharing a common secret key between two neural networks over public channels. How to design neural cryptography remains a great challenge. In this paper, to provide an approach to this challenge, a generalized network architecture and a significant heuristic rule are designed. The proposed generic framework, named the tree state classification machine (TSCM), extends and unifies the existing structures, i.e., the tree parity machine (TPM) and the tree committee machine (TCM). Furthermore, we carefully study the heuristic rule and find that it can improve the security of TSCM-based neural cryptography. TSCM and the heuristic rule can therefore guide the design of a great number of effective neural cryptography candidates, among which more secure instances can be achieved. Significantly, in light of TSCM and the heuristic rule, we further show that our designed neural cryptography outperforms the TPM (the most secure model at present) in security. Finally, a series of numerical simulation experiments verifies the validity and applicability of our results.
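For context, the following is a hedged sketch of the synchronization procedure for two tree parity machines, the baseline structure that TSCM generalizes: both parties apply a Hebbian update only on rounds where their public outputs agree, until the weight matrices coincide. The parameter values (K hidden units, N inputs each, weight bound L) are illustrative.

```python
# Sketch: mutual learning of two tree parity machines over a public channel.
import numpy as np

K, N, L = 3, 10, 3
rng = np.random.default_rng(0)

def output(w, x):
    sigma = np.sign(np.sum(w * x, axis=1))  # per-hidden-unit outputs
    sigma[sigma == 0] = -1                  # break ties
    return sigma, np.prod(sigma)            # machine output = product

def hebbian(w, x, sigma, tau):
    # Update only the hidden units that agree with the machine's output.
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + x[k] * tau, -L, L)

wA = rng.integers(-L, L + 1, (K, N))
wB = rng.integers(-L, L + 1, (K, N))
rounds = 0
while not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], (K, N))         # public random input
    sA, tA = output(wA, x)
    sB, tB = output(wB, x)
    if tA == tB:                            # learn only on agreement
        hebbian(wA, x, sA, tA)
        hebbian(wB, x, sB, tB)
    rounds += 1
print("synchronized after", rounds, "rounds")  # shared weights = secret key
```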
Stochastic thermodynamics, fluctuation theorems and molecular machines.
Seifert, Udo
2012-12-01
Stochastic thermodynamics as reviewed here systematically provides a framework for extending the notions of classical thermodynamics such as work, heat and entropy production to the level of individual trajectories of well-defined non-equilibrium ensembles. It applies whenever a non-equilibrium process is still coupled to one (or several) heat bath(s) of constant temperature. Paradigmatic systems are single colloidal particles in time-dependent laser traps, polymers in external flow, enzymes and molecular motors in single molecule assays, small biochemical networks and thermoelectric devices involving single electron transport. For such systems, a first-law like energy balance can be identified along fluctuating trajectories. For a basic Markovian dynamics implemented either on the continuum level with Langevin equations or on a discrete set of states as a master equation, thermodynamic consistency imposes a local-detailed balance constraint on noise and rates, respectively. Various integral and detailed fluctuation theorems, which are derived here in a unifying approach from one master theorem, constrain the probability distributions for work, heat and entropy production depending on the nature of the system and the choice of non-equilibrium conditions. For non-equilibrium steady states, particularly strong results hold like a generalized fluctuation-dissipation theorem involving entropy production. Ramifications and applications of these concepts include optimal driving between specified states in finite time, the role of measurement-based feedback processes and the relation between dissipation and irreversibility. Efficiency and, in particular, efficiency at maximum power can be discussed systematically beyond the linear response regime for two classes of molecular machines, isothermal ones such as molecular motors, and heat engines such as thermoelectric devices, using a common framework based on a cycle decomposition of entropy production.
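As a self-contained numerical illustration of trajectory-level work in this framework (constructed for this write-up, not taken from the review), the sketch below drags an overdamped Brownian particle in a harmonic trap at constant speed and accumulates the work W = ∫ (∂U/∂t) dt along each trajectory. Since moving the trap center leaves the free energy unchanged, the Jarzynski relation predicts ⟨exp(−W/kBT)⟩ = 1 even though ⟨W⟩ > 0. All parameter values are illustrative.

```python
# Sketch: stochastic work along Langevin trajectories and a Jarzynski check.
import numpy as np

rng = np.random.default_rng(1)
k, v, kBT, gamma = 1.0, 1.0, 1.0, 1.0   # trap stiffness, drag speed, temperature, friction
dt, T, ntraj = 1e-3, 2.0, 20000
nsteps = int(T / dt)

x = np.zeros(ntraj)   # one particle per trajectory, starting at the trap center
W = np.zeros(ntraj)
for i in range(nsteps):
    t = i * dt
    # dW = (dU/dt) dt with U = (k/2)(x - v t)^2
    W += -k * v * (x - v * t) * dt
    noise = rng.normal(0.0, np.sqrt(2 * kBT * dt / gamma), ntraj)
    x += -(k / gamma) * (x - v * t) * dt + noise

print("<W> =", W.mean())                           # positive: dissipated work
print("<exp(-W/kBT)> =", np.exp(-W / kBT).mean())  # ~1, as Jarzynski predicts
```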
NASA Astrophysics Data System (ADS)
Nesvold, E. R.; Erasmus, N.; Greenberg, A.; van Heerden, E.; Galache, J. L.; Dahlstrom, E.; Marchis, F.
2017-02-01
We present a machine learning model that can predict which asteroid deflection technology would be most effective, given the likely population of impactors. Our model can help policy and funding agencies prioritize technology development.
Symbolic Computation Using Cellular Automata-Based Hyperdimensional Computing.
Yilmaz, Ozgur
2015-12-01
This letter introduces a novel framework of reservoir computing that is capable of both connectionist machine intelligence and symbolic computation. A cellular automaton is used as the reservoir of dynamical systems. Input is randomly projected onto the initial conditions of automaton cells, and nonlinear computation is performed on the input via application of a rule in the automaton for a period of time. The evolution of the automaton creates a space-time volume of the automaton state space, and it is used as the reservoir. The proposed framework is shown to be capable of long-term memory, and it requires orders of magnitude less computation compared to echo state networks. As the focus of the letter, we suggest that binary reservoir feature vectors can be combined using Boolean operations as in hyperdimensional computing, paving a direct way for concept building and symbolic processing. To demonstrate the capability of the proposed system, we make analogies directly on image data by asking, What is the automobile of air?
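A minimal sketch of the idea follows: a binary input is randomly projected onto the cells of an elementary cellular automaton, the automaton is evolved for several steps, the resulting space-time volume is flattened into a binary feature vector, and two such vectors are bound with XOR in the style of hyperdimensional computing. The rule number, sizes, and projection are illustrative assumptions rather than the letter's exact configuration.

```python
# Sketch: cellular-automaton reservoir with Boolean (XOR) binding.
import numpy as np

rng = np.random.default_rng(2)
N_CELLS, N_STEPS, RULE = 256, 32, 90

def ca_step(state, rule):
    left, right = np.roll(state, 1), np.roll(state, -1)
    idx = 4 * left + 2 * state + right   # neighborhood code 0..7 (Wolfram order)
    table = (rule >> np.arange(8)) & 1   # the rule's lookup table
    return table[idx]

def reservoir(x_bits, proj):
    state = np.zeros(N_CELLS, dtype=int)
    state[proj] = x_bits                 # random projection of the input
    volume = [state]
    for _ in range(N_STEPS):
        state = ca_step(state, RULE)
        volume.append(state)
    return np.concatenate(volume)        # flattened space-time feature vector

proj = rng.choice(N_CELLS, size=16, replace=False)
a = reservoir(rng.integers(0, 2, 16), proj)
b = reservoir(rng.integers(0, 2, 16), proj)
bound = np.bitwise_xor(a, b)             # Boolean binding of two concepts
print(bound[:16])
```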
NASA Astrophysics Data System (ADS)
Galitsky, Boris; Kovalerchuk, Boris
2006-04-01
We develop a software system, Text Scanner for Emotional Distress (TSED), to help detect email messages suspected of coming from people under strong emotional distress. Multiple studies have confirmed that terrorist attackers have experienced substantial emotional distress at some point before committing an attack. Therefore, if an individual in emotional distress can be detected on the basis of email texts, preventive measures can be taken. The proposed detection machinery is based on the extraction and classification of emotional profiles from emails. An emotional profile is a formal representation of a sequence of emotional states through a textual discourse, where communicative actions are attached to these emotional states. The issues of extracting emotional profiles from text and reasoning about them are discussed and illustrated. We then develop an inductive machine learning and reasoning framework to relate an emotional profile to the class "emotional distress" or "no emotional distress", given a training dataset where the class is assigned by an expert. TSED's machine learning is evaluated using a database of structured customer complaints.
Two-qubit quantum cloning machine and quantum correlation broadcasting
NASA Astrophysics Data System (ADS)
Kheirollahi, Azam; Mohammadi, Hamidreza; Akhtarshenas, Seyed Javad
2016-11-01
Due to the axioms of quantum mechanics, perfect cloning of an unknown quantum state is impossible. But since imperfect cloning is still possible, a question arises: "Is there an optimal quantum cloning machine?" Buzek and Hillery answered this question and constructed their famous B-H quantum cloning machine. The B-H machine clones the state of an arbitrary single qubit in an optimal manner and hence it is universal. Generalizing this machine for a two-qubit system is straightforward, but during this procedure, except for product states, this machine loses its universality and becomes a state-dependent cloning machine. In this paper, we propose some classes of optimal universal local quantum state cloners for a particular class of two-qubit systems, more precisely, for a class of states with known Schmidt basis. We then extend our machine to the case that the Schmidt basis of the input state is deviated from the local computational basis of the machine. We show that more local quantum coherence existing in the input state corresponds to less fidelity between the input and output states. Also we present two classes of a state-dependent local quantum copying machine. Furthermore, we investigate local broadcasting of two aspects of quantum correlations, i.e., quantum entanglement and quantum discord, defined, respectively, within the entanglement-separability paradigm and from an information-theoretic perspective. The results show that although quantum correlation is, in general, very fragile during the broadcasting procedure, quantum discord is broadcasted more robustly than quantum entanglement.
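For reference, the single-qubit transformation at the heart of the B-H machine can be written as follows; this is a textbook result quoted here for context, not a statement taken from the abstract:

\[
|0\rangle|\Sigma\rangle \to \sqrt{\tfrac{2}{3}}\,|00\rangle|{\uparrow}\rangle + \sqrt{\tfrac{1}{3}}\,|\psi^{+}\rangle|{\downarrow}\rangle,
\qquad
|1\rangle|\Sigma\rangle \to \sqrt{\tfrac{2}{3}}\,|11\rangle|{\downarrow}\rangle + \sqrt{\tfrac{1}{3}}\,|\psi^{+}\rangle|{\uparrow}\rangle,
\]

where \(|\psi^{+}\rangle = (|01\rangle + |10\rangle)/\sqrt{2}\) and \(|\Sigma\rangle\) denotes the initial state of the blank copy and the machine. Both output qubits then have fidelity \(F = 5/6\) with the input, independent of the input state, which is what makes the machine universal.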
Modeling formalisms in Systems Biology
2011-01-01
Systems Biology has taken advantage of computational tools and high-throughput experimental data to model several biological processes. These include signaling, gene regulatory, and metabolic networks. However, most of these models are specific to each kind of network. Their interconnection demands a whole-cell modeling framework for a complete understanding of cellular systems. We describe the features required by an integrated framework for modeling, analyzing and simulating biological processes, and review several modeling formalisms that have been used in Systems Biology including Boolean networks, Bayesian networks, Petri nets, process algebras, constraint-based models, differential equations, rule-based models, interacting state machines, cellular automata, and agent-based models. We compare the features provided by different formalisms, and discuss recent approaches in the integration of these formalisms, as well as possible directions for the future. PMID:22141422
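To make one of the listed formalisms concrete, the toy sketch below iterates a synchronous Boolean network from every initial condition until it reaches an attractor (fixed point or cycle). The three-gene wiring is invented purely for illustration, not a published model.

```python
# Sketch: attractor search in a tiny synchronous Boolean network.
from itertools import product

def step(state):
    a, b, c = state
    return (b and not c,   # A is activated by B and repressed by C
            a,             # B copies A
            a or b)        # C is activated by A or B

for init in product([False, True], repeat=3):
    seen, s = [], init
    while s not in seen:   # iterate until a state repeats
        seen.append(s)
        s = step(s)
    cycle = seen[seen.index(s):]   # the repeating segment is the attractor
    print(init, "-> attractor of length", len(cycle))
```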
Software development for teleroentgenogram analysis
NASA Astrophysics Data System (ADS)
Goshkoderov, A. A.; Khlebnikov, N. A.; Obabkov, I. N.; Serkov, K. V.; Gajniyarov, I. M.; Aliev, A. A.
2017-09-01
A framework for the analysis and calculation of teleroentgenograms was developed. Software development was carried out in the Department of Children's Dentistry and Orthodontics at Ural State Medical University. The software calculates teleroentgenograms by an original method developed in this department, and it allows users to design their own calculation methods. It is planned to incorporate machine learning technology (neural networks) into the software; this will simplify the calculation of teleroentgenograms because methodological points will be placed automatically.
Closing the Gap Between Specification and Programming: VDM++ and SCALA
NASA Technical Reports Server (NTRS)
Havelund, Klaus
2011-01-01
We argue that a modern programming language such as Scala offers a level of succinctness, which makes it suitable for program and systems specification as well as for high-level programming. We illustrate this by comparing the language with the Vdm++ specification language. The comparison also identifies areas where Scala perhaps could be improved, inspired by Vdm++. We furthermore illustrate Scala's potential as a specification language by augmenting it with a combination of parameterized state machines and temporal logic, defined as a library, thereby forming an expressive but simple runtime verification framework.
Kagawa, Rina; Kawazoe, Yoshimasa; Ida, Yusuke; Shinohara, Emiko; Tanaka, Katsuya; Imai, Takeshi; Ohe, Kazuhiko
2017-07-01
Phenotyping is an automated technique that can be used to distinguish patients based on electronic health records. To improve the quality of medical care and advance type 2 diabetes mellitus (T2DM) research, the demand for T2DM phenotyping has been increasing. Some existing phenotyping algorithms are not sufficiently accurate for screening or identifying clinical research subjects. We propose a practical phenotyping framework using both expert knowledge and a machine learning approach to develop 2 phenotyping algorithms: one is for screening; the other is for identifying research subjects. We employ expert knowledge as rules to exclude obvious control patients and machine learning to increase accuracy for complicated patients. We developed phenotyping algorithms on the basis of our framework and performed binary classification to determine whether a patient has T2DM. To facilitate development of practical phenotyping algorithms, this study introduces new evaluation metrics: area under the precision-sensitivity curve (AUPS) with a high sensitivity and AUPS with a high positive predictive value. The proposed phenotyping algorithms based on our framework show higher performance than baseline algorithms. Our proposed framework can be used to develop 2 types of phenotyping algorithms depending on the tuning approach: one for screening, the other for identifying research subjects. We develop a novel phenotyping framework that can be easily implemented on the basis of proper evaluation metrics, which are in accordance with users' objectives. The phenotyping algorithms based on our framework are useful for extraction of T2DM patients in retrospective studies.
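A hedged sketch of the evaluation idea follows: the area under the precision-sensitivity (recall) curve is computed only over a high-sensitivity region, matching the screening use case. The 0.9 recall cutoff and the toy labels are illustrative assumptions, not the authors' exact settings.

```python
# Sketch: area under the precision-sensitivity curve restricted to high recall.
import numpy as np
from sklearn.metrics import precision_recall_curve

def aups_high_sensitivity(y_true, y_score, min_recall=0.9):
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    mask = recall >= min_recall
    # integrate precision over the retained recall range (trapezoid rule)
    order = np.argsort(recall[mask])
    return np.trapz(precision[mask][order], recall[mask][order])

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.5])
print(aups_high_sensitivity(y_true, y_score))
```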
ERIC Educational Resources Information Center
Society of the Plastics Industry, Inc., Washington, DC.
Designed to guide training and curriculum development to prepare machine operators for the national certification exam, this publication identifies the important knowledge required for productive performance by a plastics machine operator. Introductory material discusses the rationale for a national standard, uses of the Body of Knowledge,…
Machine Vision Within The Framework Of Collective Neural Assemblies
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1990-03-01
The proposed mechanism for designing a robust machine vision system is based on the dynamic activity generated by the various neural populations embedded in nervous tissue. It is postulated that a hierarchy of anatomically distinct tissue regions is involved in visual sensory information processing. Each region may be represented as a planar sheet of densely interconnected neural circuits. Spatially localized aggregates of these circuits represent collective neural assemblies. Four dynamically coupled neural populations are assumed to exist within each assembly. In this paper we present a state-variable model for a tissue sheet derived from empirical studies of population dynamics. Each population is modelled as a nonlinear second-order system. It is possible to emulate certain observed physiological and psychophysiological phenomena of biological vision by properly programming the interconnective gains. Important early visual phenomena such as temporal and spatial noise insensitivity, contrast sensitivity, and edge enhancement are discussed for a one-dimensional tissue model.
Uncertainty Management for Diagnostics and Prognostics of Batteries using Bayesian Techniques
NASA Technical Reports Server (NTRS)
Saha, Bhaskar; Goebel, Kai
2007-01-01
Uncertainty management has always been the key hurdle faced by diagnostics and prognostics algorithms. A Bayesian treatment of this problem provides an elegant and theoretically sound approach to the modern Condition-Based Maintenance (CBM)/Prognostic Health Management (PHM) paradigm. The application of Bayesian techniques to regression and classification in the form of the Relevance Vector Machine (RVM), and to state estimation in the form of Particle Filters (PF), provides a powerful tool for integrating the diagnosis and prognosis of battery health. The RVM, a Bayesian treatment of the Support Vector Machine (SVM), is used for model identification, while the PF framework uses the learnt model, statistical estimates of noise, and anticipated operational conditions to provide estimates of remaining useful life (RUL) in the form of a probability density function (PDF). This type of prognostics adds significant value to the management of any operation involving electrical systems.
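The sketch below illustrates the particle-filter stage of such a pipeline on a toy scalar health state with an assumed exponential degradation model, estimating RUL as the time to cross a failure threshold. The model, noise levels, and threshold are illustrative assumptions, not the paper's battery model.

```python
# Sketch: particle filter tracking a degrading health state, with an RUL estimate.
import numpy as np

rng = np.random.default_rng(3)
n, decay, meas_std, proc_std, threshold = 1000, 0.02, 0.05, 0.01, 0.5

particles = np.ones(n)                 # initial health state = 1.0
weights = np.full(n, 1.0 / n)

for t in range(1, 30):
    z = np.exp(-decay * t) + rng.normal(0, meas_std)   # synthetic measurement
    particles = particles * np.exp(-decay) + rng.normal(0, proc_std, n)
    weights *= np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < n / 2:  # resample on low effective sample size
        idx = rng.choice(n, n, p=weights)
        particles = particles[idx]
        weights = np.full(n, 1.0 / n)

# RUL PDF: steps until each particle decays below the failure threshold
rul = np.log(np.maximum(particles, threshold) / threshold) / decay
print("median RUL: %.1f steps" % np.median(rul))
```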
Jonnalagadda, Siddhartha Reddy; Li, Dingcheng; Sohn, Sunghwan; Wu, Stephen Tze-Inn; Wagholikar, Kavishwar; Torii, Manabu; Liu, Hongfang
2012-01-01
This paper describes the coreference resolution system submitted by Mayo Clinic for the 2011 i2b2/VA/Cincinnati shared task Track 1C. The goal of the task was to construct a system that links the markables corresponding to the same entity. The task organizers provided progress notes and discharge summaries that were annotated with the markables of treatment, problem, test, person, and pronoun. We used a multi-pass sieve algorithm that applies deterministic rules in the order of preciseness and simultaneously gathers information about the entities in the documents. Our system, MedCoref, also uses a state-of-the-art machine learning framework as an alternative to the final, rule-based pronoun resolution sieve. The best system that uses a multi-pass sieve has an overall score of 0.836 (average of B(3), MUC, Blanc, and CEAF F score) for the training set and 0.843 for the test set. A supervised machine learning system that typically uses a single function to find coreferents cannot accommodate irregularities encountered in data especially given the insufficient number of examples. On the other hand, a completely deterministic system could lead to a decrease in recall (sensitivity) when the rules are not exhaustive. The sieve-based framework allows one to combine reliable machine learning components with rules designed by experts. Using relatively simple rules, part-of-speech information, and semantic type properties, an effective coreference resolution system could be designed. The source code of the system described is available at https://sourceforge.net/projects/ohnlp/files/MedCoref.
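The following schematic echoes the multi-pass sieve architecture described above: deterministic passes run in order of decreasing precision, each merging coreference clusters. The rules and data model are invented placeholders for illustration, not MedCoref's actual sieves.

```python
# Sketch: ordered deterministic sieves merging coreference clusters.
def merge_by(clusters, same):
    merged = []
    for c in clusters:
        for m in merged:
            if any(same(a, b) for a in c for b in m):
                m.extend(c)
                break
        else:
            merged.append(c)
    return merged

def exact_match(a, b):   # pass 1: highly precise string match
    return a["text"].lower() == b["text"].lower()

def head_match(a, b):    # pass 2: looser head-word match, runs later
    return a["head"] == b["head"]

def resolve(mentions):
    clusters = [[m] for m in mentions]        # start from singletons
    for rule in (exact_match, head_match):    # most precise sieve first
        clusters = merge_by(clusters, rule)
    return clusters

mentions = [{"text": "the mass", "head": "mass"},
            {"text": "The mass", "head": "mass"},
            {"text": "chest pain", "head": "pain"}]
for c in resolve(mentions):
    print([m["text"] for m in c])
```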
A task scheduler framework for self-powered wireless sensors.
Nordman, Mikael M
2003-10-01
The cost and inconvenience of cabling is a factor limiting the widespread use of intelligent sensors. Recent developments in short-range, low-power radio seem to provide an opening to this problem, making the development of wireless sensors feasible. For these sensors, however, energy availability is a main concern. The common solution is either to use a battery or to harvest ambient energy. The benefit of harvested ambient energy is that the energy feeder can be considered to last a lifetime, saving the user from concerns related to energy management. The problem, however, is the unpredictable and unsteady behavior of ambient energy sources. This becomes a main concern for sensors that run multiple tasks at different priorities. This paper proposes a new scheduler framework that enables the reliable assignment of task priorities and scheduling in sensors powered by ambient energy. The framework, based on environment parameters, virtual queues, and a state machine with transition conditions, dynamically manages task execution according to priorities. The framework is assessed in a test system powered by a solar panel. The results show the functionality of the framework and how task execution is reliably handled without violating the assigned priority scheme.
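The sketch below conveys the core idea in miniature: a scheduler state machine whose transitions are driven by the harvested-energy level, with a priority queue of tasks. The states, threshold, and tasks are illustrative assumptions, not the paper's exact design.

```python
# Sketch: energy-aware task scheduling with a two-state machine.
import heapq

LOW, NORMAL = "LOW_ENERGY", "NORMAL"

class Scheduler:
    def __init__(self):
        self.state = NORMAL
        self.queue = []                      # (priority, name) min-heap

    def submit(self, priority, name):
        heapq.heappush(self.queue, (priority, name))

    def tick(self, stored_energy):
        # transition condition based on the energy buffer (joules, illustrative)
        self.state = LOW if stored_energy < 0.2 else NORMAL
        while self.queue:
            priority, name = self.queue[0]
            if self.state == LOW and priority > 0:
                break                        # defer non-critical tasks
            heapq.heappop(self.queue)
            print(self.state, "-> running", name)

s = Scheduler()
s.submit(0, "report_alarm")     # priority 0 = most critical
s.submit(2, "log_temperature")
s.tick(stored_energy=0.1)       # low energy: only the alarm runs
s.tick(stored_energy=1.0)       # recovered: the backlog drains
```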
Using Machine Learning to Advance Personality Assessment and Theory.
Bleidorn, Wiebke; Hopwood, Christopher James
2018-05-01
Machine learning has led to important advances in society. One of the most exciting applications of machine learning in psychological science has been the development of assessment tools that can powerfully predict human behavior and personality traits. Thus far, machine learning approaches to personality assessment have focused on the associations between social media and other digital records with established personality measures. The goal of this article is to expand the potential of machine learning approaches to personality assessment by embedding it in a more comprehensive construct validation framework. We review recent applications of machine learning to personality assessment, place machine learning research in the broader context of fundamental principles of construct validation, and provide recommendations for how to use machine learning to advance our understanding of personality.
Fast mental states decoding in mixed reality
De Massari, Daniele; Pacheco, Daniel; Malekshahi, Rahim; Betella, Alberto; Verschure, Paul F. M. J.; Birbaumer, Niels; Caria, Andrea
2014-01-01
The combination of Brain-Computer Interface (BCI) technology, allowing online monitoring and decoding of brain activity, with virtual and mixed reality (MR) systems may help to shape and guide implicit and explicit learning using ecological scenarios. Real-time information of ongoing brain states acquired through BCI might be exploited for controlling data presentation in virtual environments. Brain states discrimination during mixed reality experience is thus critical for adapting specific data features to contingent brain activity. In this study we recorded electroencephalographic (EEG) data while participants experienced MR scenarios implemented through the eXperience Induction Machine (XIM). The XIM is a novel framework modeling the integration of a sensing system that evaluates and measures physiological and psychological states with a number of actuators and effectors that coherently react to the user's actions. We then assessed continuous EEG-based discrimination of spatial navigation, reading and calculation performed in MR, using linear discriminant analysis (LDA) and support vector machine (SVM) classifiers. Dynamic single trial classification showed high accuracy of LDA and SVM classifiers in detecting multiple brain states as well as in differentiating between high and low mental workload, using a 5 s time-window shifting every 200 ms. Our results indicate overall better performance of LDA with respect to SVM and suggest applicability of our approach in a BCI-controlled MR scenario. Ultimately, successful prediction of brain states might be used to drive adaptation of data representation in order to boost information processing in MR. PMID:25505878
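A schematic version of this sliding-window decoding setup is sketched below, with synthetic signals standing in for the XIM recordings; the band-power feature, channel count, and task boundary are illustrative assumptions.

```python
# Sketch: 5 s windows shifted every 200 ms, classified with LDA and SVM.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
fs = 250
win, hop = 5 * fs, int(0.2 * fs)                 # 5 s window, 200 ms shift
eeg = rng.standard_normal((3, 60 * fs))          # 3 channels, 60 s (synthetic)
eeg[:, 30 * fs:] *= 2.0                          # second task: higher amplitude
labels_t = (np.arange(60 * fs) >= 30 * fs).astype(int)  # task switch at 30 s

X, y = [], []
for start in range(0, eeg.shape[1] - win, hop):
    seg = eeg[:, start:start + win]
    X.append(np.log(seg.var(axis=1)))            # log power per channel
    y.append(labels_t[start + win - 1])
X, y = np.array(X), np.array(y)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf"))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```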
Using Pipelined XNOR Logic to Reduce SEU Risks in State Machines
NASA Technical Reports Server (NTRS)
Le, Martin; Zheng, Xin; Katanyoutant, Sunant
2008-01-01
Single-event upsets (SEUs) pose great threats to the state-machine control logic of avionic systems, which is frequently used to control sequences of events and to qualify protocols. The risks of SEUs manifest in two ways: (a) the state machine's state information is changed, causing the state machine to transition unexpectedly to another state; (b) due to the asynchronous nature of an SEU, the state machine's state registers become metastable, temporarily causing any combinational logic associated with the metastable registers to malfunction. Effect (a) can be mitigated with methods such as triple-modular redundancy (TMR). However, effect (b) cannot be eliminated and can degrade the effectiveness of any mitigation of effect (a). Although there is no way to completely eliminate the risk of SEU-induced errors, the risk can be made very small by combining very fast state-machine logic with error-detection logic. The present method therefore has two main design elements. The first is to make the state-machine logic as fast as possible by basing it on the fastest generic state-machine design, the one-hot state machine. The second is to design fast error-detection logic circuitry optimized for implementation in a field-programmable gate array (FPGA) architecture: in the resulting design, the one-hot state machine is fitted with a multiple-input XNOR gate for detection of illegal states. The XNOR gate is implemented with lookup tables and pipelined for high speed. In this method, all the logic must be designed manually because no currently available logic-synthesis software tool can produce optimal solutions for design problems of this type. However, some assistance is provided by a script, written for this purpose in the Python language (an object-oriented interpretive computer language), that automatically generates hardware description language (HDL) code from state-transition rules.
New numerical approach for the modelling of machining applied to aeronautical structural parts
NASA Astrophysics Data System (ADS)
Rambaud, Pierrick; Mocellin, Katia
2018-05-01
The manufacturing of aluminium alloy structural aerospace parts involves several steps: forming (rolling, forging, etc.), heat treatments, and machining. Before machining, these manufacturing processes have embedded residual stresses into the workpiece. The final geometry is obtained during this last step, when up to 90% of the raw material volume is removed by machining. During this operation, the mechanical equilibrium of the part is in constant evolution due to the redistribution of the initial stresses. This redistribution is the main cause of workpiece deflections during machining and of distortions after unclamping. Both may lead to non-conformity of the part with the geometrical and dimensional specifications, and therefore to rejection of the part or to additional conforming steps. In order to improve machining accuracy and process robustness, the effect of the residual stresses has to be considered in the definition of the machining process plan and even in the geometrical definition of the part. In this paper, the authors present two new numerical approaches to the modelling of machining of aeronautical structural parts. The first deals with the use of an immersed-volume framework to model the cutting step, improving the robustness and quality of the resulting mesh compared with the previous version. The second concerns the mechanical modelling of the machining problem. The authors show that, in the framework of rolled aluminium parts, a linear elasticity model is functional in the finite element formulation and promising for the reduction of computation times.
Thread mapping using system-level model for shared memory multicores
NASA Astrophysics Data System (ADS)
Mitra, Reshmi
Exploring thread-to-core mapping options for a parallel application on a multicore architecture is computationally very expensive. For the same algorithm, the mapping strategy (MS) with the best response time may change with data size and thread count. The primary challenge is to design a fast, accurate, and automatic framework for exploring these MSs for large data-intensive applications, so that users can explore the design space within reasonable machine hours without a thorough understanding of how the code interacts with the platform. Response time is related to the cycles per instruction retired (CPI), taking into account both active and sleep states of the pipeline. This work establishes a hybrid approach, based on a Markov Chain Model (MCM) and a Model Tree (MT), for system-level steady-state CPI prediction. It is designed for shared-memory multicore processors with coarse-grained multithreading. Thread status is represented by the MCM states, and program characteristics are modeled as the transition probabilities, representing the system moving between active and suspended thread states. The MT model extrapolates these probabilities for the actual application size (AS) from smaller-AS performance. This aspect of the framework, along with the use of mathematical expressions for the actual-AS performance information, results in a tremendous reduction in CPI prediction time. The framework is validated using an electromagnetics application. The average prediction error for steady-state CPI across 12 different MSs is less than 1%. The total run time of the model is of the order of minutes, whereas the actual application execution time is measured in days.
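A toy version of the Markov-chain part of this idea is sketched below: a two-state (active/suspended) chain per thread, whose steady-state occupancy weights per-state CPI values. The transition probabilities and CPI values are invented for illustration.

```python
# Sketch: steady-state CPI from a two-state Markov chain.
import numpy as np

P = np.array([[0.90, 0.10],     # active -> {active, suspended}
              [0.30, 0.70]])    # suspended -> {active, suspended}
cpi_state = np.array([1.2, 4.0])  # effective CPI in each state (illustrative)

# stationary distribution: left eigenvector of P with eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

print("occupancy:", pi)                   # [0.75, 0.25] for these rates
print("steady-state CPI:", pi @ cpi_state)
```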
Within-brain classification for brain tumor segmentation.
Havaei, Mohammad; Larochelle, Hugo; Poulin, Philippe; Jodoin, Pierre-Marc
2016-05-01
In this paper, we investigate a framework for interactive brain tumor segmentation which, at its core, treats the problem as a machine learning problem. This gives it an advantage over typical machine learning methods for this task, in which generalization is made across brains and which therefore need to deal with intensity bias correction and other MRI-specific noise. We avoid these issues by approaching the problem as one of within-brain generalization. Specifically, we propose a semi-automatic method that segments a brain tumor by training and generalizing within that brain only, based on minimal user interaction. We investigate how adding spatial feature coordinates (i.e., i, j, k) to the intensity features can significantly improve the performance of different classification methods such as SVM, kNN, and random forests; this is only possible within an interactive framework. We also investigate the use of a more appropriate kernel and the adaptation of hyper-parameters specifically for each brain. As a result of these experiments, we obtain an interactive method whose results on the MICCAI-BRATS 2013 dataset are the second most accurate among published methods, while using significantly less memory and processing power than most state-of-the-art methods.
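A minimal sketch of the within-brain idea follows: train on a handful of labeled voxels from the same volume, with spatial coordinates appended to the intensity feature, then generalize to the rest of that brain. The data are synthetic and the classifier settings are illustrative.

```python
# Sketch: voxel classification with intensity + (i, j, k) spatial features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
vol = rng.normal(0, 1, (32, 32, 32))
vol[10:20, 10:20, 10:20] += 2.0                  # bright synthetic "tumor"

ii, jj, kk = np.indices(vol.shape)
features = np.stack([vol.ravel(), ii.ravel(), jj.ravel(), kk.ravel()], axis=1)
truth = np.zeros(vol.shape, dtype=int)
truth[10:20, 10:20, 10:20] = 1

# simulate sparse user interaction: a small set of labeled voxels
idx = rng.choice(features.shape[0], 500, replace=False)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features[idx], truth.ravel()[idx])

pred = clf.predict(features).reshape(vol.shape)
print("voxel accuracy:", (pred == truth).mean())
```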
An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.
Crouser, R J; Chang, R
2012-12-01
Visual Analytics is "the science of analytical reasoning facilitated by visual interactive interfaces". The goal of this field is to develop tools and methodologies for approaching problems whose size and complexity render them intractable without the close coupling of both human and machine analysis. Researchers have explored this coupling in many venues: VAST, Vis, InfoVis, CHI, KDD, IUI, and more. While there have been myriad promising examples of human-computer collaboration, there exists no common language for comparing systems or describing the benefits afforded by designing for such collaboration. We argue that this area would benefit significantly from consensus about the design attributes that define and distinguish existing techniques. In this work, we have reviewed 1,271 papers from many of the top-ranking conferences in visual analytics, human-computer interaction, and visualization. From these, we have identified 49 papers that are representative of the study of human-computer collaborative problem-solving, and provide a thorough overview of the current state-of-the-art. Our analysis has uncovered key patterns of design hinging on human and machine-intelligence affordances, and also indicates unexplored avenues in the study of this area. The results of this analysis provide a common framework for understanding these seemingly disparate branches of inquiry, which we hope will motivate future work in the field.
ATLAS Metadata Infrastructure Evolution for Run 2 and Beyond
NASA Astrophysics Data System (ADS)
van Gemmeren, P.; Cranshaw, J.; Malon, D.; Vaniachine, A.
2015-12-01
ATLAS developed and employed for Run 1 of the Large Hadron Collider a sophisticated infrastructure for metadata handling in event processing jobs. This infrastructure profits from a rich feature set provided by the ATLAS execution control framework, including standardized interfaces and invocation mechanisms for tools and services, segregation of transient data stores with concomitant object lifetime management, and mechanisms for handling occurrences asynchronous to the control framework's state machine transitions. This metadata infrastructure is evolving and being extended for Run 2 to allow its use and reuse in downstream physics analyses, analyses that may or may not utilize the ATLAS control framework. At the same time, multiprocessing versions of the control framework and the requirements of future multithreaded frameworks are leading to redesign of components that use an incident-handling approach to asynchrony. The increased use of scatter-gather architectures, both local and distributed, requires further enhancement of metadata infrastructure in order to ensure semantic coherence and robust bookkeeping. This paper describes the evolution of ATLAS metadata infrastructure for Run 2 and beyond, including the transition to dual-use tools—tools that can operate inside or outside the ATLAS control framework—and the implications thereof. It further examines how the design of this infrastructure is changing to accommodate the requirements of future frameworks and emerging event processing architectures.
Campanella, Gabriele; Rajanna, Arjun R; Corsale, Lorraine; Schüffler, Peter J; Yagi, Yukako; Fuchs, Thomas J
2018-04-01
Pathology is on the verge of a profound change from an analog and qualitative to a digital and quantitative discipline. This change is mostly driven by the high-throughput scanning of microscope slides in modern pathology departments, reaching tens of thousands of digital slides per month. The resulting vast digital archives form the basis of clinical use in digital pathology and allow large-scale machine learning in computational pathology. One of the most crucial bottlenecks of high-throughput scanning is quality control (QC). Currently, digital slides are screened manually for out-of-focus regions, to compensate for the limitations of scanner software. We present a solution to this problem by introducing a benchmark dataset for blur detection and an in-depth comparison of state-of-the-art sharpness descriptors and their prediction performance within a random forest framework. Furthermore, we show that convolutional neural networks, such as residual networks, can be used to train blur detectors from scratch. We thoroughly evaluate the accuracy of feature-based and deep learning based approaches for sharpness classification (99.74% accuracy) and regression (MSE 0.004), and additionally compare them to domain experts in a comprehensive human perception study. Our pipeline outputs spatial heatmaps that quantify and localize blurred areas on a slide. Finally, we tested the proposed framework in the clinical setting and demonstrate superior performance over the state-of-the-art QC pipeline comprising commercial software and human expert inspection, reducing the error rate from 17% to 4.7%. Copyright © 2017. Published by Elsevier Ltd.
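The sketch below shows a generic sharpness descriptor of the kind such comparisons include, variance of the Laplacian, computed per tile to build a blur heatmap; it is an illustrative stand-in, not the paper's benchmarked pipeline.

```python
# Sketch: per-tile variance-of-Laplacian as a blur heatmap.
import numpy as np
from scipy.ndimage import laplace

def blur_heatmap(img, tile=64):
    h, w = img.shape
    heat = np.zeros((h // tile, w // tile))
    for r in range(heat.shape[0]):
        for c in range(heat.shape[1]):
            patch = img[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            heat[r, c] = laplace(patch.astype(float)).var()
    return heat   # low values indicate likely out-of-focus tiles

rng = np.random.default_rng(6)
img = rng.integers(0, 255, (256, 256)).astype(float)
img[:, 128:] = 128.0                 # right half is flat, i.e. "blurred"
print(blur_heatmap(img).round(1))
```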
[Research on infrared safety protection system for machine tool].
Zhang, Shuan-Ji; Zhang, Zhi-Ling; Yan, Hui-Ying; Wang, Song-De
2008-04-01
In order to ensure personal safety and prevent injury accidents in machine tool operation, an infrared machine tool safety system was designed with an infrared transmitting-receiving module, a memory self-locking relay, and a voice recording-playing module. When the operator does not enter the danger area, the system has no response. Once the operator's whole or part of body enters the danger area and blocks the infrared beam, the system alarms and outputs a control signal to the machine tool's executive element, bringing the machine to an emergency stop to prevent equipment damage and personal injury. The system has a modular framework and many advantages, including safety, reliability, general applicability, circuit simplicity, maintenance convenience, low power consumption, low cost, working stability, easy debugging, vibration resistance, and interference resistance. It is suitable for installation and use in different machine tools such as punch presses, plastic injection machines, numerically controlled machines, plate cutting machines, pipe bending machines, and hydraulic presses.
Hierarchy of Gambling Choices: A Framework for Examining EGM Gambling Environment Preferences.
Thorne, Hannah Briony; Rockloff, Matthew Justus; Langham, Erika; Li, En
2016-12-01
This paper presents the Hierarchy of Gambling Choices (HGC), a consumer-oriented framework for understanding the key environmental and contextual features that influence people's selections of online and venue-based electronic gaming machines (EGMs). The HGC framework proposes that EGM gamblers make choices in selecting EGM gambling experiences utilising Tversky's Elimination-by-Aspects model (Psychol Rev 79(4):281-299, 1972), and organise their choices hierarchically by virtue of EGMs being an "experience good" (Nelson in J Polit Econ 78(2):311-329, 1970). EGM features are divided into three levels: the platform, including online, mobile, or land-based; the provider or specific venue in which the gambling occurs; and the game or machine characteristics, such as graphical themes and bonus features. This framework contributes to the gambling field by providing a way to systematically explore the environment surrounding EGM gambling and how it affects behaviour.
A stochastic and dynamical view of pluripotency in mouse embryonic stem cells
Lee, Esther J.
2018-01-01
Pluripotent embryonic stem cells are of paramount importance for biomedical sciences because of their innate ability for self-renewal and differentiation into all major cell lines. The fateful decision to exit or remain in the pluripotent state is regulated by complex genetic regulatory networks. The rapid growth of single-cell sequencing data has greatly stimulated applications of statistical and machine learning methods for inferring topologies of pluripotency regulating genetic networks. The inferred network topologies, however, often only encode Boolean information while remaining silent about the roles of dynamics and molecular stochasticity inherent in gene expression. Herein we develop a framework for systematically extending Boolean-level network topologies into higher resolution models of networks which explicitly account for the promoter architectures and gene state switching dynamics. We show the framework to be useful for disentangling the various contributions that gene switching, external signaling, and network topology make to the global heterogeneity and dynamics of transcription factor populations. We find the pluripotent state of the network to be a steady state which is robust to global variations of gene switching rates which we argue are a good proxy for epigenetic states of individual promoters. The temporal dynamics of exiting the pluripotent state, on the other hand, is significantly influenced by the rates of genetic switching which makes cells more responsive to changes in extracellular signals. PMID:29451874
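The flavor of promoter-level model that such a framework layers onto a Boolean topology can be illustrated with a Gillespie simulation of a two-state (on/off) gene that transcribes only in the "on" state; the rates below are illustrative, not taken from the paper.

```python
# Sketch: Gillespie simulation of a telegraph (two-state promoter) gene model.
import numpy as np

rng = np.random.default_rng(7)
k_on, k_off, k_tx, k_deg = 0.1, 0.05, 5.0, 1.0  # switching, transcription, decay

t, t_end, gene_on, m = 0.0, 500.0, False, 0
samples = []
while t < t_end:
    rates = np.array([k_on * (not gene_on),   # promoter turns on
                      k_off * gene_on,        # promoter turns off
                      k_tx * gene_on,         # transcription
                      k_deg * m])             # mRNA degradation
    total = rates.sum()
    t += rng.exponential(1.0 / total)         # time to the next reaction
    r = rng.choice(4, p=rates / total)        # which reaction fires
    if r == 0: gene_on = True
    elif r == 1: gene_on = False
    elif r == 2: m += 1
    else: m -= 1
    samples.append(m)

print("mean mRNA:", np.mean(samples))  # bursty expression, broad distribution
```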
CellCognition: time-resolved phenotype annotation in high-throughput live cell imaging.
Held, Michael; Schmitz, Michael H A; Fischer, Bernd; Walter, Thomas; Neumann, Beate; Olma, Michael H; Peter, Matthias; Ellenberg, Jan; Gerlich, Daniel W
2010-09-01
Fluorescence time-lapse imaging has become a powerful tool to investigate complex dynamic processes such as cell division or intracellular trafficking. Automated microscopes generate time-resolved imaging data at high throughput, yet tools for quantification of large-scale movie data are largely missing. Here we present CellCognition, a computational framework to annotate complex cellular dynamics. We developed a machine-learning method that combines state-of-the-art classification with hidden Markov modeling for annotation of the progression through morphologically distinct biological states. Incorporation of time information into the annotation scheme was essential to suppress classification noise at state transitions and confusion between different functional states with similar morphology. We demonstrate generic applicability in different assays and perturbation conditions, including a candidate-based RNA interference screen for regulators of mitotic exit in human cells. CellCognition is published as open source software, enabling live-cell imaging-based screening with assays that directly score cellular dynamics.
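A schematic version of the time-resolved annotation idea is sketched below: per-frame class probabilities from any classifier are smoothed with a hidden Markov model so that transitions between morphological states stay plausible. The Viterbi decoder is generic; the matrices are illustrative, not CellCognition's trained parameters.

```python
# Sketch: HMM (Viterbi) smoothing of noisy per-frame classifications.
import numpy as np

def viterbi(log_emit, log_trans, log_prior):
    T, S = log_emit.shape
    dp, back = log_prior + log_emit[0], []
    for t in range(1, T):
        scores = dp[:, None] + log_trans       # scores[i, j]: from i to j
        back.append(scores.argmax(axis=0))
        dp = scores.max(axis=0) + log_emit[t]
    path = [int(dp.argmax())]
    for b in reversed(back):                   # trace the best path backwards
        path.append(int(b[path[-1]]))
    return path[::-1]

# 3 morphological states; the classifier is noisy around the 0 -> 1 boundary
probs = np.array([[.9, .1, .0], [.8, .2, .0], [.3, .6, .1], [.5, .4, .1],
                  [.1, .7, .2], [.1, .3, .6], [.0, .2, .8]])
trans = np.array([[.9, .1, .0], [.0, .9, .1], [.0, .0, 1.]]) + 1e-6  # forward-only
prior = np.array([1.0, 1e-6, 1e-6])
print(viterbi(np.log(probs + 1e-9), np.log(trans), np.log(prior)))
```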
Interaction with Machine Improvisation
NASA Astrophysics Data System (ADS)
Assayag, Gerard; Bloch, George; Cont, Arshia; Dubnov, Shlomo
We describe two multi-agent architectures for improvisation-oriented musician-machine interaction systems that learn in real time from human performers. The improvisation kernel is based on sequence modeling and statistical learning. We present two frameworks of interaction with this kernel. In the first, the stylistic interaction is guided by a human operator in front of an interactive computer environment. In the second, the stylistic interaction is delegated to machine intelligence, so knowledge propagation and decision-making are handled by the computer alone. The first framework involves a hybrid architecture using two popular composition/performance environments, Max and OpenMusic, which are put to work and communicate together, each handling the process at a different time/memory scale. The second framework shares the same representational schemes with the first but uses an Active Learning architecture based on collaborative, competitive, and memory-based learning to handle stylistic interactions. Both systems are capable of processing real-time audio/video as well as MIDI. After discussing the general cognitive background of improvisation practices, the statistical modelling tools and the concurrent agent architecture are presented. Then, an Active Learning scheme is described and considered in terms of using different improvisation regimes for improvisation planning. Finally, we provide more details about the different system implementations and describe several performances with the system.
Applying data fusion techniques for benthic habitat mapping and monitoring in a coral reef ecosystem
NASA Astrophysics Data System (ADS)
Zhang, Caiyun
2015-06-01
Accurate mapping and effective monitoring of benthic habitat in the Florida Keys are critical in developing management strategies for this valuable coral reef ecosystem. For this study, a framework was designed for automated benthic habitat mapping by combining multiple data sources (hyperspectral, aerial photography, and bathymetry data) and four contemporary imagery processing techniques (data fusion, Object-based Image Analysis (OBIA), machine learning, and ensemble analysis). In the framework, 1-m digital aerial photograph was first merged with 17-m hyperspectral imagery and 10-m bathymetry data using a pixel/feature-level fusion strategy. The fused dataset was then preclassified by three machine learning algorithms (Random Forest, Support Vector Machines, and k-Nearest Neighbor). Final object-based habitat maps were produced through ensemble analysis of outcomes from three classifiers. The framework was tested for classifying a group-level (3-class) and code-level (9-class) habitats in a portion of the Florida Keys. Informative and accurate habitat maps were achieved with an overall accuracy of 88.5% and 83.5% for the group-level and code-level classifications, respectively.
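A condensed sketch of the classify-then-ensemble stage follows: three classifiers vote on each image object via soft (probability-averaging) voting. The feature columns stand in for the fused hyperspectral/photo/bathymetry stack; the data are synthetic and all settings illustrative.

```python
# Sketch: soft-voting ensemble of RF, SVM, and kNN classifiers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X = rng.normal(size=(300, 6))                   # fused per-object features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # two habitat classes (toy)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("svm", SVC(probability=True)),
                ("knn", KNeighborsClassifier(n_neighbors=5))],
    voting="soft")                               # average class probabilities
ensemble.fit(X[:200], y[:200])
print("held-out accuracy:", ensemble.score(X[200:], y[200:]))
```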
Ye, Qing; Pan, Hao; Liu, Changhua
2015-01-01
This research proposes a novel framework for simultaneous failure diagnosis of final drives, comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on paired sparse Bayesian extreme learning machines, which are trained only on single failure modes and inherit the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold, which converts the probability outputs of the classifiers into final simultaneous failure modes, this research uses samples containing both single and simultaneous failure modes together with a grid search method that is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of the existing approaches. PMID:25722717
Machine learning in sentiment reconstruction of the simulated stock market
NASA Astrophysics Data System (ADS)
Goykhman, Mikhail; Teimouri, Ali
2018-02-01
In this paper we continue the study of the simulated stock market framework defined by the driving sentiment processes. We focus on the market environment driven by the buy/sell trading sentiment process of the Markov chain type. We apply the methodology of the Hidden Markov Models and the Recurrent Neural Networks to reconstruct the transition probabilities matrix of the Markov sentiment process and recover the underlying sentiment states from the observed stock price behavior. We demonstrate that the Hidden Markov Model can successfully recover the transition probabilities matrix for the hidden sentiment process of the Markov Chain type. We also demonstrate that the Recurrent Neural Network can successfully recover the hidden sentiment states from the observed simulated stock price time series.
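A hedged sketch of the reconstruction task follows: fit a Gaussian HMM to simulated log-returns whose drift and volatility depend on a hidden two-state sentiment chain, then inspect the learned transition matrix. The use of the hmmlearn library and all parameter values are assumptions for illustration, not the paper's setup.

```python
# Sketch: recovering a hidden sentiment chain's transition matrix with an HMM.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(9)
P_true = np.array([[0.98, 0.02], [0.05, 0.95]])
mu, sd = [0.001, -0.002], [0.01, 0.03]   # per-state drift and volatility

states, s = [], 0
for _ in range(5000):                    # simulate the hidden Markov sentiment
    states.append(s)
    s = rng.choice(2, p=P_true[s])
returns = rng.normal(np.take(mu, states), np.take(sd, states))

model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200)
model.fit(returns.reshape(-1, 1))
# Note: the learned state labels may be permuted relative to the simulation.
print("learned transition matrix:\n", model.transmat_.round(3))
```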
Schiffer, Johannes; Efimov, Denis; Ortega, Romeo; Barabanov, Nikita
2017-08-13
Conditions for almost global stability of an operating point of a realistic model of a synchronous generator with constant field current connected to an infinite bus are derived. The analysis is conducted by employing the recently proposed concept of input-to-state stability (ISS)-Leonov functions, which is an extension of the powerful cell structure principle developed by Leonov and Noldus to the ISS framework. Compared with the original ideas of Leonov and Noldus, the ISS-Leonov approach has the advantage of providing additional robustness guarantees. The efficiency of the derived sufficient conditions is illustrated via numerical experiments. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
State machine analysis of sensor data from dynamic processes
Cook, William R.; Brabson, John M.; Deland, Sharon M.
2003-12-23
A state machine model analyzes sensor data from dynamic processes at a facility to identify the actual processes that were performed at the facility during a period of interest for the purpose of remote facility inspection. An inspector can further input the expected operations into the state machine model and compare the expected, or declared, processes to the actual processes to identify undeclared processes at the facility. The state machine analysis enables the generation of knowledge about the state of the facility at all levels, from location of physical objects to complex operational concepts. Therefore, the state machine method and apparatus may benefit any agency or business with sensored facilities that stores or manipulates expensive, dangerous, or controlled materials or information.
ERIC Educational Resources Information Center
Huang, Yifen
2010-01-01
Mixed-initiative clustering is a task where a user and a machine work collaboratively to analyze a large set of documents. We hypothesize that a user and a machine can both learn better clustering models through enriched communication and interactive learning from each other. The first contribution of this thesis is providing a framework of…
Vision-Based People Detection System for Heavy Machine Applications
Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick
2016-01-01
This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance. PMID:26805838
Machine learning properties of materials and molecules with entropy-regularized kernels
NASA Astrophysics Data System (ADS)
Ceriotti, Michele; Bartók, Albert; Csányi, Gábor; De, Sandip
Application of machine-learning methods to physics, chemistry and materials science is gaining traction as a strategy to obtain accurate predictions of the properties of matter at a fraction of the typical cost of quantum mechanical electronic structure calculations. In this endeavor, one can leverage general-purpose frameworks for supervised-learning. It is however very important that the input data - for instance the positions of atoms in a molecule or solid - is processed into a form that reflects all the underlying physical symmetries of the problem, and that possesses the regularity properties that are required by machine-learning algorithms. Here we introduce a general strategy to build a representation of this kind. We will start from existing approaches to compare local environments (basically, groups of atoms), and combine them using techniques borrowed from optimal transport theory, discussing the relation between this idea and additive energy decompositions. We will present a few examples demonstrating the potential of this approach as a tool to predict molecular and materials' properties with an accuracy on par with state-of-the-art electronic structure methods. MARVEL NCCR (Swiss National Science Foundation) and ERC StG HBMAP (European Research Council, G.A. 677013).
Winston, Courtney P.; Sallis, James F.; Swartz, Michael D.; Hoelscher, Deanna M.; Peskin, Melissa F.
2013-01-01
Introduction: Hospitals are the primary worksite of over 5 million adults in the United States, and millions of meals are procured and consumed in this setting. Because many worksite nutrition initiatives use an ecological framework to improve the dietary habits of employees, the nutrition values of foods served in hospitals are receiving attention.
Methods: This study used the Hospital Nutrition Environment Scan for Cafeterias, Vending Machines, and Gift Shops to quantitatively describe the consumer nutrition environments of 39 hospitals in Southern California. Data were collected by visiting each facility once from February 2012 through May 2012.
Results: On average, hospitals achieved only 29%, 33%, and less than 1% of the total possible points for their cafeteria, vending machines, and gift shops sections, respectively; overall, hospitals scored 25% of the total possible points. Large facility size and contracted food service operations were associated with some healthy practices in hospital cafeterias, but we found no association between these variables and the sectional or overall nutrition composite scores.
Conclusion: The average consumer nutrition environment of hospitals in this sample was minimally conducive to healthful eating. Nutrition-related interventions are warranted in hospital settings. PMID:23823699
A hybrid prognostic model for multistep ahead prediction of machine condition
NASA Astrophysics Data System (ADS)
Roulias, D.; Loutas, T. H.; Kostopoulos, V.
2012-05-01
Prognostics are the future trend in condition-based maintenance. In the current framework, a data-driven prognostic model is developed. The typical procedure for developing such a model comprises (a) the selection of features that correlate well with the gradual degradation of the machine and (b) the training of a mathematical tool. In this work, the data are taken from a laboratory-scale single-stage gearbox under multi-sensor monitoring. Tests monitoring the condition of the gear pair from the healthy state until total breakdown, over several days of continuous operation, were conducted. After basic pre-processing of the acquired data, an indicator that correlated well with the gearbox condition was obtained. Subsequently, the time series is split into a few distinguishable time regions via an intelligent data clustering scheme, and each operating region is modelled with a feed-forward artificial neural network (FFANN) scheme. The performance of the proposed model is tested by applying the system to predict the machine degradation level on unseen data. The results show the plausibility and effectiveness of the model in following the trend of the time series, even when a sudden change occurs. Moreover, the model shows an ability to generalise, for application to similar mechanical assets.
Interpretable Decision Sets: A Joint Framework for Description and Prediction
Lakkaraju, Himabindu; Bach, Stephen H.; Leskovec, Jure
2016-01-01
One of the most important obstacles to deploying predictive models is the fact that humans do not understand and trust them. Knowing which variables are important in a model’s prediction and how they are combined can be very powerful in helping people understand and trust automatic decision making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easily interpretable. We formalize decision set learning through an objective function that simultaneously optimizes accuracy and interpretability of the rules. In particular, our approach learns short, accurate, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a non-monotone submodular function, which we efficiently optimize to find a near-optimal set of rules. Experiments show that interpretable decision sets are as accurate at classification as state-of-the-art machine learning techniques. They are also three times smaller on average than rule-based models learned by other methods. Finally, results of a user study show that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets and write descriptions of classes based on them faster and more accurately than with other rule-based models that were designed for interpretability. Overall, our framework provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency. PMID:27853627
A Novel Dynamic Update Framework for Epileptic Seizure Prediction
Wang, Minghui; Hong, Xiaojun; Han, Jie
2014-01-01
Epileptic seizure prediction is a difficult problem in clinical applications, and it has the potential to significantly improve the daily lives of patients whose seizures cannot be controlled by either drugs or surgery. However, most current studies of epileptic seizure prediction focus only on high sensitivity and a low false-positive rate and lack flexibility for the variety of epileptic seizures and patients' physical conditions. Therefore, a novel dynamic update framework for epileptic seizure prediction is proposed in this paper. In this framework, two basic sample pools are constructed and updated dynamically. Furthermore, the prediction model can be updated to be the most appropriate one for predicting the arrival of seizures. Mahalanobis distance is introduced here to address side information by measuring the distance between two data sets. In addition, a multichannel feature extraction method based on the Hilbert-Huang transform and an extreme learning machine is used to extract the features of a patient's preseizure state against the normal state. Finally, a dynamic update epileptic seizure prediction system is built. Simulations on the Freiburg database show that the proposed system performs better than one without updates. This research is helpful for clinical applications, especially the development of online portable devices. PMID:25050381
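The framework above leans on the Mahalanobis distance to compare two data sets. As a minimal sketch of that step (not the authors' implementation; the feature dimensions and set sizes below are invented stand-ins for EEG features), the distance between the means of two feature clouds under a pooled covariance can be computed as:

```python
import numpy as np

def mahalanobis_between_sets(X, Y):
    """Mahalanobis distance between the means of two data sets,
    using their pooled covariance (one common way to compare sets)."""
    diff = X.mean(axis=0) - Y.mean(axis=0)
    pooled = (np.cov(X, rowvar=False) + np.cov(Y, rowvar=False)) / 2.0
    return float(np.sqrt(diff @ np.linalg.inv(pooled) @ diff))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(200, 4))    # stand-in normal-state features
preseizure = rng.normal(0.8, 1.0, size=(200, 4))  # stand-in preseizure features
print(mahalanobis_between_sets(baseline, preseizure))
```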
A novel dynamic update framework for epileptic seizure prediction.
Han, Min; Ge, Sunan; Wang, Minghui; Hong, Xiaojun; Han, Jie
2014-01-01
Epileptic seizure prediction is a difficult problem in clinical applications, and it has the potential to significantly improve the daily lives of patients whose seizures cannot be controlled by either drugs or surgery. However, most current studies of epileptic seizure prediction focus only on high sensitivity and a low false-positive rate and lack flexibility for the variety of epileptic seizures and patients' physical conditions. Therefore, a novel dynamic update framework for epileptic seizure prediction is proposed in this paper. In this framework, two basic sample pools are constructed and updated dynamically. Furthermore, the prediction model can be updated to be the most appropriate one for predicting the arrival of seizures. Mahalanobis distance is introduced here to address side information by measuring the distance between two data sets. In addition, a multichannel feature extraction method based on the Hilbert-Huang transform and an extreme learning machine is used to extract the features of a patient's preseizure state against the normal state. Finally, a dynamic update epileptic seizure prediction system is built. Simulations on the Freiburg database show that the proposed system performs better than one without updates. This research is helpful for clinical applications, especially the development of online portable devices.
Task-specific image partitioning.
Kim, Sungwoong; Nowozin, Sebastian; Kohli, Pushmeet; Yoo, Chang D
2013-02-01
Image partitioning is an important preprocessing step for many of the state-of-the-art algorithms used for performing high-level computer vision tasks. Typically, partitioning is conducted without regard to the task at hand. We propose a task-specific image partitioning framework to produce a region-based image representation that leads to higher task performance than task-oblivious partitioning frameworks and the few existing supervised partitioning frameworks. The proposed method partitions the image by means of correlation clustering, maximizing a linear discriminant function defined over a superpixel graph. The parameters of the discriminant function that define task-specific similarity/dissimilarity among superpixels are estimated based on a structured support vector machine (S-SVM) using task-specific training data. The S-SVM learning leads to better generalization ability, while the construction of the superpixel graph used to define the discriminant function allows a rich set of features to be incorporated to improve discriminability and robustness. We evaluate the learned task-aware partitioning algorithms on three benchmark datasets. Results show that task-aware partitioning leads to better labeling performance than the partitioning computed by state-of-the-art general-purpose and supervised partitioning algorithms. We believe that the task-specific image partitioning paradigm is widely applicable to improving performance in high-level image understanding tasks.
Ant-Based Cyber Defense (also known as
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glenn Fink, PNNL
2015-09-29
ABCD is a four-level hierarchy with human supervisors at the top, a top-level agent called a Sergeant controlling each enclave, Sentinel agents located at each monitored host, and mobile Sensor agents that swarm through the enclaves to detect cyber malice and misconfigurations. The code comprises four parts: (1) the core agent framework, (2) the user interface and visualization, (3) test-range software to create a network of virtual machines including a simulated Internet and user and host activity emulation scripts, and (4) a test harness to allow the safe running of adversarial code within the framework of monitored virtual machines.
NASA Astrophysics Data System (ADS)
Laban, Shaban; El-Desouky, Aly
2014-05-01
To achieve rapid, simple, and reliable parallel processing of different types of tasks and big-data processing on any compute cluster, a lightweight messaging-based distributed application processing and workflow execution framework model is proposed. The framework is based on Apache ActiveMQ and the Simple (or Streaming) Text Oriented Message Protocol (STOMP). ActiveMQ, a popular and powerful open-source persistent messaging and integration-patterns server with scheduler capabilities, acts as the message broker in the framework. STOMP provides an interoperable wire format that allows framework programs to talk and interact with each other and with ActiveMQ easily. In order to use the message broker efficiently, a unified message and topic naming pattern is utilized to achieve the required operation. Only three Python programs and a simple library, used to unify and simplify the use of ActiveMQ and the STOMP protocol, are needed to use the framework. A watchdog program is used to monitor, remove, add, start, and stop any machine and/or its different tasks when necessary. For every machine, exactly one dedicated zookeeper program is used to start the different functions or tasks and the stompShell program needed for executing the user's required workflow. The stompShell instances execute workflow jobs based on received messages. A well-defined, simple, and flexible message structure, based on JavaScript Object Notation (JSON), is used to build complex workflow systems. JSON is also used for configuration and for communication between machines and programs. The framework is platform independent. Although the framework is built in Python, the actual workflow programs or jobs can be implemented in any programming language. The generic framework can be used in small national data centres for processing seismological and radionuclide data received from the International Data Centre (IDC) of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). It is also possible to extend the use of the framework to monitoring the IDC pipeline. The detailed design, implementation, conclusions, and future work of the proposed framework will be presented.
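The abstract does not publish its message schema or library, so the following is only a hedged sketch of the general pattern it describes: sending a JSON job description to an ActiveMQ broker over STOMP, here using the third-party stomp.py client. The queue name, credentials, and message fields are hypothetical.

```python
import json
import stomp  # third-party STOMP client (pip install stomp.py); not the paper's own library

# Hypothetical job message in the JSON style the abstract describes;
# the destination and field names are illustrative, not the paper's schema.
job = {"machine": "node01", "task": "waveform-correlation", "args": ["day001.mseed"]}

conn = stomp.Connection([("localhost", 61613)])  # default ActiveMQ STOMP port
conn.connect("admin", "admin", wait=True)
conn.send(destination="/queue/node01.tasks", body=json.dumps(job),
          headers={"content-type": "application/json"})
conn.disconnect()
```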
Li, Yue; Jha, Devesh K; Ray, Asok; Wettergren, Thomas A
2018-06-01
This paper presents an information-theoretic performance analysis of passive sensor networks for detection of moving targets. The proposed method falls largely under the category of data-level information fusion in sensor networks. To this end, a measure of information contribution for sensors is formulated in a symbolic dynamics framework. The network information state is approximately represented as the largest principal component of the time series collected across the network. To quantify each sensor's contribution to the generation of the information content, Markov machine models as well as x-Markov (pronounced cross-Markov) machine models, conditioned on the network information state, are constructed; the difference between the conditional entropies of these machines is then treated as an approximate measure of the information contribution of the respective sensors. The x-Markov models represent the conditional temporal statistics given the network information state. The proposed method has been validated on experimental data collected from a local area network of passive sensors for target detection, where the statistical characteristics of environmental disturbances are similar to those of the target signal in the sense of time scale and texture. A distinctive feature of the proposed algorithm is that the network decisions are independent of the behavior and identity of the individual sensors, which is desirable from a computational perspective. Results are presented to demonstrate the proposed method's efficacy in correctly identifying the presence of a target with very low false-alarm rates. The performance of the underlying algorithm is compared with that of a recent data-driven, feature-level information fusion algorithm. It is shown that the proposed algorithm outperforms the other algorithm.
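As a toy illustration of the entropy bookkeeping behind this measure (a drastically simplified first-order version, not the authors' symbolic-dynamics construction; the symbol streams below are invented):

```python
from collections import Counter
from math import log2

def conditional_entropy(pairs):
    """H(next | context) estimated from (context, next) symbol pairs."""
    joint, context = Counter(pairs), Counter(c for c, _ in pairs)
    n = len(pairs)
    return -sum((cnt / n) * log2(cnt / context[c]) for (c, _), cnt in joint.items())

sensor = "aabbaabbaabab"   # toy symbolized sensor stream
network = "xxyyxxyyxxyxy"  # toy symbolized network information state

h_markov = conditional_entropy(list(zip(sensor, sensor[1:])))    # Markov machine
h_xmarkov = conditional_entropy(list(zip(network, sensor[1:])))  # x-Markov: conditioned on network state
print(h_markov - h_xmarkov)  # rough proxy for the sensor's information contribution
```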
Li, Dingcheng; Sohn, Sunghwan; Wu, Stephen Tze-Inn; Wagholikar, Kavishwar; Torii, Manabu; Liu, Hongfang
2012-01-01
Objective This paper describes the coreference resolution system submitted by Mayo Clinic for the 2011 i2b2/VA/Cincinnati shared task Track 1C. The goal of the task was to construct a system that links the markables corresponding to the same entity. Materials and methods The task organizers provided progress notes and discharge summaries that were annotated with the markables of treatment, problem, test, person, and pronoun. We used a multi-pass sieve algorithm that applies deterministic rules in the order of preciseness and simultaneously gathers information about the entities in the documents. Our system, MedCoref, also uses a state-of-the-art machine learning framework as an alternative to the final, rule-based pronoun resolution sieve. Results The best system that uses a multi-pass sieve has an overall score of 0.836 (average of B3, MUC, Blanc, and CEAF F score) for the training set and 0.843 for the test set. Discussion A supervised machine learning system that typically uses a single function to find coreferents cannot accommodate irregularities encountered in data especially given the insufficient number of examples. On the other hand, a completely deterministic system could lead to a decrease in recall (sensitivity) when the rules are not exhaustive. The sieve-based framework allows one to combine reliable machine learning components with rules designed by experts. Conclusion Using relatively simple rules, part-of-speech information, and semantic type properties, an effective coreference resolution system could be designed. The source code of the system described is available at https://sourceforge.net/projects/ohnlp/files/MedCoref. PMID:22707745
SU-D-BRB-05: Quantum Learning for Knowledge-Based Response-Adaptive Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Naqa, I; Ten, R
Purpose: There is tremendous excitement in radiotherapy about applying data-driven methods to develop personalized clinical decisions for real-time response-based adaptation. However, classical statistical learning methods lack efficiency and the ability to predict outcomes under conditions of uncertainty and incomplete information. Therefore, we are investigating physics-inspired machine learning approaches by utilizing quantum principles for developing a robust framework to dynamically adapt treatments to individual patients' characteristics and optimize outcomes. Methods: We studied 88 liver SBRT patients, with 35 on non-adaptive and 53 on adaptive protocols. Adaptation was based on liver function using a split course of 3+2 fractions with a month break. The radiotherapy environment was modeled as a Markov decision process (MDP) of baseline and one-month-into-treatment states. The patient environment was modeled by a 5-variable state represented by the patient's clinical and dosimetric covariates. For comparison of classical and quantum learning methods, decision-making to adapt at one month was considered. The MDP objective was defined by the complication-free tumor control (P+ = TCP x (1 - NTCP)). A simple regression model represented state-action mapping. A single bit in the classical MDP and a qubit of 2 superimposed states in the quantum MDP represented the decision actions. Classical decision selection was done using reinforcement Q-learning, and quantum searching was performed using Grover's algorithm, which applies a uniform superposition over possible states and yields a quadratic speed-up. Results: Classical/quantum MDPs suggested adaptation (probability amplitude ≥0.5) 79% of the time for split courses and 100% for continuous courses. However, the classical MDP had an average adaptation probability of 0.5±0.22 while the quantum algorithm reached 0.76±0.28. In cases where adaptation failed, the classical MDP yielded a 0.31±0.26 average amplitude while the quantum approach averaged a more optimistic 0.57±0.4, but with high phase fluctuations. Conclusion: Our results demonstrate that quantum machine learning approaches provide a feasible and promising framework for real-time and sequential clinical decision-making in adaptive radiotherapy.
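The classical arm of this comparison uses tabular Q-learning. A minimal sketch of that baseline follows; the two-state toy environment and reward model are invented stand-ins for the paper's MDP and patient data, not the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 2, 2      # e.g., {baseline, one-month} x {continue, adapt}; hypothetical
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2

def step(state, action):
    # Hypothetical environment: reward stands in for P+ = TCP*(1-NTCP); not the paper's model.
    reward = rng.normal(0.7 if action == 1 else 0.5, 0.1)
    return (state + 1) % n_states, reward

state = 0
for _ in range(5000):
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    nxt, r = step(state, action)
    # Standard Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
    Q[state, action] += alpha * (r + gamma * Q[nxt].max() - Q[state, action])
    state = nxt

print(Q)  # learned action values; 'adapt' should dominate under this toy reward
```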
Hybrid EEG-EOG brain-computer interface system for practical machine control.
Punsawad, Yunyong; Wongsawat, Yodchanan; Parnichkun, Manukid
2010-01-01
Practical issues such as accuracy across subjects, the number of sensors, and training time are important problems for existing brain-computer interface (BCI) systems. In this paper, we propose a hybrid framework for the BCI system that can make machine control more practical. The electrooculogram (EOG) is employed to control the machine in the left and right directions while the electroencephalogram (EEG) is employed to control the forward, no-action, and complete-stop motions of the machine. By using only 2-channel biosignals, an average classification accuracy of more than 95% can be achieved.
Using machine learning to emulate human hearing for predictive maintenance of equipment
NASA Astrophysics Data System (ADS)
Verma, Dinesh; Bent, Graham
2017-05-01
At the current time, interfaces between humans and machines use only a limited subset of senses that humans are capable of. The interaction among humans and computers can become much more intuitive and effective if we are able to use more senses, and create other modes of communicating between them. New machine learning technologies can make this type of interaction become a reality. In this paper, we present a framework for a holistic communication between humans and machines that uses all of the senses, and discuss how a subset of this capability can allow machines to talk to humans to indicate their health for various tasks such as predictive maintenance.
Kim, Eun Young; Magnotta, Vincent A; Liu, Dawei; Johnson, Hans J
2014-09-01
Machine learning (ML)-based segmentation methods are a common technique in the medical image processing field. Although numerous research groups have investigated ML-based segmentation frameworks, questions remain about the performance variability introduced by the choice of two key components: the ML algorithm and the intensity normalization. This investigation reveals that the choice of those elements plays a major part in determining segmentation accuracy and generalizability. The approach used in this study evaluates the relative benefits of the two elements within a subcortical MRI segmentation framework. Experiments were conducted to contrast eight machine learning algorithm configurations and 11 normalization strategies for our brain MR segmentation framework. For intensity normalization, a Stable Atlas-based Mapped Prior (STAMP) was utilized to better account for contrast along boundaries of structures. Comparing eight machine learning algorithms on down-sampled segmentation MR data, a significant improvement was obtained using ensemble-based ML algorithms (i.e., random forest) or ANN algorithms. Further investigation between these two algorithms revealed that the random forest results provided exceptionally good agreement with manual delineations by experts. Additional experiments showed that STAMP-based intensity normalization also improved the robustness of segmentation for multicenter data sets. The constructed framework obtained good multicenter reliability and was successfully applied to a large multicenter MR data set (n>3000). Less than 10% of automated segmentations were recommended for minimal expert intervention. These results demonstrate the feasibility of using ML-based segmentation tools for processing large amounts of multicenter MR images. We demonstrated dramatically different segmentation accuracy profiles according to the choice of ML algorithm and intensity normalization.
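A hedged sketch of the kind of algorithm comparison described, contrasting a random forest against an ANN with cross-validation; the synthetic features below are stand-ins for the study's MR intensities and atlas priors, not its data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Stand-in for voxel-wise features; real inputs would be normalized MR intensities.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)

for name, clf in [("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```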
Kindermans, Pieter-Jan; Tangermann, Michael; Müller, Klaus-Robert; Schrauwen, Benjamin
2014-06-01
Most BCIs have to undergo a calibration session in which data are recorded to train decoders with machine learning. Only recently have zero-training methods become a subject of study. This work proposes a probabilistic framework for BCI applications which exploit event-related potentials (ERPs). For the example of a visual P300 speller we show how the framework harvests the structure suitable to solve the decoding task by (a) transfer learning, (b) unsupervised adaptation, (c) a language model and (d) dynamic stopping. A simulation study compares the proposed probabilistic zero-training framework (using transfer learning and task structure) to a state-of-the-art supervised model on n = 22 subjects. The individual influence of the involved components (a)-(d) is investigated. Without any need for a calibration session, the probabilistic zero-training framework with inter-subject transfer learning shows excellent performance, competitive with a state-of-the-art supervised method using calibration. Its decoding quality is carried mainly by the effect of transfer learning in combination with continuous unsupervised adaptation. A high-performing zero-training BCI is within reach for one of the most popular BCI paradigms: ERP spelling. Recording calibration data for a supervised BCI would require valuable time which is lost for spelling. The time spent on calibration would allow a novel user to spell 29 symbols with our unsupervised approach. It could be of use for various clinical and non-clinical ERP applications of BCI.
NASA Astrophysics Data System (ADS)
Kindermans, Pieter-Jan; Tangermann, Michael; Müller, Klaus-Robert; Schrauwen, Benjamin
2014-06-01
Objective. Most BCIs have to undergo a calibration session in which data are recorded to train decoders with machine learning. Only recently have zero-training methods become a subject of study. This work proposes a probabilistic framework for BCI applications which exploit event-related potentials (ERPs). For the example of a visual P300 speller we show how the framework harvests the structure suitable to solve the decoding task by (a) transfer learning, (b) unsupervised adaptation, (c) a language model and (d) dynamic stopping. Approach. A simulation study compares the proposed probabilistic zero-training framework (using transfer learning and task structure) to a state-of-the-art supervised model on n = 22 subjects. The individual influence of the involved components (a)-(d) is investigated. Main results. Without any need for a calibration session, the probabilistic zero-training framework with inter-subject transfer learning shows excellent performance, competitive with a state-of-the-art supervised method using calibration. Its decoding quality is carried mainly by the effect of transfer learning in combination with continuous unsupervised adaptation. Significance. A high-performing zero-training BCI is within reach for one of the most popular BCI paradigms: ERP spelling. Recording calibration data for a supervised BCI would require valuable time which is lost for spelling. The time spent on calibration would allow a novel user to spell 29 symbols with our unsupervised approach. It could be of use for various clinical and non-clinical ERP applications of BCI.
Radiation tolerant combinational logic cell
NASA Technical Reports Server (NTRS)
Maki, Gary R. (Inventor); Whitaker, Sterling (Inventor); Gambles, Jody W. (Inventor)
2009-01-01
A system has a reduced sensitivity to Single Event Upset and/or Single Event Transients compared to traditional logic devices. In a particular embodiment, the system includes an input, a logic block, a bias stage, a state machine, and an output. The logic block is coupled to the input. The logic block is for implementing a logic function, receiving a data set via the input, and generating a result f by applying the data set to the logic function. The bias stage is coupled to the logic block. The bias stage is for receiving the result from the logic block and presenting it to the state machine. The state machine is coupled to the bias stage. The state machine is for receiving, via the bias stage, the result generated by the logic block. The state machine is configured to retain a state value for the system. The state value is typically based on the result generated by the logic block. The output is coupled to the state machine. The output is for providing the value stored by the state machine. Some embodiments of the invention produce dual-rail outputs Q and Q'. The logic block typically contains combinational logic and is similar, in size and transistor configuration, to a conventional CMOS combinational logic design. However, only a very small portion of the circuits of these embodiments is sensitive to Single Event Upset and/or Single Event Transients.
Experimental Machine Learning of Quantum States
NASA Astrophysics Data System (ADS)
Gao, Jun; Qiao, Lu-Feng; Jiao, Zhi-Qiang; Ma, Yue-Chi; Hu, Cheng-Qiu; Ren, Ruo-Jing; Yang, Ai-Lin; Tang, Hao; Yung, Man-Hong; Jin, Xian-Min
2018-06-01
Quantum information technologies provide promising applications in communication and computation, while machine learning has become a powerful technique for extracting meaningful structures in "big data." A crossover between quantum information and machine learning represents a new interdisciplinary area stimulating progress in both fields. Traditionally, a quantum state is characterized by quantum-state tomography, which is a resource-consuming process when scaled up. Here we experimentally demonstrate a machine-learning approach to construct a quantum-state classifier for identifying the separability of quantum states. We show that it is possible to experimentally train an artificial neural network to efficiently learn and classify quantum states, without the need of obtaining the full information of the states. We also show how adding a hidden layer of neurons to the neural network can significantly boost the performance of the state classifier. These results shed new light on how classification of quantum states can be achieved with limited resources, and represent a step towards machine-learning-based applications in quantum information processing.
Koopman Operator Framework for Time Series Modeling and Analysis
NASA Astrophysics Data System (ADS)
Surana, Amit
2018-01-01
We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.
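The abstract does not spell out how the linear model forms are identified from data; dynamic mode decomposition (DMD) is one standard data-driven route to approximate Koopman spectral properties, sketched here on an invented oscillatory series (not the paper's grid or activity data):

```python
import numpy as np

def dmd_eigs(series, rank=None):
    """Dynamic mode decomposition: fit the least-squares linear operator X' ~ A X;
    its eigenvalues approximate Koopman eigenvalues of the generating dynamics."""
    X, Y = series[:, :-1], series[:, 1:]          # snapshot pairs (columns are time steps)
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    r = rank or len(s)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s    # reduced operator U* Y V S^-1
    return np.linalg.eigvals(A_tilde)

t = np.linspace(0, 8 * np.pi, 400)
series = np.vstack([np.sin(t), np.cos(t)])        # toy oscillatory time series
print(dmd_eigs(series))  # complex-conjugate pair on the unit circle -> a pure oscillation
```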
A survey on adaptive engine technology for serious games
NASA Astrophysics Data System (ADS)
Rasim, Langi, Armein Z. R.; Munir, Rosmansyah, Yusep
2016-02-01
Serious games have become a priceless tool in learning because they can make abstract concepts appear more realistic through simulation. The problem faced is that players differ in their ability to play the games. This causes players to become frustrated if the game is too difficult or bored if it is too easy. Serious games contain non-player characters (NPCs). The NPCs should be able to adapt to the players in such a way that the players feel comfortable playing the games. Because of that, serious games development must involve an adaptive engine, achieved by applying a learning machine that can adapt to different players. The development of adaptive engines can be viewed in terms of frameworks and algorithms. Frameworks include those based on rules, plans, organization descriptions, player proficiency, and learning style and cognitive state. Algorithms include agent-based and non-agent-based approaches.
Computational consciousness: building a self-preserving organism.
Barros, Allan Kardec
2010-01-01
Consciousness has been a subject of growing interest in the neuroscience community. However, building machine models of it is quite challenging, as it involves many characteristics and properties of the human brain which are poorly defined or very abstract. Here I propose to use information theory (IT) to give a mathematical framework for understanding consciousness. For this reason, I use the term "computational". This work is grounded on some recent results on the use of IT to understand how the cortex codes information, where redundancy reduction plays a fundamental role. Basically, I propose a system, here called an "organism", whose strategy is to extract the maximal amount of information from the environment in order to survive. To highlight the proposed framework, I show a simple organism composed of a single neuron which adapts itself to the outside dynamics by taking into account its internal state, whose perception is understood here to be related to "feelings".
Karim, Ahmad; Salleh, Rosli; Khan, Muhammad Khurram
2016-01-01
The botnet phenomenon in smartphones is evolving with the proliferation of mobile phone technologies, after leaving an imperative impact on personal computers. A botnet refers to a network of computers, laptops, mobile devices, or tablets which is remotely controlled by cybercriminals to initiate various distributed coordinated attacks including spam emails, ad-click fraud, Bitcoin mining, Distributed Denial of Service (DDoS), disseminating other malware, and much more. Like traditional PC-based botnets, mobile botnets have the same operational impact, except that the targets are smartphone users. Therefore, it is important to uncover this security issue prior to its widespread adoption. We propose SMARTbot, a novel dynamic analysis framework augmented with machine learning techniques to automatically detect botnet binaries from a malicious corpus. SMARTbot is a component-based, off-device behavioral analysis framework which can generate a mobile botnet learning model by inducing the back-propagation method of Artificial Neural Networks. Moreover, this framework can detect mobile botnet binaries with remarkable accuracy even in the case of obfuscated program code. The results conclude that a classifier model based on simple logistic regression outperforms other machine learning classifiers for botnet app detection, i.e., 99.49% accuracy is achieved. Further, from manual inspection of the botnet dataset we have extracted interesting trends in those applications. As an outcome of this research, a mobile botnet dataset is devised, which will become a benchmark for future studies. PMID:26978523
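A minimal sketch of the reported best-performing configuration, a simple logistic-regression classifier; the synthetic features below are stand-ins for SMARTbot's off-device behavioral features (API calls, network statistics, and so on), not its dataset.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for behavioral features extracted from dynamic analysis runs.
X, y = make_classification(n_samples=3000, n_features=30, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # benign (0) vs botnet (1)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.4f}")
```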
Karim, Ahmad; Salleh, Rosli; Khan, Muhammad Khurram
2016-01-01
The botnet phenomenon in smartphones is evolving with the proliferation of mobile phone technologies, after leaving an imperative impact on personal computers. A botnet refers to a network of computers, laptops, mobile devices, or tablets which is remotely controlled by cybercriminals to initiate various distributed coordinated attacks including spam emails, ad-click fraud, Bitcoin mining, Distributed Denial of Service (DDoS), disseminating other malware, and much more. Like traditional PC-based botnets, mobile botnets have the same operational impact, except that the targets are smartphone users. Therefore, it is important to uncover this security issue prior to its widespread adoption. We propose SMARTbot, a novel dynamic analysis framework augmented with machine learning techniques to automatically detect botnet binaries from a malicious corpus. SMARTbot is a component-based, off-device behavioral analysis framework which can generate a mobile botnet learning model by inducing the back-propagation method of Artificial Neural Networks. Moreover, this framework can detect mobile botnet binaries with remarkable accuracy even in the case of obfuscated program code. The results conclude that a classifier model based on simple logistic regression outperforms other machine learning classifiers for botnet app detection, i.e., 99.49% accuracy is achieved. Further, from manual inspection of the botnet dataset we have extracted interesting trends in those applications. As an outcome of this research, a mobile botnet dataset is devised, which will become a benchmark for future studies.
Cognitive learning: a machine learning approach for automatic process characterization from design
NASA Astrophysics Data System (ADS)
Foucher, J.; Baderot, J.; Martinez, S.; Dervillé, A.; Bernard, G.
2018-03-01
Cutting-edge innovation requires accurate and fast process control to obtain fast learning rates and industry adoption. Current tools available for this task are mainly manual and user-dependent. We present in this paper cognitive learning, a new machine-learning-based technique that facilitates and speeds up complex characterization by using the design as input, providing fast training and detection times. We focus on the machine learning framework that enables object detection, defect traceability, and automatic measurement tools.
A control technology evaluation of state-of-the-art, perchloroethylene dry-cleaning machines.
Earnest, G Scott
2002-05-01
NIOSH researchers evaluated the ability of fifth-generation dry-cleaning machines to control occupational exposure to perchloroethylene (PERC). Use of these machines is mandated in some countries; however, less than 1 percent of all U.S. shops have them. A study was conducted at a U.S. dry-cleaning shop where two fifth-generation machines were used. Both machines had a refrigerated condenser as a primary control and a carbon adsorber as a secondary control to recover PERC vapors during the dry cycle. These machines were designed to lower the PERC concentration in the cylinder at the end of the dry cycle to below 290 ppm. A single-beam infrared photometer continuously monitors the PERC concentration in the machine cylinder, and a door interlock prevents opening until the concentration is below 290 ppm. Personal breathing zone air samples were measured for the machine operator and presser. The operator had time-weighted average (TWA) PERC exposures that were less than 2 ppm. Highest exposures occurred during loading and unloading the machine and when performing routine machine maintenance. All presser samples were below the limit of detection. Real-time video exposure monitoring showed that the operator had peak exposures near 160 ppm during loading and unloading the machine (below the OSHA maximum of 300 ppm). This exposure (160 ppm) is an order of magnitude lower than exposures with more traditional machines that are widely used in the United States. The evaluated machines were very effective at reducing TWA PERC exposures as well as peak exposures that occur during machine loading and unloading. State-of-the-art dry-cleaning machines equipped with refrigerated condensers, carbon adsorbers, drum monitors, and door interlocks can provide substantially better protection than more traditional machines that are widely used in the United States.
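The time-weighted averages reported above combine concentrations over sampling intervals as TWA = Σ(cᵢ·tᵢ) / 8 h. A worked example with invented numbers (illustrative only, not the study's measurements):

```python
# 8-hour time-weighted average from interval samples: TWA = sum(c_i * t_i) / 8 h.
# The concentrations (ppm) and durations (h) below are illustrative, not the study's data.
samples = [(160, 0.25),  # brief loading/unloading peak
           (2, 6.75),    # routine operation
           (20, 1.0)]    # machine maintenance
twa = sum(c * t for c, t in samples) / 8.0
print(f"TWA = {twa:.2f} ppm")  # ~9.2 ppm: short peaks contribute little to the 8-h average
```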
Kuriyama, Soichi; Terui, Yuichi; Higuchi, Daisuke; Goto, Daisuke; Hotta, Yasuhiro; Manabe, Atsufumi; Miyazaki, Takashi
2011-01-01
A novel method was developed to fabricate all-ceramic restorations in which a CAD/CAM-fabricated machinable ceramic is bonded to a CAD/CAM-fabricated zirconia framework using resin cement. The feasibility of this fabrication method was assessed in this study by investigating the bonding strength of a machinable ceramic to zirconia. A machinable ceramic was bonded to a zirconia plate using three kinds of resin cements: ResiCem (RE), Panavia (PA), and Multilink (ML). Conventional porcelain-fused-to-zirconia specimens were also prepared to serve as a control. The shear bond strength test (SBT) and the Schwickerath crack initiation test (SCT) were carried out. SBT revealed that PA (40.42 MPa) yielded a significantly higher bonding strength than RE (28.01 MPa) and ML (18.89 MPa). SCT revealed that the bonding strengths of the test groups using resin cement were significantly higher than those of the control. Notably, the bonding strengths of RE and ML remained above 25 MPa even after 10,000 thermal cycles, adequately meeting the ISO 9693 standard for metal-ceramic restorations. These results affirmed the feasibility of the novel fabrication method, in which a CAD/CAM-fabricated machinable ceramic is bonded to a CAD/CAM-fabricated zirconia framework using a resin cement.
A data-driven multi-model methodology with deep feature selection for short-term wind forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias
With growing wind penetration into power systems worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first-layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by the first-layer models and generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. The developed multi-model wind forecasting methodology is compared to several benchmarks. The effectiveness of the proposed methodology is evaluated for 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that, compared to single-algorithm models, the developed multi-model framework with the deep feature selection procedure improves forecasting accuracy by up to 30%.
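The two-layer blend described is close in spirit to stacked generalization. A hedged sketch with scikit-learn's StackingRegressor on synthetic stand-in data follows; the paper's deep feature selection step is omitted, and the choice of first-layer learners here is illustrative, not the paper's.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

# Stand-in for lagged wind-speed features; real inputs would come from feature selection.
X, y = make_regression(n_samples=1500, n_features=12, noise=5.0, random_state=0)

first_layer = [("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
               ("knn", KNeighborsRegressor(n_neighbors=10))]
blender = StackingRegressor(estimators=first_layer, final_estimator=Ridge())  # second layer
print(cross_val_score(blender, X, y, cv=3, scoring="neg_root_mean_squared_error").mean())
```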
Roetker, Nicholas S; Page, C David; Yonker, James A; Chang, Vicky; Roan, Carol L; Herd, Pamela; Hauser, Taissa S; Hauser, Robert M; Atwood, Craig S
2013-10-01
We examined depression within a multidimensional framework consisting of genetic, environmental, and sociobehavioral factors and, using machine learning algorithms, explored interactions among these factors that might better explain the etiology of depressive symptoms. We measured current depressive symptoms using the Center for Epidemiologic Studies Depression Scale (n = 6378 participants in the Wisconsin Longitudinal Study). Genetic factors were 78 single nucleotide polymorphisms (SNPs); environmental factors were 13 stressful life events (SLEs), plus a composite proportion-of-SLEs index; and sociobehavioral factors were 18 personality, intelligence, and other health or behavioral measures. We performed traditional SNP associations via logistic regression likelihood ratio testing and explored interactions with support vector machines and Bayesian networks. After correction for multiple testing, we found no significant single genotypic associations with depressive symptoms. Machine learning algorithms showed no evidence of interactions. Naïve Bayes produced the best models in both subsets and included only environmental and sociobehavioral factors. We found no single or interactive associations between genetic factors and depressive symptoms. Various environmental and sociobehavioral factors were more predictive of depressive symptoms, yet their impacts were independent of one another. A genome-wide analysis of genetic alterations using machine learning methodologies will provide a framework for identifying genetic-environmental-sociobehavioral interactions in depressive symptoms.
Short-Term Global Horizontal Irradiance Forecasting Based on Sky Imaging and Pattern Recognition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Brian S; Feng, Cong; Cui, Mingjian
Accurate short-term forecasting is crucial for solar integration in the power grid. In this paper, a classification forecasting framework based on pattern recognition is developed for 1-hour-ahead global horizontal irradiance (GHI) forecasting. Three sets of models in the forecasting framework are trained by the data partitioned from the preprocessing analysis. The first two sets of models forecast GHI for the first four daylight hours of each day. Then the GHI values in the remaining hours are forecasted by an optimal machine learning model determined based on a weather pattern classification model in the third model set. The weather pattern is determined by a support vector machine (SVM) classifier. The developed framework is validated by the GHI and sky imaging data from the National Renewable Energy Laboratory (NREL). Results show that the developed short-term forecasting framework outperforms the persistence benchmark by 16% in terms of the normalized mean absolute error and 25% in terms of the normalized root mean square error.
Craniux: A LabVIEW-Based Modular Software Framework for Brain-Machine Interface Research
Degenhart, Alan D.; Kelly, John W.; Ashmore, Robin C.; Collinger, Jennifer L.; Tyler-Kabara, Elizabeth C.; Weber, Douglas J.; Wang, Wei
2011-01-01
This paper presents “Craniux,” an open-access, open-source software framework for brain-machine interface (BMI) research. Developed in LabVIEW, a high-level graphical programming environment, Craniux offers both out-of-the-box functionality and a modular BMI software framework that is easily extendable. Specifically, it allows researchers to take advantage of multiple features inherent to the LabVIEW environment for on-the-fly data visualization, parallel processing, multithreading, and data saving. This paper introduces the basic features and system architecture of Craniux and describes the validation of the system under real-time BMI operation using simulated and real electrocorticographic (ECoG) signals. Our results indicate that Craniux is able to operate consistently in real time, enabling a seamless work flow to achieve brain control of cursor movement. The Craniux software framework is made available to the scientific research community to provide a LabVIEW-based BMI software platform for future BMI research and development. PMID:21687575
Craniux: a LabVIEW-based modular software framework for brain-machine interface research.
Degenhart, Alan D; Kelly, John W; Ashmore, Robin C; Collinger, Jennifer L; Tyler-Kabara, Elizabeth C; Weber, Douglas J; Wang, Wei
2011-01-01
This paper presents "Craniux," an open-access, open-source software framework for brain-machine interface (BMI) research. Developed in LabVIEW, a high-level graphical programming environment, Craniux offers both out-of-the-box functionality and a modular BMI software framework that is easily extendable. Specifically, it allows researchers to take advantage of multiple features inherent to the LabVIEW environment for on-the-fly data visualization, parallel processing, multithreading, and data saving. This paper introduces the basic features and system architecture of Craniux and describes the validation of the system under real-time BMI operation using simulated and real electrocorticographic (ECoG) signals. Our results indicate that Craniux is able to operate consistently in real time, enabling a seamless work flow to achieve brain control of cursor movement. The Craniux software framework is made available to the scientific research community to provide a LabVIEW-based BMI software platform for future BMI research and development.
An Extended Proof-Carrying Code Framework for Security Enforcement
NASA Astrophysics Data System (ADS)
Pirzadeh, Heidar; Dubé, Danny; Hamou-Lhadj, Abdelwahab
The rapid growth of the Internet has resulted in increased attention to security to protect users from being victims of security threats. In this paper, we focus on security mechanisms that are based on Proof-Carrying Code (PCC) techniques. In a PCC system, a code producer sends a code along with its safety proof to the consumer. The consumer executes the code only if the proof is valid. Although PCC has been shown to be a useful security framework, it suffers from the sheer size of typical proofs: proofs of even small programs can be considerably large. In this paper, we propose an extended PCC framework (EPCC) in which, instead of the proof, a proof generator for the program in question is transmitted. This framework enables the execution of the proof generator and the recovery of the proof on the consumer's side in a secure manner, using a newly created virtual machine called the VEP (Virtual Machine for Extended PCC).
Hybrid Optimization Parallel Search PACKage
DOE Office of Scientific and Technical Information (OSTI.GOV)
2009-11-10
HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
Machine learning assembly landscapes from particle tracking data.
Long, Andrew W; Zhang, Jie; Granick, Steve; Ferguson, Andrew L
2015-11-07
Bottom-up self-assembly offers a powerful route for the fabrication of novel structural and functional materials. Rational engineering of self-assembling systems requires understanding of the accessible aggregation states and the structural assembly pathways. In this work, we apply nonlinear machine learning to experimental particle tracking data to infer low-dimensional assembly landscapes mapping the morphology, stability, and assembly pathways of accessible aggregates as a function of experimental conditions. To the best of our knowledge, this represents the first time that collective order parameters and assembly landscapes have been inferred directly from experimental data. We apply this technique to the nonequilibrium self-assembly of metallodielectric Janus colloids in an oscillating electric field, and quantify the impact of field strength, oscillation frequency, and salt concentration on the dominant assembly pathways and terminal aggregates. This combined computational and experimental framework furnishes new understanding of self-assembling systems, and quantitatively informs rational engineering of experimental conditions to drive assembly along desired aggregation pathways.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, April M; Nagle, Nicholas N; Piburn, Jesse O
As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for detailed information regarding residential energy consumption patterns has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy consumption, the majority of techniques are highly dependent on region-specific data sources and often require building- or dwelling-level details that are not publicly available for many regions in the United States. Furthermore, many existing methods do not account for errors in input data sources and may not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more general hybrid approach to high-resolution residential electricity consumption modeling by merging a dasymetric model with a complementary machine learning algorithm. The method's flexible data requirements and statistical framework ensure that the model both is applicable to a wide range of regions and considers errors in input data sources.
NASA Technical Reports Server (NTRS)
Al-Jaar, Robert Y.; Desrochers, Alan A.
1989-01-01
The main objective of this research is to develop a generic modeling methodology with a flexible and modular framework to aid in the design and performance evaluation of integrated manufacturing systems using a unified model. After a thorough examination of the available modeling methods, the Petri net approach was adopted. The concurrent and asynchronous nature of manufacturing systems is easily captured by Petri net models. Three basic modules were developed: machine, buffer, and Decision Making Unit. The machine and buffer modules are used for modeling transfer lines and production networks. The Decision Making Unit models the functions of a computer node in a complex Decision Making Unit architecture. The underlying model is a Generalized Stochastic Petri Net (GSPN) that can be used for performance evaluation and structural analysis. GSPNs were chosen because they help manage the complexity of modeling large manufacturing systems. There is no need to enumerate all the possible states of the Markov chain since they are automatically generated from the GSPN model.
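To make the machine/buffer module idea concrete, here is a minimal token-game sketch of an untimed Petri net for a two-machine transfer line. It illustrates the formalism only; it is not the authors' GSPN methodology, which additionally attaches stochastic firing times and performance analysis.

```python
# Minimal untimed Petri-net "token game" for a two-machine transfer line:
# raw parts -> machine M1 -> buffer -> machine M2 -> done.
places = {"raw": 3, "m1_idle": 1, "buffer": 0, "m2_idle": 1, "done": 0}
transitions = {
    "m1_process": (("raw", "m1_idle"), ("buffer", "m1_idle")),
    "m2_process": (("buffer", "m2_idle"), ("done", "m2_idle")),
}

def fire(name):
    """Fire a transition if every one of its input places holds a token."""
    pre, post = transitions[name]
    if not all(places[p] > 0 for p in pre):
        return False
    for p in pre:
        places[p] -= 1   # consume input tokens
    for p in post:
        places[p] += 1   # produce output tokens
    return True

# Fire until no transition is enabled (a GSPN would attach firing rates here).
while fire("m1_process") or fire("m2_process"):
    pass
print(places)  # all raw parts end up in 'done'
```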
NASA Astrophysics Data System (ADS)
Khawaja, Taimoor Saleem
A high-belief, low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear, non-Gaussian systems. The methodology assumes the availability of real-time process measurements, the definition of a set of fault indicators, and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVMs are founded on the principle of Structural Risk Minimization (SRM), which tends to find a good trade-off between low empirical risk and small capacity. The key features of SVMs are the use of non-linear kernels, the absence of local minima, the sparseness of the solution, and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on LS-SVM machines. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data, is able to distinguish between normal behavior and any abnormal or novel data during real-time operation. The results of the scheme are interpreted as a posterior probability of health (1 - probability of fault). As shown through two case studies in Chapter 3, the scheme is well suited for diagnosing imminent faults in dynamical non-linear systems. Finally, the failure prognosis scheme is based on an incremental weighted Bayesian LS-SVR machine. It is particularly suited for online deployment given the incremental nature of the algorithm and the quick optimization problem solved in the LS-SVR algorithm. By way of kernelization and a Gaussian Mixture Modeling (GMM) scheme, the algorithm can estimate "possibly" non-Gaussian posterior distributions for complex non-linear systems. An efficient regression scheme associated with the more rigorous core algorithm allows for long-term predictions, fault growth estimation with confidence bounds, and remaining useful life (RUL) estimation after a fault is detected.
The leading contributions of this thesis are (a) the development of a novel Bayesian Anomaly Detector for efficient and reliable Fault Detection and Identification (FDI) based on Least Squares Support Vector Machines, (b) the development of a data-driven real-time architecture for long-term Failure Prognosis using Least Squares Support Vector Machines, (c) Uncertainty representation and management using Bayesian Inference for posterior distribution estimation and hyper-parameter tuning, and finally (d) the statistical characterization of the performance of diagnosis and prognosis algorithms in order to relate the efficiency and reliability of the proposed schemes.
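The LS-SVM at the core of this framework reduces training to a single linear system (the standard Suykens regression formulation). A hedged numerical sketch with an RBF kernel on an invented degradation-like signal; the hyperparameters and data are illustrative, not the thesis's Bayesian-tuned values.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    K = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=2) / (2 * sigma ** 2))  # RBF kernel
    n = len(y)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]  # bias b and dual weights alpha

def lssvm_predict(Xtr, b, alpha, Xte, sigma=1.0):
    K = np.exp(-np.sum((Xte[:, None] - Xtr[None]) ** 2, axis=2) / (2 * sigma ** 2))
    return K @ alpha + b    # f(x) = sum_i alpha_i k(x, x_i) + b

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=80)   # invented stand-in signal
b, alpha = lssvm_train(X, y)
print(lssvm_predict(X, b, alpha, np.array([[0.5]])))  # close to sin(0.5)
```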
Robust evaluation of time series classification algorithms for structural health monitoring
NASA Astrophysics Data System (ADS)
Harvey, Dustin Y.; Worden, Keith; Todd, Michael D.
2014-03-01
Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and mechanical infrastructure through analysis of structural response measurements. The supervised learning methodology for data-driven SHM involves computation of low-dimensional, damage-sensitive features from raw measurement data that are then used in conjunction with machine learning algorithms to detect, classify, and quantify damage states. However, these systems often suffer from performance degradation in real-world applications due to varying operational and environmental conditions. Probabilistic approaches to robust SHM system design suffer from incomplete knowledge of all conditions a system will experience over its lifetime. Info-gap decision theory enables nonprobabilistic evaluation of the robustness of competing models and systems in a variety of decision making applications. Previous work employed info-gap models to handle feature uncertainty when selecting various components of a supervised learning system, namely features from a pre-selected family and classifiers. In this work, the info-gap framework is extended to robust feature design and classifier selection for general time series classification through an efficient, interval arithmetic implementation of an info-gap data model. Experimental results are presented for a damage type classification problem on a ball bearing in a rotating machine. The info-gap framework in conjunction with an evolutionary feature design system allows for fully automated design of a time series classifier to meet performance requirements under maximum allowable uncertainty.
A functional-based segmentation of human body scans in arbitrary postures.
Werghi, Naoufel; Xiao, Yijun; Siebert, Jan Paul
2006-02-01
This paper presents a general framework that aims to address the task of segmenting three-dimensional (3-D) scan data representing the human form into subsets which correspond to functional human body parts. Such a task is challenging due to the articulated and deformable nature of the human body. A salient feature of this framework is that it is able to cope with various body postures and is in addition robust to noise, holes, irregular sampling and rigid transformations. Although whole human body scanners are now capable of routinely capturing the shape of the whole body in machine readable format, they have not yet realized their potential to provide automatic extraction of key body measurements. Automated production of anthropometric databases is a prerequisite to satisfying the needs of certain industrial sectors (e.g., the clothing industry). This implies that in order to extract specific measurements of interest, whole body 3-D scan data must be segmented by machine into subsets corresponding to functional human body parts. However, previously reported attempts at automating the segmentation process suffer from various limitations, such as being restricted to a standard specific posture and being vulnerable to scan data artifacts. Our human body segmentation algorithm advances the state of the art to overcome the above limitations and we present experimental results obtained using both real and synthetic data that confirm the validity, effectiveness, and robustness of our approach.
2010-02-01
multi-agent reputation management. State abstraction is a technique used to allow machine learning technologies to cope with problems that have large...state abstraction process to enable reinforcement learning in domains with large state spaces. State abstraction is vital to machine learning...across a collective of independent platforms. These individual elements, often referred to as agents in the machine learning community, should exhibit both
A deep learning and novelty detection framework for rapid phenotyping in high-content screening
Sommer, Christoph; Hoefler, Rudolf; Samwer, Matthias; Gerlich, Daniel W.
2017-01-01
Supervised machine learning is a powerful and widely used method for analyzing high-content screening data. Despite its accuracy, efficiency, and versatility, supervised machine learning has drawbacks, most notably its dependence on a priori knowledge of expected phenotypes and time-consuming classifier training. We provide a solution to these limitations with CellCognition Explorer, a generic novelty detection and deep learning framework. Application to several large-scale screening data sets on nuclear and mitotic cell morphologies demonstrates that CellCognition Explorer enables discovery of rare phenotypes without user training, which has broad implications for improved assay development in high-content screening. PMID:28954863
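As an illustration of the novelty-detection idea (not CellCognition Explorer's actual implementation, which also involves a deep learning feature-extraction stage), here is a minimal sketch using a one-class SVM on cell-morphology features; the data and feature dimensions are synthetic stand-ins:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Fit on features of (mostly normal) control cells, then flag cells in the
# screen whose morphology features deviate. All numbers are illustrative.
rng = np.random.default_rng(0)
control = rng.normal(0, 1, size=(500, 16))            # control-cell features
screen = np.vstack([rng.normal(0, 1, size=(95, 16)),  # mostly normal cells
                    rng.normal(4, 1, size=(5, 16))])  # 5 rare phenotypes

detector = OneClassSVM(nu=0.05, gamma="scale").fit(control)
flags = detector.predict(screen)                      # -1 marks novel cells
print((flags == -1).sum(), "cells flagged as novel phenotypes")
```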
A Simple Universal Turing Machine for the Game of Life Turing Machine
NASA Astrophysics Data System (ADS)
Rendell, Paul
In this chapter we present a simple universal Turing machine which is small enough to fit into the design limits of the Turing machine built in Conway's Game of Life by the author. That limit is 8 symbols and 16 states. By way of comparison, we also describe one of the smallest known universal Turing machines, due to Rogozhin, which has 6 symbols and 4 states.
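For illustration, here is a minimal Turing machine simulator in Python; the transition table is the classic 2-state busy beaver, used only to show the mechanics, not Rendell's 8-symbol, 16-state machine or Rogozhin's 6-symbol, 4-state machine:

```python
# Minimal Turing machine simulator: sparse tape, blank symbol = 0.
def run_turing_machine(transitions, tape, state="A", head=0, max_steps=100):
    """transitions: (state, symbol) -> (write, move, next_state); move is -1/+1."""
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = tape.get(head, 0)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += move
    return [tape[i] for i in sorted(tape)]

# 2-state, 2-symbol busy beaver: halts after 6 steps leaving four 1s.
bb2 = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "HALT"),
}
print(run_turing_machine(bb2, [0]))  # -> [1, 1, 1, 1]
```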
Madec, Morgan; Pecheux, François; Gendrault, Yves; Rosati, Elise; Lallement, Christophe; Haiech, Jacques
2016-10-01
The topic of this article is the development of an open-source automated design framework for synthetic biology, specifically for the design of artificial gene regulatory networks based on a digital approach. Unlike other tools, GeNeDA is open-source online software based on existing tools used in microelectronics that have proven their efficiency over the last 30 years. The complete framework is composed of a computation core directly adapted from an Electronic Design Automation tool, input and output interfaces, a library of elementary parts that can be achieved with gene regulatory networks, and an interface with an electrical circuit simulator. Each of these modules is an extension of microelectronics tools and concepts: ODIN II, ABC, the Verilog language, the SPICE simulator, and SystemC-AMS. GeNeDA is first validated on a benchmark of several combinatorial circuits. The results highlight the importance of the part library. Then, this framework is used for the design of a sequential circuit including a biological state machine.
Turk, Dušan; Janjić, Vojko; Štern, Igor; Podobnik, Marjetka; Lamba, Doriano; Weis Dahl, Søren; Lauritzen, Connie; Pedersen, John; Turk, Vito; Turk, Boris
2001-01-01
Dipeptidyl peptidase I (DPPI) or cathepsin C is the physiological activator of groups of serine proteases from immune and inflammatory cells vital for defense of an organism. The structure presented shows how an additional domain transforms the framework of a papain-like endopeptidase into a robust oligomeric protease-processing enzyme. The tetrahedral arrangement of the active sites exposed to solvent allows approach of proteins in their native state; the massive body of the exclusion domain fastened within the tetrahedral framework excludes approach of a polypeptide chain apart from its termini; and the carboxylic group of Asp1 positions the N-terminal amino group of the substrate. Based on a structural comparison and interactions within the active site cleft, it is suggested that the exclusion domain originates from a metallo-protease inhibitor. The location of missense mutations, characterized in people suffering from Haim–Munk and Papillon–Lefevre syndromes, suggests how they disrupt the fold and function of the enzyme. PMID:11726493
ERIC Educational Resources Information Center
Hunt, Darwin P.
The use of systems theory as a conceptual framework is proposed as useful when considering computers as a machine component in teaching. Skinner's proposal that the label "computer" is inaccurate and counterproductive when used to refer to a machine being used for teaching is discussed. It is suggested that the alternative label…
The Couzens Machine. A Computerized Learning Exchange. Final Report, 1973-74.
ERIC Educational Resources Information Center
Davis, Ken, Comp.; Libengood, Richard, Comp.
The Couzens Machine is a computerized learning exchange and information service developed for the residents of Couzens Hall, a dormitory at the University of Michigan. Organized as a collective within the framework of a course and supported by an instructional development grant from the Center for Research on Learning and Teaching, the Couzens…
9. GENERAL INTERIOR VIEW OF THE VERTICAL FURNACE BUILDING (PART ...
9. GENERAL INTERIOR VIEW OF THE VERTICAL FURNACE BUILDING (PART OF MACHINE SHOP No. 2). TWO FURNACES, WITH THEIR SUPPORT FRAMEWORK, ARE VISIBLE TO THE RIGHT. THE TALL STRUCTURE IN THE CENTER TOWARD THE BACKGROUND IS THE VERTICAL QUENCH TOWER. - U.S. Steel Homestead Works, Machine Shop No. 2, Along Monongahela River, Homestead, Allegheny County, PA
A Discrete Approximation Framework for Hereditary Systems.
1980-05-01
schemes which are included in the general framework and which may be implemented directly on high-speed computing machines are developed. A numerical...an appropriately chosen Hilbert space. We then proceed to develop general approximation schemes for the solutions to the homogeneous AEE which in turn...rich classes of these schemes. In addition, two particular families of approximation schemes included in the general framework are developed and
Learning Physics-based Models in Hydrology under the Framework of Generative Adversarial Networks
NASA Astrophysics Data System (ADS)
Karpatne, A.; Kumar, V.
2017-12-01
Generative adversarial networks (GANs), which have been highly successful in a number of applications involving large volumes of labeled and unlabeled data, such as computer vision, offer huge potential for modeling the dynamics of physical processes that have traditionally been studied using simulations of physics-based models. While conventional physics-based models use labeled samples of input/output variables for model calibration (estimating the right parametric forms of relationships between variables) or data assimilation (identifying the most likely sequence of system states in dynamical systems), there is a greater opportunity to explore the full power of machine learning (ML) methods (e.g., GANs) for studying physical processes currently suffering from large knowledge gaps, e.g., ground-water flow. However, success in this endeavor requires a principled way of combining the strengths of ML methods with physics-based numerical models that are founded on a wealth of scientific knowledge. This is especially important in scientific domains like hydrology, where the number of data samples is small (relative to Internet-scale applications such as image recognition, where machine learning methods have found great success) and the physical relationships are complex (high-dimensional) and non-stationary. We will present a series of methods for guiding the learning of GANs using physics-based models, e.g., by using the outputs of physics-based models as input data to the generator-learner framework, and by using physics-based models as generators trained using validation data in the adversarial learning framework. These methods are being developed under the broad paradigm of theory-guided data science that we are developing to integrate scientific knowledge with data science methods for accelerating scientific discovery.
Prioritizing individual genetic variants after kernel machine testing using variable selection.
He, Qianchuan; Cai, Tianxi; Liu, Yang; Zhao, Ni; Harmon, Quaker E; Almli, Lynn M; Binder, Elisabeth B; Engel, Stephanie M; Ressler, Kerry J; Conneely, Karen N; Lin, Xihong; Wu, Michael C
2016-12-01
Kernel machine learning methods, such as the SNP-set kernel association test (SKAT), have been widely used to test associations between traits and genetic polymorphisms. In contrast to traditional single-SNP analysis methods, these methods are designed to examine the joint effect of a set of related SNPs (such as a group of SNPs within a gene or a pathway) and are able to identify sets of SNPs that are associated with the trait of interest. However, as with many multi-SNP testing approaches, kernel machine testing can draw conclusions only at the SNP-set level, and does not directly inform on which SNP(s) in the identified set are actually driving the associations. A recently proposed procedure, KerNel Iterative Feature Extraction (KNIFE), provides a general framework for incorporating variable selection into kernel machine methods. In this article, we focus on quantitative traits and relatively common SNPs, adapt the KNIFE procedure to genetic association studies, and propose an approach to identify driver SNPs after the application of SKAT to gene set analysis. Our approach accommodates several kernels that are widely used in SNP analysis, such as the linear kernel and the Identity by State (IBS) kernel. The proposed approach provides practically useful utilities to prioritize SNPs, and fills the gap between SNP-set analysis and biological functional studies. Both simulation studies and real data application are used to demonstrate the proposed approach. © 2016 Wiley Periodicals, Inc.
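The two kernels named above are simple to state; here is a minimal sketch with genotypes coded as minor-allele counts (0/1/2), one row per subject:

```python
import numpy as np

def linear_kernel(G):
    # Linear kernel on the genotype matrix: K = G G^T.
    return G @ G.T

def ibs_kernel(G):
    # Identity-by-State kernel: K_ij = sum_s (2 - |G_is - G_js|) / (2 * n_snps),
    # so identical genotypes give similarity 1 and opposite homozygotes give 0.
    diff = np.abs(G[:, None, :] - G[None, :, :])   # pairwise |G_is - G_js|
    return (2.0 - diff).sum(axis=2) / (2.0 * G.shape[1])

G = np.array([[0, 1, 2], [1, 1, 0], [2, 0, 1]])    # 3 subjects, 3 SNPs
print(ibs_kernel(G))                               # diagonal entries are 1.0
```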
Motion Evaluation for Rehabilitation Training of the Disabled
NASA Astrophysics Data System (ADS)
Kim, Tae-Young; Park, Jun; Lim, Cheol-Su
In this paper, a motion evaluation technique for rehabilitation training is introduced. Motion recognition technologies have been developed for determining matching motions in the training set. However, for evaluating training motions we need to measure how well and how much of the motion has been followed. We employed a finite state machine as the framework for motion evaluation. For similarity analysis, we used weighted angular value differences, although any template matching algorithm may be used. For robustness under illumination changes, IR LEDs and cameras with IR-pass filters were used. The developed technique was successfully used for rehabilitation training of the disabled. Therapists appraised the system as practically useful.
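A minimal sketch of the evaluation idea, assuming joint-angle vectors per video frame; the key poses, weights, and threshold below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def angular_score(pose, target, weights):
    # Weighted sum of absolute joint-angle differences (degrees).
    return np.sum(weights * np.abs(np.asarray(pose) - np.asarray(target)))

def evaluate_motion(frames, key_poses, weights, threshold=15.0):
    """Step an FSM through the key poses; advance when the pose matches."""
    state, scores = 0, []
    for pose in frames:                      # one joint-angle vector per frame
        score = angular_score(pose, key_poses[state], weights)
        if score < threshold:                # pose matched: advance the FSM
            scores.append(score)
            state += 1
            if state == len(key_poses):      # all key poses completed
                break
    completed = state / len(key_poses)       # fraction of the motion followed
    return completed, scores
```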
DOT National Transportation Integrated Search
2016-07-31
This report presents a novel framework for promptly assessing the probability of barge-bridge collision damage of piers based on probabilistic classification through machine learning. The main idea of the presented framework is to divide th...
ERIC Educational Resources Information Center
Beggrow, Elizabeth P.; Ha, Minsu; Nehm, Ross H.; Pearl, Dennis; Boone, William J.
2014-01-01
The landscape of science education is being transformed by the new "Framework for Science Education" (National Research Council, "A framework for K-12 science education: practices, crosscutting concepts, and core ideas." The National Academies Press, Washington, DC, 2012), which emphasizes the centrality of scientific…
DOT National Transportation Integrated Search
2015-12-01
The goal of this research is to develop a machine learning framework to predict the spatiotemporal impact of traffic accidents on the upstream traffic and surrounding region. The main objective of the framework is, given a road accident, to forec...
Semantic Framework of Internet of Things for Smart Cities: Case Studies.
Zhang, Ningyu; Chen, Huajun; Chen, Xi; Chen, Jiaoyan
2016-09-14
In recent years, the advancement of sensor technology has led to the generation of heterogeneous Internet-of-Things (IoT) data by smart cities. Thus, the development and deployment of various aspects of IoT-based applications are necessary to mine the potential value of data to the benefit of people and their lives. However, the variety, volume, heterogeneity, and real-time nature of data obtained from smart cities pose considerable challenges. In this paper, we propose a semantic framework that integrates the IoT with machine learning for smart cities. The proposed framework retrieves and models urban data for certain kinds of IoT applications based on semantic and machine-learning technologies. Moreover, we propose two case studies: pollution detection from vehicles and traffic pattern detection. The experimental results show that our system is scalable and capable of accommodating a large number of urban regions with different types of IoT applications.
Semantic Framework of Internet of Things for Smart Cities: Case Studies
Zhang, Ningyu; Chen, Huajun; Chen, Xi; Chen, Jiaoyan
2016-01-01
In recent years, the advancement of sensor technology has led to the generation of heterogeneous Internet-of-Things (IoT) data by smart cities. Thus, the development and deployment of various aspects of IoT-based applications are necessary to mine the potential value of data to the benefit of people and their lives. However, the variety, volume, heterogeneity, and real-time nature of data obtained from smart cities pose considerable challenges. In this paper, we propose a semantic framework that integrates the IoT with machine learning for smart cities. The proposed framework retrieves and models urban data for certain kinds of IoT applications based on semantic and machine-learning technologies. Moreover, we propose two case studies: pollution detection from vehicles and traffic pattern detection. The experimental results show that our system is scalable and capable of accommodating a large number of urban regions with different types of IoT applications. PMID:27649185
Improved biliary detection and diagnosis through intelligent machine analysis.
Logeswaran, Rajasvaran
2012-09-01
This paper reports on work undertaken to improve automated detection of bile ducts in magnetic resonance cholangiopancreatography (MRCP) images, with the objective of conducting preliminary classification of the images for diagnosis. The proposed I-BDeDIMA (Improved Biliary Detection and Diagnosis through Intelligent Machine Analysis) scheme is a multi-stage framework consisting of successive phases of image normalization, denoising, structure identification, object labeling, feature selection and disease classification. A combination of multiresolution wavelet, dynamic intensity thresholding, segment-based region growing, region elimination, statistical analysis and neural networks, is used in this framework to achieve good structure detection and preliminary diagnosis. Tests conducted on over 200 clinical images with known diagnosis have shown promising results of over 90% accuracy. The scheme outperforms related work in the literature, making it a viable framework for computer-aided diagnosis of biliary diseases. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Framework for assessing key variable dependencies in loose-abrasive grinding and polishing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J.S.; Aikens, D.M.; Brown, N.J.
1995-12-01
This memo describes a framework for identifying all key variables that determine the figuring performance of loose-abrasive lapping and polishing machines. This framework is intended as a tool for prioritizing R&D issues, assessing the completeness of process models and experimental data, and for providing a mechanism to identify any assumptions in analytical models or experimental procedures. Future plans for preparing analytical models or performing experiments can refer to this framework in establishing the context of the work.
NASA Astrophysics Data System (ADS)
Budi Harja, Herman; Prakosa, Tri; Raharno, Sri; Yuwana Martawirya, Yatna; Nurhadi, Indra; Setyo Nogroho, Alamsyah
2018-03-01
The production characteristics of the job-shop industry, in which products have wide variety but small volumes, mean that every machine tool is shared across production processes with dynamic loads. This dynamic operating condition directly affects the reliability of machine tool components. Hence, the maintenance schedule for each component should be calculated based on the actual usage of that component. This paper describes a study on the development of a monitoring system for obtaining information about the usage of each CNC machine tool component in real time, approached by grouping components according to their operation phase. A special device has been developed for monitoring machine tool component usage by utilizing usage-phase activity data taken from certain electronic components within the CNC machine. The components are the adaptor, servo driver, and spindle driver, as well as some additional components such as a microcontroller and relays. The obtained data are used to detect machine utilization phases such as the power-on state, machine-ready state, or spindle-running state. Experimental results have shown that the developed CNC machine tool monitoring system is capable of obtaining phase information on machine tool usage, as well as its duration, and displays the information in the user interface application.
Hardware support for software controlled fast multiplexing of performance counters
Salapura, Valentina; Wisniewski, Robert W
2013-10-01
Performance counters may be operable to collect one or more counts of one or more selected activities, and registers may be operable to store a set of performance counter configurations. A state machine may be operable to automatically select a register from the registers for reconfiguring the one or more performance counters in response to receiving a first signal. The state machine may be further operable to reconfigure the one or more performance counters based on a configuration specified in the selected register. The state machine yet further may be operable to copy data in selected one or more of the performance counters to a memory location, or to copy data from the memory location to the counters, in response to receiving a second signal. The state machine may be operable to store or restore the counter values and state machine configuration in response to a context switch event.
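A minimal software model of the mechanism described above; the class, method names, and event sets are illustrative assumptions, not the patented hardware design:

```python
class CounterMultiplexer:
    """State machine that multiplexes performance-counter configurations."""

    def __init__(self, configurations):
        self.configs = configurations      # the set of configuration registers
        self.index = 0                     # currently selected register
        self.saved = []                    # memory area for counter snapshots
        self._apply(self.configs[0])

    def _apply(self, config):
        self.counters = {event: 0 for event in config}   # reset active counters

    def signal_reconfigure(self):
        # "First signal": automatically select the next register and
        # reconfigure the counters from it.
        self.index = (self.index + 1) % len(self.configs)
        self._apply(self.configs[self.index])

    def signal_snapshot(self):
        # "Second signal": copy current counter values to memory.
        self.saved.append((self.index, dict(self.counters)))

mux = CounterMultiplexer([["cycles", "l1_miss"], ["branch_miss", "flops"]])
mux.counters["cycles"] += 1000
mux.signal_snapshot()
mux.signal_reconfigure()                   # now counting the second event set
```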
Hardware support for software controlled fast multiplexing of performance counters
Salapura, Valentina; Wisniewski, Robert W.
2013-01-01
Performance counters may be operable to collect one or more counts of one or more selected activities, and registers may be operable to store a set of performance counter configurations. A state machine may be operable to automatically select a register from the registers for reconfiguring the one or more performance counters in response to receiving a first signal. The state machine may be further operable to reconfigure the one or more performance counters based on a configuration specified in the selected register. The state machine yet further may be operable to copy data in selected one or more of the performance counters to a memory location, or to copy data from the memory location to the counters, in response to receiving a second signal. The state machine may be operable to store or restore the counter values and state machine configuration in response to a context switch event.
Allocating dissipation across a molecular machine cycle to maximize flux
Brown, Aidan I.; Sivak, David A.
2017-01-01
Biomolecular machines consume free energy to break symmetry and make directed progress. Nonequilibrium ATP concentrations are the typical free energy source, with one cycle of a molecular machine consuming a certain number of ATP, providing a fixed free energy budget. Since evolution is expected to favor rapid-turnover machines that operate efficiently, we investigate how this free energy budget can be allocated to maximize flux. Unconstrained optimization eliminates intermediate metastable states, indicating that flux is enhanced in molecular machines with fewer states. When maintaining a set number of states, we show that—in contrast to previous findings—the flux-maximizing allocation of dissipation is not even. This result is consistent with the coexistence of both “irreversible” and reversible transitions in molecular machine models that successfully describe experimental data, which suggests that, in evolved machines, different transitions differ significantly in their dissipation. PMID:29073016
ERIC Educational Resources Information Center
Maschietto, Michela
2015-01-01
This paper presents the analysis of two teaching experiments carried out in the context of the mathematics laboratory in a primary school (grades 3 and 4) with the use of the pascaline Zero + 1, an arithmetical machine. The teaching experiments are analysed by coordinating two theoretical frameworks, i.e. the instrumental approach and the Theory…
Toward a mathematical formalism of performance, task difficulty, and activation
NASA Technical Reports Server (NTRS)
Samaras, George M.
1988-01-01
The rudiments of a mathematical formalism for handling operational, physiological, and psychological concepts are developed for use by the man-machine system design engineer. The formalism provides a framework for developing a structured, systematic approach to the interface design problem, using existing mathematical tools, and simplifying the problem of telling a machine how to measure and use performance.
Cario, Clinton L; Witte, John S
2018-03-15
As whole-genome tumor sequence and biological annotation datasets grow in size, number and content, there is an increasing basic science and clinical need for efficient and accurate data management and analysis software. With the emergence of increasingly sophisticated data stores, execution environments and machine learning algorithms, there is also a need for the integration of functionality across frameworks. We present orchid, a Python-based software package for the management, annotation and machine learning of cancer mutations. Building on technologies of parallel workflow execution, in-memory database storage and machine learning analytics, orchid efficiently handles millions of mutations and hundreds of features in an easy-to-use manner. We describe the implementation of orchid and demonstrate its ability to distinguish tissue of origin in 12 tumor types based on 339 features using a random forest classifier. Orchid and our annotated tumor mutation database are freely available at https://github.com/wittelab/orchid. Software is implemented in Python 2.7, and makes use of MySQL or MemSQL databases. Groovy 2.4.5 is optionally required for parallel workflow execution. JWitte@ucsf.edu. Supplementary data are available at Bioinformatics online.
Towards a framework of human factors certification of complex human-machine systems
NASA Technical Reports Server (NTRS)
Bukasa, Birgit
1994-01-01
As long as total automation is not realized, the combination of technical and social components in man-machine systems demands contributions not only from engineers but, at least to an equal extent, from behavioral scientists. This has been neglected for far too long. The psychological, social and cultural aspects of technological innovations were almost totally overlooked. Yet, along with expected safety improvements, the institutionalization of human factors is on the way. The introduction of human factors certification of complex man-machine systems will be a milestone in this process.
Rahman, Md Mahmudur; Bhattacharya, Prabir; Desai, Bipin C
2007-01-01
A content-based image retrieval (CBIR) framework for a diverse collection of medical images of different imaging modalities, anatomic regions with different orientations, and biological systems is proposed. The organization of images in such a database (DB) is well defined with predefined semantic categories; hence, it can be useful for category-specific searching. The proposed framework consists of machine learning methods for image prefiltering, similarity matching using statistical distance measures, and a relevance feedback (RF) scheme. To narrow the semantic gap and increase retrieval efficiency, we investigate both supervised and unsupervised learning techniques to associate low-level global image features (e.g., color, texture, and edge) in the projected PCA-based eigenspace with their high-level semantic and visual categories. Specifically, we explore the use of a probabilistic multiclass support vector machine (SVM) and fuzzy c-means (FCM) clustering for categorization and prefiltering of images to reduce the search space. A category-specific statistical similarity matching is proposed at a finer level on the prefiltered images. To incorporate a better perception subjectivity, an RF mechanism is also added to update the query parameters dynamically and adjust the proposed matching functions. Experiments are based on a ground-truth DB consisting of 5000 diverse medical images of 20 predefined categories. Analysis of results based on cross-validation (CV) accuracy and precision-recall for image categorization and retrieval is reported. It demonstrates the improvement, effectiveness, and efficiency achieved by the proposed framework.
Multicopy programmable discrimination of general qubit states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sentis, G.; Bagan, E.; Calsamiglia, J.
2010-10-15
Quantum state discrimination is a fundamental primitive in quantum statistics where one has to correctly identify the state of a system that is in one of two possible known states. A programmable discrimination machine performs this task when the pair of possible states is not a priori known but instead the two possible states are provided through two respective program ports. We study optimal programmable discrimination machines for general qubit states when several copies of states are available in the data or program ports. Two scenarios are considered: one in which the purity of the possible states is a priori known, and the fully universal one where the machine operates over generic mixed states of unknown purity. We find analytical results for both the unambiguous and minimum error discrimination strategies. This allows us to calculate the asymptotic performance of programmable discrimination machines when a large number of copies are provided and to recover the standard state discrimination and state comparison values as different limiting cases.
Roetker, Nicholas S.; Yonker, James A.; Chang, Vicky; Roan, Carol L.; Herd, Pamela; Hauser, Taissa S.; Hauser, Robert M.
2013-01-01
Objectives. We examined depression within a multidimensional framework consisting of genetic, environmental, and sociobehavioral factors and, using machine learning algorithms, explored interactions among these factors that might better explain the etiology of depressive symptoms. Methods. We measured current depressive symptoms using the Center for Epidemiologic Studies Depression Scale (n = 6378 participants in the Wisconsin Longitudinal Study). Genetic factors were 78 single nucleotide polymorphisms (SNPs); environmental factors—13 stressful life events (SLEs), plus a composite proportion of SLEs index; and sociobehavioral factors—18 personality, intelligence, and other health or behavioral measures. We performed traditional SNP associations via logistic regression likelihood ratio testing and explored interactions with support vector machines and Bayesian networks. Results. After correction for multiple testing, we found no significant single genotypic associations with depressive symptoms. Machine learning algorithms showed no evidence of interactions. Naïve Bayes produced the best models in both subsets and included only environmental and sociobehavioral factors. Conclusions. We found no single or interactive associations with genetic factors and depressive symptoms. Various environmental and sociobehavioral factors were more predictive of depressive symptoms, yet their impacts were independent of one another. A genome-wide analysis of genetic alterations using machine learning methodologies will provide a framework for identifying genetic–environmental–sociobehavioral interactions in depressive symptoms. PMID:23927508
Classification of large-sized hyperspectral imagery using fast machine learning algorithms
NASA Astrophysics Data System (ADS)
Xia, Junshi; Yokoya, Naoto; Iwasaki, Akira
2017-07-01
We present a framework of fast machine learning algorithms in the context of large-sized hyperspectral image classification, from the theoretical to a practical viewpoint. In particular, we assess the performance of random forest (RF), rotation forest (RoF), and extreme learning machine (ELM), as well as ensembles of RF and ELM. These classifiers are applied to two large-sized hyperspectral images and compared to support vector machines. To give a quantitative analysis, we compare these methods when working with high input dimensions and a limited/sufficient training set. Moreover, other important issues such as computational cost and robustness against noise are also discussed.
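Of the classifiers compared above, the extreme learning machine is simple enough to sketch in a few lines: the hidden layer is random and only the output weights are solved, in closed form. A minimal sketch assuming integer class labels; the paper's tuning and ensembling are omitted:

```python
import numpy as np

class ELM:
    """Extreme learning machine: random hidden layer + least-squares readout."""

    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = y.max() + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)        # random nonlinear feature map
        T = np.eye(n_classes)[y]                # one-hot targets
        self.beta = np.linalg.pinv(H) @ T       # closed-form least squares
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

X = np.random.default_rng(1).normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print("train accuracy:", (ELM().fit(X, y).predict(X) == y).mean())
```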
Bashir, Saba; Qamar, Usman; Khan, Farhan Hassan
2016-02-01
Accuracy plays a vital role in the medical field, as it concerns the life of an individual. Extensive research has been conducted on disease classification and prediction using machine learning techniques. However, there is no agreement on which classifier produces the best results. A specific classifier may be better than others for a specific dataset, but another classifier could perform better for some other dataset. Ensembles of classifiers have been proven to be an effective way to improve classification accuracy. In this research we present an ensemble framework with multi-layer classification using enhanced bagging and optimized weighting. The proposed model, called "HM-BagMoov", overcomes the limitations of conventional performance bottlenecks by utilizing an ensemble of seven heterogeneous classifiers. The framework is evaluated on five different heart disease datasets, four breast cancer datasets, two diabetes datasets, two liver disease datasets and one hepatitis dataset obtained from public repositories. The analysis of the results shows that the ensemble framework achieved the highest accuracy, sensitivity and F-measure when compared with individual classifiers for all the diseases. In addition to this, the ensemble framework also achieved the highest accuracy when compared with state-of-the-art techniques. An application named "IntelliHealth" has also been developed based on the proposed model that may be used by hospitals/doctors for diagnostic advice. Copyright © 2015 Elsevier Inc. All rights reserved.
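A minimal sketch of the weighted-vote idea behind such ensembles; the multi-layer bagging and the seven specific classifiers of HM-BagMoov are not reproduced, and the classifier choices and accuracy-based weights here are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def weighted_vote(classifiers, weights, X, n_classes):
    # Each base classifier casts a vote for its predicted class, scaled by
    # its weight; the class with the largest total wins.
    votes = np.zeros((len(X), n_classes))
    for clf, w in zip(classifiers, weights):
        votes[np.arange(len(X)), clf.predict(X)] += w
    return votes.argmax(axis=1)

X, y = make_classification(n_samples=300, random_state=0)
clfs = [LogisticRegression(max_iter=1000).fit(X, y),
        DecisionTreeClassifier(random_state=0).fit(X, y),
        GaussianNB().fit(X, y)]
weights = [clf.score(X, y) for clf in clfs]   # accuracy-based weights
pred = weighted_vote(clfs, weights, X, n_classes=2)
print("ensemble accuracy:", (pred == y).mean())
```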
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pullum, Laura L; Symons, Christopher T
2011-01-01
Machine learning is used in many applications, from machine vision to speech recognition to decision support systems, and is used to test applications. However, though much has been done to evaluate the performance of machine learning algorithms, little has been done to verify the algorithms or examine their failure modes. Moreover, complex learning frameworks often require stepping beyond black box evaluation to distinguish between errors based on natural limits on learning and errors that arise from mistakes in implementation. We present a conceptual architecture, failure model and taxonomy, and failure modes and effects analysis (FMEA) of a semi-supervised, multi-modal learning system, and provide specific examples from its use in a radiological analysis assistant system. The goal of the research described in this paper is to provide a foundation from which dependability analysis of systems using semi-supervised, multi-modal learning can be conducted. The methods presented provide a first step towards that overall goal.
A Framework for Modeling Human-Machine Interactions
NASA Technical Reports Server (NTRS)
Shafto, Michael G.; Rosekind, Mark R. (Technical Monitor)
1996-01-01
Modern automated flight-control systems employ a variety of different behaviors, or modes, for managing the flight. While developments in cockpit automation have resulted in workload reduction and economical advantages, they have also given rise to an ill-defined class of human-machine problems, sometimes referred to as 'automation surprises'. Our interest in applying formal methods for describing human-computer interaction stems from our ongoing research on cockpit automation. In this area of aeronautical human factors, there is much concern about how flight crews interact with automated flight-control systems, so that the likelihood of making errors, in particular mode-errors, is minimized and the consequences of such errors are contained. The goal of the ongoing research on formal methods in this context is: (1) to develop a framework for describing human interaction with control systems; (2) to formally categorize such automation surprises; and (3) to develop tests for identification of these categories early in the specification phase of a new human-machine system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunton, Steven
Optical systems provide valuable information for evaluating interactions and associations between organisms and MHK energy converters and for capturing potentially rare encounters between marine organisms and MHK devices. The deluge of optical data from cabled monitoring packages makes expert review time-consuming and expensive. We propose algorithms and a processing framework to automatically extract events of interest from underwater video. The open-source software framework consists of background subtraction, filtering, feature extraction and hierarchical classification algorithms. The classification pipeline was validated on real-world data collected with an experimental underwater monitoring package. An event detection rate of 100% was achieved using robust principal components analysis (RPCA), Fourier feature extraction and a support vector machine (SVM) binary classifier. The detected events were then further classified into more complex classes: algae | invertebrate | vertebrate, one species | multiple species of fish, and interest rank. Greater than 80% accuracy was achieved using a combination of machine learning techniques.
Kernel Methods for Mining Instance Data in Ontologies
NASA Astrophysics Data System (ADS)
Bloehdorn, Stephan; Sure, York
The number of ontologies and the amount of metadata available on the Web are constantly growing. The successful application of machine learning techniques for learning ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable to directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology, which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data yield promising results and show the usefulness of our approach.
Fernandez, Michael; Boyd, Peter G; Daff, Thomas D; Aghaji, Mohammad Zein; Woo, Tom K
2014-09-04
In this work, we have developed quantitative structure-property relationship (QSPR) models using advanced machine learning algorithms that can rapidly and accurately recognize high-performing metal organic framework (MOF) materials for CO2 capture. More specifically, QSPR classifiers have been developed that can, in a fraction of a second, identify candidate MOFs with enhanced CO2 adsorption capacity (>1 mmol/g at 0.15 bar and >4 mmol/g at 1 bar). The models were tested on a large set of 292 050 MOFs that were not part of the training set. The QSPR classifier could recover 945 of the top 1000 MOFs in the test set while flagging only 10% of the whole library for compute-intensive screening. Thus, using the machine learning classifiers as part of a high-throughput screening protocol would result in an order of magnitude reduction in compute time and allow intractably large structure libraries and search spaces to be screened.
MetaJC++: A flexible and automatic program transformation technique using meta framework
NASA Astrophysics Data System (ADS)
Beevi, Nadera S.; Reghu, M.; Chitraprasad, D.; Vinodchandra, S. S.
2014-09-01
A compiler is a tool to translate abstract code containing natural language terms to machine code. Meta compilers are available to compile more than one language. We have developed a meta framework that intends to combine two dissimilar programming languages, namely C++ and Java, to provide a flexible object-oriented programming platform for the user. Suitable constructs from both languages have been combined, thereby forming a new and stronger meta-language. The framework is developed using the compiler writing tools Flex and Yacc to design the front end of the compiler. The lexer and parser have been developed to accommodate the complete keyword set and syntax set of both languages. Two intermediate representations have been used in the translation of the source program to machine code. An abstract syntax tree has been used as a high-level intermediate representation that preserves the hierarchical properties of the source program. A new machine-independent stack-based byte-code has also been devised to act as a low-level intermediate representation. The byte-code is essentially organised into an output class file that can be used to produce an interpreted output. The results, especially in the sphere of providing C++ concepts in Java, have given insight into the potentially strong features of the resultant meta-language.
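To illustrate what a machine-independent stack-based byte-code looks like, here is a toy interpreter; the instruction set is invented for illustration and is not MetaJC++'s actual byte-code:

```python
# Toy stack machine: each instruction is a tuple (opcode, *operands).
def run(bytecode):
    stack = []
    for op, *args in bytecode:
        if op == "PUSH":
            stack.append(args[0])            # push a constant
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()  # pop two operands, push the sum
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":
            print(stack.pop())

# (2 + 3) * 4 compiled to stack code:
run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",), ("PRINT",)])
```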
Agarwal, Shashank; Liu, Feifan; Yu, Hong
2011-10-03
Protein-protein interaction (PPI) is an important biomedical phenomenon. Automatically detecting PPI-relevant articles and identifying methods that are used to study PPI are important text mining tasks. In this study, we have explored domain independent features to develop two open source machine learning frameworks. One performs binary classification to determine whether the given article is PPI relevant or not, named "Simple Classifier", and the other one maps the PPI relevant articles with corresponding interaction method nodes in a standardized PSI-MI (Proteomics Standards Initiative-Molecular Interactions) ontology, named "OntoNorm". We evaluated our system in the context of BioCreative challenge competition using the standardized data set. Our systems are amongst the top systems reported by the organizers, attaining 60.8% F1-score for identifying relevant documents, and 52.3% F1-score for mapping articles to interaction method ontology. Our results show that domain-independent machine learning frameworks can perform competitively well at the tasks of detecting PPI relevant articles and identifying the methods that were used to study the interaction in such articles. Simple Classifier is available at http://sourceforge.net/p/simpleclassify/home/ and OntoNorm at http://sourceforge.net/p/ontonorm/home/.
NASA Astrophysics Data System (ADS)
Hong, Haibo; Yin, Yuehong; Chen, Xing
2016-11-01
Despite the rapid development of computer science and information technology, an efficient human-machine integrated enterprise information system for designing complex mechatronic products has yet to be fully accomplished, partly because of inharmonious communication among collaborators. One challenge in human-machine integration is therefore how to establish an appropriate knowledge management (KM) model to support the integration and sharing of heterogeneous product knowledge. To address the diversity of design knowledge, this article proposes an ontology-based model that achieves an unambiguous and normative representation of knowledge. First, an ontology-based human-machine integrated design framework is described, and corresponding ontologies and sub-ontologies are established according to different purposes and scopes. Second, a similarity-calculation-based ontology integration method composed of ontology mapping and ontology merging is introduced. An ontology-searching-based knowledge sharing method is then developed. Finally, a case of human-machine integrated design of a large ultra-precision grinding machine is used to demonstrate the effectiveness of the method.
A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth; Geveci, Berk
2014-11-01
The evolution of the computing world from teraflop to petaflop has been relatively effortless, with several of the existing programming models scaling effectively to the petascale. The migration to exascale, however, poses considerable challenges. All industry trends infer that the exascale machine will be built using processors containing hundreds to thousands of cores per chip. It can be inferred that efficient concurrency on exascale machines requires a massive amount of concurrent threads, each performing many operations on a localized piece of data. Currently, visualization libraries and applications are based off what is known as the visualization pipeline. In the pipeline model, algorithms are encapsulated as filters with inputs and outputs. These filters are connected by setting the output of one component to the input of another. Parallelism in the visualization pipeline is achieved by replicating the pipeline for each processing thread. This works well for today's distributed memory parallel computers but cannot be sustained when operating on processors with thousands of cores. Our project investigates a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme scale machines. Our framework achieves this by defining algorithms in terms of worklets, which are localized stateless operations. Worklets are atomic operations that execute when invoked, unlike filters, which execute when a pipeline request occurs. The worklet design allows execution on a massive amount of lightweight threads with minimal overhead. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale machine.
Lee, JuneHyuck; Noh, Sang Do; Kim, Hyun-Jung; Kang, Yong-Shin
2018-05-04
The prediction of internal defects of metal casting immediately after the casting process saves unnecessary time and money by reducing the amount of inputs into the next stage, such as the machining process, and enables flexible scheduling. Cyber-physical production systems (CPPS) perfectly fulfill the aforementioned requirements. This study deals with the implementation of CPPS in a real factory to predict the quality of metal casting and operation control. First, a CPPS architecture framework for quality prediction and operation control in metal-casting production was designed. The framework describes collaboration among internet of things (IoT), artificial intelligence, simulations, manufacturing execution systems, and advanced planning and scheduling systems. Subsequently, the implementation of the CPPS in actual plants is described. Temperature is a major factor that affects casting quality, and thus, temperature sensors and IoT communication devices were attached to casting machines. The well-known NoSQL database, HBase and the high-speed processing/analysis tool, Spark, are used for IoT repository and data pre-processing, respectively. Many machine learning algorithms such as decision tree, random forest, artificial neural network, and support vector machine were used for quality prediction and compared with R software. Finally, the operation of the entire system is demonstrated through a CPPS dashboard. In an era in which most CPPS-related studies are conducted on high-level abstract models, this study describes more specific architectural frameworks, use cases, usable software, and analytical methodologies. In addition, this study verifies the usefulness of CPPS by estimating quantitative effects. This is expected to contribute to the proliferation of CPPS in the industry.
Learning Physical Domains: Toward a Theoretical Framework.
ERIC Educational Resources Information Center
Forbus, Kenneth D.; Gentner, Dedre
People use and extend their knowledge of the physical world constantly. Understanding how this fluency is achieved would be an important milestone in understanding human learning and intelligence, as well as a useful guide for constructing machines that learn. This paper presents a theoretical framework that is being developed in an attempt to…
Improved analyses using function datasets and statistical modeling
John S. Hogland; Nathaniel M. Anderson
2014-01-01
Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space and have limited statistical functionality and machine learning algorithms. To address this issue, we developed a new modeling framework using C# and ArcObjects and integrated that framework...
Toward a comprehensive landscape vegetation monitoring framework
NASA Astrophysics Data System (ADS)
Kennedy, Robert; Hughes, Joseph; Neeti, Neeti; Larrue, Tara; Gregory, Matthew; Roberts, Heather; Ohmann, Janet; Kane, Van; Kane, Jonathan; Hooper, Sam; Nelson, Peder; Cohen, Warren; Yang, Zhiqiang
2016-04-01
Blossoming Earth observation resources provide great opportunity to better understand land vegetation dynamics, but also require new techniques and frameworks to exploit their potential. Here, I describe several parallel projects that leverage time-series Landsat imagery to describe vegetation dynamics at regional and continental scales. At the core of these projects are the LandTrendr algorithms, which distill time-series earth observation data into periods of consistent long or short-duration dynamics. In one approach, we built an integrated, empirical framework to blend these algorithmically-processed time-series data with field data and lidar data to ascribe yearly change in forest biomass across the US states of Washington, Oregon, and California. In a separate project, we expanded from forest-only monitoring to full landscape land cover monitoring over the same regional scale, including both categorical class labels and continuous-field estimates. In these and other projects, we apply machine-learning approaches to ascribe all changes in vegetation to driving processes such as harvest, fire, urbanization, etc., allowing full description of both disturbance and recovery processes and drivers. Finally, we are moving toward extension of these same techniques to continental and eventually global scales using Google Earth Engine. Taken together, these approaches provide one framework for describing and understanding processes of change in vegetation communities at broad scales.
A Tensor-Product-Kernel Framework for Multiscale Neural Activity Decoding and Control
Li, Lin; Brockmeier, Austin J.; Choi, John S.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2014-01-01
Brain machine interfaces (BMIs) have attracted intense attention as a promising technology for directly interfacing computers or prostheses with the brain's motor and sensory areas, thereby bypassing the body. The availability of multiscale neural recordings including spike trains and local field potentials (LFPs) brings potential opportunities to enhance computational modeling by enriching the characterization of the neural system state. However, heterogeneity on data type (spike timing versus continuous amplitude signals) and spatiotemporal scale complicates the model integration of multiscale neural activity. In this paper, we propose a tensor-product-kernel-based framework to integrate the multiscale activity and exploit the complementary information available in multiscale neural activity. This provides a common mathematical framework for incorporating signals from different domains. The approach is applied to the problem of neural decoding and control. For neural decoding, the framework is able to identify the nonlinear functional relationship between the multiscale neural responses and the stimuli using general purpose kernel adaptive filtering. In a sensory stimulation experiment, the tensor-product-kernel decoder outperforms decoders that use only a single neural data type. In addition, an adaptive inverse controller for delivering electrical microstimulation patterns that utilizes the tensor-product kernel achieves promising results in emulating the responses to natural stimulation. PMID:24829569
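The tensor-product construction itself is compact: a joint kernel over heterogeneous (spike train, LFP) observations is the product of a kernel on each domain, so any kernel machine can consume both signals at once. A minimal sketch; the spike and LFP kernels below are simple stand-ins, not the paper's exact choices:

```python
import numpy as np

def spike_kernel(s1, s2, tau=0.01):
    # Kernel on spike trains: sum of Gaussians over all spike-time pairs.
    d = np.subtract.outer(np.asarray(s1), np.asarray(s2))
    return np.exp(-(d ** 2) / (2 * tau ** 2)).sum()

def lfp_kernel(x1, x2, sigma=1.0):
    # Gaussian kernel on continuous LFP feature vectors.
    d2 = np.sum((np.asarray(x1) - np.asarray(x2)) ** 2)
    return np.exp(-d2 / (2 * sigma ** 2))

def tensor_product_kernel(sample1, sample2):
    # Joint similarity = product of the per-domain similarities.
    (spikes1, lfp1), (spikes2, lfp2) = sample1, sample2
    return spike_kernel(spikes1, spikes2) * lfp_kernel(lfp1, lfp2)

a = ([0.012, 0.045, 0.081], [0.3, -0.2, 0.7])   # (spike times, LFP features)
b = ([0.010, 0.050],        [0.1, -0.1, 0.6])
print(tensor_product_kernel(a, b))
```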
On the decoding process in ternary error-correcting output codes.
Escalera, Sergio; Pujol, Oriol; Radeva, Petia
2010-01-01
A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework to deal with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows a given classifier to ignore some classes. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
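To make the role of the zero symbol concrete, here is a minimal ternary ECOC decoder in which "do not care" bits are masked out of the class-to-codeword distance; this simplified Hamming-style decoding illustrates the issue the paper addresses, not its exact proposed measures:

```python
import numpy as np

def decode(M, predictions):
    """M: (n_classes, n_bits) code matrix with entries in {-1, 0, +1};
    predictions: (n_bits,) binary-classifier outputs in {-1, +1}."""
    # Per-bit disagreement, multiplied by |M| so zero entries contribute
    # nothing to the distance (the "do not care" semantics).
    loss = np.abs(M) * (1 - np.sign(M * predictions)) / 2.0
    return loss.sum(axis=1).argmin()

M = np.array([[+1, +1,  0],
              [-1,  0, +1],
              [ 0, -1, -1]])
print(decode(M, np.array([+1, -1, -1])))   # -> 2 (class 2 matches all its bits)
```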
Deep Learning Accurately Predicts Estrogen Receptor Status in Breast Cancer Metabolomics Data.
Alakwaa, Fadhl M; Chaudhary, Kumardeep; Garmire, Lana X
2018-01-05
Metabolomics holds promise as a new technology to diagnose highly heterogeneous diseases. Conventionally, metabolomics data analysis for diagnosis is done using various statistical and machine learning based classification methods. However, it remains unknown if deep neural networks, a class of increasingly popular machine learning methods, are suitable to classify metabolomics data. Here we use a cohort of 271 breast cancer tissues, 204 estrogen receptor positive (ER+) and 67 estrogen receptor negative (ER-), to test the accuracies of feed-forward networks, a deep learning (DL) framework, as well as six widely used machine learning models, namely random forest (RF), support vector machines (SVM), recursive partitioning and regression trees (RPART), linear discriminant analysis (LDA), prediction analysis for microarrays (PAM), and generalized boosted models (GBM). The DL framework has the highest area under the curve (AUC) of 0.93 in classifying ER+/ER- patients, compared to the other six machine learning algorithms. Furthermore, the biological interpretation of the first hidden layer reveals eight commonly enriched significant metabolomics pathways (adjusted P-value <0.05) that cannot be discovered by other machine learning methods. Among them, the protein digestion and absorption and ATP-binding cassette (ABC) transporter pathways are also confirmed in integrated analysis between metabolomics and gene expression data in these samples. In summary, the deep learning method shows advantages for metabolomics-based breast cancer ER status classification, with both the highest prediction accuracy (AUC = 0.93) and better revelation of disease biology. We encourage the adoption of feed-forward-network-based deep learning methods in the metabolomics research community for classification.
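A minimal sketch of the comparison's deep learning arm using scikit-learn's feed-forward network; synthetic data stands in for the metabolomics cohort, and the layer sizes and feature count are illustrative assumptions, not the paper's tuned architecture:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Imbalanced binary problem mimicking the ER+/ER- cohort size (271 samples).
X, y = make_classification(n_samples=271, n_features=100, weights=[0.75],
                           random_state=0)

# Two hidden layers; AUC estimated by 5-fold cross-validation.
net = MLPClassifier(hidden_layer_sizes=(64, 16), max_iter=2000, random_state=0)
print("mean AUC:", cross_val_score(net, X, y, cv=5, scoring="roc_auc").mean())
```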
Zheng, Shuai; Ghasemzadeh, Nima; Hayek, Salim S; Quyyumi, Arshed A
2017-01-01
Background: Extracting structured data from narrated medical reports is challenged by the complexity of heterogeneous structures and vocabularies and often requires significant manual effort. Traditional machine-based approaches lack the capability to take user feedback for improving the extraction algorithm in real time. Objective: Our goal was to provide a generic information extraction framework that can support diverse clinical reports and enables a dynamic interaction between a human and a machine that produces highly accurate results. Methods: A clinical information extraction system, IDEAL-X, has been built on top of online machine learning. It processes one document at a time, and user interactions are recorded as feedback to update the learning model in real time. The updated model is used to predict values for extraction in subsequent documents. Once prediction accuracy reaches a user-acceptable threshold, the remaining documents may be batch processed. A customizable controlled vocabulary may be used to support extraction. Results: Three datasets were used for experiments based on report styles: 100 cardiac catheterization procedure reports, 100 coronary angiographic reports, and 100 integrated reports, each combining a history and physical report, discharge summary, outpatient clinic notes, outpatient clinic letter, and inpatient discharge medication report. Data extraction was performed by 3 methods: online machine learning, controlled vocabularies, and a combination of these. The system delivers results with F1 scores greater than 95%. Conclusions: IDEAL-X adopts a unique online machine learning based approach combined with controlled vocabularies to support data extraction for clinical reports. The system can quickly learn and improve, thus it is highly adaptable. PMID:28487265
NASA Astrophysics Data System (ADS)
Hoffmann, Achim; Mahidadia, Ashesh
The purpose of this chapter is to present fundamental ideas and techniques of machine learning suitable for the field of this book, i.e., for automated scientific discovery. The chapter focuses on those symbolic machine learning methods which produce results that are suitable to be interpreted and understood by humans. This is particularly important in the context of automated scientific discovery, as the scientific theories to be produced by machines are usually meant to be interpreted by humans. This chapter contains some of the most influential ideas and concepts in machine learning research to give the reader a basic insight into the field. After the introduction in Sect. 1, general ideas of how learning problems can be framed are given in Sect. 2. The section provides useful perspectives to better understand what learning algorithms actually do. Section 3 presents the version space model, which is an early learning algorithm as well as a conceptual framework that provides important insight into the general mechanisms behind most learning algorithms. In Sect. 4, a family of learning algorithms, the AQ family for learning classification rules, is presented. The AQ family belongs to the early approaches in machine learning. The next section, Sect. 5, presents the basic principles of decision tree learners. Decision tree learners belong to the most influential class of inductive learning algorithms today. Finally, a more recent group of learning systems is presented in Sect. 6, which learn relational concepts within the framework of logic programming. This is a particularly interesting group of learning systems, since the framework also allows background knowledge to be incorporated, which may assist in generalisation. Section 7 discusses association rules, a technique that comes from the related field of data mining. Section 8 presents the basic idea of the naive Bayesian classifier. While this is a very popular learning technique, the learning result is not well suited for human comprehension, as it is essentially a large collection of probability values. In Sect. 9, we present a generic method for improving the accuracy of a given learner by generating multiple classifiers using variations of the training data. While this works well in most cases, the resulting classifiers have significantly increased complexity and, hence, tend to destroy the human readability of the learning result that a single learner may produce. Section 10 contains a summary, briefly mentions other techniques not discussed in this chapter, and presents an outlook on the potential of machine learning in the future.
ERIC Educational Resources Information Center
BOLDT, MILTON; POKORNY, HARRY
Thirty-three machine shop instructors from 17 states participated in an 8-week seminar to develop the skills and knowledge essential for teaching the operation of numerically controlled machine tools. The seminar was given from June 20 to August 12, 1966, with college credit available through Stout State University. The participants completed an…
Towards a molecular logic machine
NASA Astrophysics Data System (ADS)
Remacle, F.; Levine, R. D.
2001-06-01
Finite state logic machines can be realized by pump-probe spectroscopic experiments on an isolated molecule. The most elaborate setup, a Turing machine, can be programmed to carry out a specific computation. We argue that a molecule can be similarly programmed, and provide examples using two photon spectroscopies. The states of the molecule serve as the possible states of the head of the Turing machine and the physics of the problem determines the possible instructions of the program. The tape is written in an alphabet that allows the listing of the different pump and probe signals that are applied in a given experiment. Different experiments using the same set of molecular levels correspond to different tapes that can be read and processed by the same head and program. The analogy to a Turing machine is not a mechanical one and is not completely molecular because the tape is not part of the molecular machine. We therefore also discuss molecular finite state machines, such as sequential devices, for which the tape is not part of the machine. Nonmolecular tapes allow for quite long input sequences with a rich alphabet (at the level of 7 bits) and laser pulse shaping experiments provide concrete examples. Single molecule spectroscopies show that a single molecule can be repeatedly cycled through a logical operation.
A Modular Framework for Transforming Structured Data into HTML with Machine-Readable Annotations
NASA Astrophysics Data System (ADS)
Patton, E. W.; West, P.; Rozell, E.; Zheng, J.
2010-12-01
There is a plethora of web-based Content Management Systems (CMS) available for maintaining projects and data. However, each system varies in its capabilities, and often content is stored separately and accessed via non-uniform web interfaces. Moving from one CMS to another (e.g., MediaWiki to Drupal) can be cumbersome, especially if a large quantity of data must be adapted to the new system. To standardize the creation, display, management, and sharing of project information, we have assembled a framework that uses existing web technologies to transform data provided by any service that supports SPARQL Protocol and RDF Query Language (SPARQL) queries into HTML fragments, allowing it to be embedded in any existing website. The framework utilizes a two-tier Extensible Stylesheet Language Transformations (XSLT) process that uses existing ontologies (e.g., Friend-of-a-Friend, Dublin Core) to interpret query results and render them as HTML documents. These ontologies can be used in conjunction with custom ontologies suited to individual needs (e.g., domain-specific ontologies for describing data records). Furthermore, this transformation process encodes machine-readable annotations, namely the Resource Description Framework in attributes (RDFa), into the resulting HTML, so that capable parsers and search engines can extract the relationships between entities (e.g., people, organizations, datasets). To facilitate editing of content, the framework provides a web-based form system, mapping each query to a dynamically generated form that can be used to modify and create entities, while keeping the native data store up-to-date. This open framework makes it easy to duplicate data across many different sites, allowing researchers to distribute their data in many different online forums. In this presentation we will outline the structure of queries and the stylesheets used to transform them, followed by a brief walkthrough tracing the data from storage to a human- and machine-accessible web page. We conclude with a discussion of content caching and steps toward performing queries across multiple domains.
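A compressed sketch of the pipeline (SPARQL results → intermediate XML → XSLT → HTML with RDFa) is shown below, using rdflib and lxml over assumed toy FOAF data. The element names and the single-template stylesheet are illustrative, not the framework's actual two-tier stylesheets.

```python
import rdflib
from lxml import etree

# Assumed toy data; any SPARQL-capable store could serve the same role.
ttl = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/alice> foaf:name "Alice" .
<http://example.org/bob>   foaf:name "Bob" .
"""
g = rdflib.Graph().parse(data=ttl, format="turtle")
rows = g.query("SELECT ?s ?name WHERE { ?s <http://xmlns.com/foaf/0.1/name> ?name }")

# Stage 1: serialize the query results as a simple XML document.
results = etree.Element("results")
for s, name in rows:
    person = etree.SubElement(results, "person", uri=str(s))
    person.text = str(name)

# Stage 2: XSLT turns the XML into an HTML fragment carrying RDFa annotations.
xslt = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/results">
    <ul>
      <xsl:for-each select="person">
        <li about="{@uri}" property="http://xmlns.com/foaf/0.1/name">
          <xsl:value-of select="."/>
        </li>
      </xsl:for-each>
    </ul>
  </xsl:template>
</xsl:stylesheet>
""")
html = etree.XSLT(xslt)(results)
print(etree.tostring(html, pretty_print=True).decode())
```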
NASA Technical Reports Server (NTRS)
Mandra, Salvatore
2017-01-01
We study the performance of the D-Wave 2X quantum annealing machine on systems with well-controlled ground-state degeneracy. While obtaining the ground state of a spin-glass benchmark instance represents a difficult task, the gold standard for any optimization algorithm or machine is to sample all solutions that minimize the Hamiltonian with more or less equal probability. Our results show that while naive transverse-field quantum annealing on the D-Wave 2X device can find the ground-state energy of the problems, it is not well suited to identifying all degenerate ground-state configurations associated with a particular instance. Even worse, some states are exponentially suppressed, in agreement with previous studies on toy model problems [New J. Phys. 11, 073021 (2009)]. These results suggest that more complex driving Hamiltonians are needed in future quantum annealing machines to ensure a fair sampling of the ground-state manifold.
Intelligent earthquake data processing for global adjoint tomography
NASA Astrophysics Data System (ADS)
Chen, Y.; Hill, J.; Li, T.; Lei, W.; Ruan, Y.; Lefebvre, M. P.; Tromp, J.
2016-12-01
Due to the increased computational capability afforded by modern and future computing architectures, the seismology community is demanding a more comprehensive understanding of the full waveform information from recorded earthquake seismograms. Global waveform tomography is a complex workflow that matches observed seismic data with synthesized seismograms by iteratively updating the earth model parameters based on the adjoint state method. This methodology allows us to compute a very accurate model of the earth's interior. The synthetic data are simulated by solving the wave equation in the entire globe using a spectral-element method. In order to ensure inversion accuracy and stability, both the synthesized and observed seismograms must be carefully pre-processed. Because the scale of the inversion problem is extremely large and there is a very large volume of data to be both read and written, an efficient and reliable pre-processing workflow must be developed. We are investigating intelligent algorithms based on a machine-learning (ML) framework that will automatically tune parameters for the data processing chain. One straightforward application of ML in data processing is to classify all possible misfit calculation windows into usable and unusable ones, based on intelligent ML models such as neural networks, support vector machines, or principal component analysis. The intelligent earthquake data processing framework will enable the seismology community to compute global waveform tomography using seismic data from an arbitrarily large number of earthquake events in the fastest, most efficient way.
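As a toy illustration of the window-classification idea, the sketch below trains an SVM on assumed per-window features (cross-correlation, time shift, amplitude ratio, signal-to-noise ratio); the real features and labels would come from the seismogram pre-processing chain.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for per-window measurements (assumed feature set).
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                    # cc, shift, amp ratio, SNR
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # synthetic usable/unusable labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:400], y[:400])
print("held-out accuracy:", clf.score(X[400:], y[400:]))
```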
Sahoo, S.; Russo, T. A.; Elliott, J.; ...
2017-05-13
Climate, groundwater extraction, and surface water flows have complex nonlinear relationships with groundwater level in agricultural regions. To better understand the relative importance of each driver and predict groundwater level change, we develop a new ensemble modeling framework based on spectral analysis, machine learning, and uncertainty analysis, as an alternative to complex and computationally expensive physical models. We apply and evaluate this new approach in the context of two aquifer systems supporting agricultural production in the United States: the High Plains aquifer (HPA) and the Mississippi River Valley alluvial aquifer (MRVA). We select input data sets by using a combination of mutual information, genetic algorithms, and lag analysis, and then use the selected data sets in a Multilayer Perceptron network architecture to simulate seasonal groundwater level change. As expected, model results suggest that irrigation demand has the highest influence on groundwater level change for a majority of the wells. The subset of groundwater observations not used in model training or cross-validation correlates strongly (R > 0.8) with model results for 88 and 83% of the wells in the HPA and MRVA, respectively. In both aquifer systems, the error in the modeled cumulative groundwater level change during testing (2003-2012) was less than 2 m over a majority of the area. Here, we conclude that our modeling framework can serve as an alternative approach to simulating groundwater level change and water availability, especially in regions where subsurface properties are unknown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahoo, S.; Russo, T. A.; Elliott, J.
Climate, groundwater extraction, and surface water flows have complex nonlinear relationships with groundwater level in agricultural regions. To better understand the relative importance of each driver and predict groundwater level change, we develop a new ensemble modeling framework based on spectral analysis, machine learning, and uncertainty analysis, as an alternative to complex and computationally expensive physical models. We apply and evaluate this new approach in the context of two aquifer systems supporting agricultural production in the United States: the High Plains aquifer (HPA) and the Mississippi River Valley alluvial aquifer (MRVA). We select input data sets by using a combination of mutual information, genetic algorithms, and lag analysis, and then use the selected data sets in a Multilayer Perceptron network architecture to simulate seasonal groundwater level change. As expected, model results suggest that irrigation demand has the highest influence on groundwater level change for a majority of the wells. The subset of groundwater observations not used in model training or cross-validation correlates strongly (R > 0.8) with model results for 88 and 83% of the wells in the HPA and MRVA, respectively. In both aquifer systems, the error in the modeled cumulative groundwater level change during testing (2003-2012) was less than 2 m over a majority of the area. Here, we conclude that our modeling framework can serve as an alternative approach to simulating groundwater level change and water availability, especially in regions where subsurface properties are unknown.
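A minimal stand-in for the Multilayer Perceptron step, with synthetic inputs in place of the mutual-information/GA-selected data sets (the feature list and target relation below are assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Assumed seasonal inputs per well: precipitation, temperature, irrigation
# demand, streamflow, and selected lags of these drivers.
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 8))
y = 0.6 * X[:, 2] - 0.3 * X[:, 0] + 0.1 * rng.normal(size=1000)  # synthetic dGW

mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
mlp.fit(X[:800], y[:800])
r = np.corrcoef(mlp.predict(X[800:]), y[800:])[0, 1]
print(f"test correlation R = {r:.2f}")   # the paper reports R > 0.8 for most wells
```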
Machine Learning Applications to Resting-State Functional MR Imaging Analysis.
Billings, John M; Eder, Maxwell; Flood, William C; Dhami, Devendra Singh; Natarajan, Sriraam; Whitlow, Christopher T
2017-11-01
Machine learning is one of the most exciting and rapidly expanding fields within computer science. Academic and commercial research entities are investing in machine learning methods, especially in personalized medicine via patient-level classification. There is great promise that machine learning methods combined with resting state functional MR imaging will aid in diagnosis of disease and guide potential treatment for conditions thought to be impossible to identify based on imaging alone, such as psychiatric disorders. We discuss machine learning methods and explore recent advances. Copyright © 2017 Elsevier Inc. All rights reserved.
22 CFR 121.10 - Forgings, castings and machined bodies.
Code of Federal Regulations, 2010 CFR
2010-04-01
22 CFR 121.10 (Foreign Relations, 2010-04-01), UNITED STATES MUNITIONS LIST, Enumeration of Articles, § 121.10 Forgings, castings and machined bodies: Articles on the U.S. Munitions List include articles in a partially completed state (such as forgings…)
Olaya-Castro, Alexandra; Johnson, Neil F; Quiroga, Luis
2005-03-25
We propose a physically realizable machine which can either generate multiparticle W-like states, or implement high-fidelity 1 → M (M = 1, 2, …, ∞) anticloning of an arbitrary qubit state, in a single step. This universal machine acts as a catalyst in that it is unchanged after either procedure, effectively resetting itself for its next operation. It possesses an inherent immunity to decoherence. Most importantly in terms of practical multiparty quantum communication, the machine's robustness in the presence of decoherence actually increases as the number of qubits M increases.
Drag Reduction of an Airfoil Using Deep Learning
NASA Astrophysics Data System (ADS)
Jiang, Chiyu; Sun, Anzhu; Marcus, Philip
2017-11-01
We reduced the drag of a 2D airfoil, starting from a NACA-0012 profile, using deep learning methods. We created a database consisting of simulations of 2D external flow over randomly generated shapes. We then developed a machine learning framework for external flow field inference given input shapes. Past work that applied machine learning to Computational Fluid Dynamics focused on estimating specific flow parameters; this work is novel in inferring entire flow fields. We further showed that learned flow patterns are transferable to cases that share certain similarities. This study illustrates the prospects of deeper integration of data-based modeling into current CFD simulation frameworks for faster flow inference and more accurate flow modeling.
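The shape-to-flow-field mapping can be pictured as an image-to-image network. Below is a hedged sketch in PyTorch: the encoder-decoder layout, grid size, and output channels (u, v, p) are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class FlowNet(nn.Module):
    """Binary shape mask in, full flow field (u, v, p) out; sizes assumed."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
        )

    def forward(self, mask):           # mask: (B, 1, 64, 64) airfoil shape
        return self.dec(self.enc(mask))

net = FlowNet()
flow = net(torch.rand(8, 1, 64, 64))   # -> (8, 3, 64, 64) inferred flow field
print(flow.shape)
```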
A Framework for Evidence-Based Licensure of Adaptive Autonomous Systems
2016-03-01
insights gleaned to DoD. The autonomy community has identified significant challenges associated with test, evaluation, verification, and validation of…licensure as a test, evaluation, verification, and validation (TEVV) framework that can address these challenges. IDA found that traditional…language requirements to testable (preferably machine-testable) specifications • Design of architectures that treat development and verification of
Gholami, Behnood; Phan, Timothy S; Haddad, Wassim M; Cason, Andrew; Mullis, Jerry; Price, Levi; Bailey, James M
2018-06-01
Acute respiratory failure is one of the most common problems encountered in intensive care units (ICU), and mechanical ventilation is the mainstay of supportive therapy for such patients. A mismatch between ventilator delivery and patient demand is referred to as patient-ventilator asynchrony (PVA). An important hurdle in addressing PVA is the lack of a reliable framework for continuously and automatically monitoring the patient and detecting various types of PVA. The problem of replicating human expertise in waveform analysis for detecting cycling asynchrony (i.e., delayed termination, premature termination, or none) was investigated in a pilot study involving 11 patients in the ICU under invasive mechanical ventilation. A machine learning framework is used to detect cycling asynchrony based on waveform analysis. A panel of five experts with experience in PVA evaluated a total of 1377 breath cycles from 11 mechanically ventilated critical care patients. The majority vote was used to label each breath cycle according to cycling asynchrony type. The proposed framework accurately detected the presence or absence of cycling asynchrony with sensitivity (specificity) of 89% (99%), 94% (98%), and 97% (93%) for delayed termination, premature termination, and no cycling asynchrony, respectively. The system showed strong agreement with human experts, as reflected by kappa coefficients of 0.90, 0.91, and 0.90 for delayed termination, premature termination, and no cycling asynchrony, respectively. The pilot study establishes the feasibility of using a machine learning framework to provide waveform analysis equivalent to that of an expert human. Copyright © 2018 Elsevier Ltd. All rights reserved.
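A schematic version of the per-breath classification task follows, with synthetic features standing in for the waveform measurements and Cohen's kappa as the agreement metric used in the study; the random-forest choice and the feature list are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

# Assumed per-breath waveform features (e.g., end-inspiratory flow, pressure
# slope, inspiratory time); the paper's actual feature set is not given here.
rng = np.random.default_rng(3)
X = rng.normal(size=(1377, 6))
y = np.digitize(X[:, 0], [-0.5, 0.5])   # synthetic 3-class labels:
                                        # 0 = delayed, 1 = none, 2 = premature

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("kappa vs. reference labels:", cohen_kappa_score(yte, clf.predict(Xte)))
```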
Learning of perceptual grouping for object segmentation on RGB-D data
Richtsfeld, Andreas; Mörwald, Thomas; Prankl, Johann; Zillich, Michael; Vincze, Markus
2014-01-01
Object segmentation of unknown objects with arbitrary shape in cluttered scenes is an ambitious goal in computer vision, and it received a great impulse with the introduction of cheap and powerful RGB-D sensors. We introduce a framework for segmenting RGB-D images where data is processed in a hierarchical fashion. After pre-clustering at the pixel level, parametric surface patches are estimated. Different relations between patch pairs, derived from perceptual grouping principles, are calculated, and support vector machine classification is employed to learn perceptual grouping. Finally, we show that object hypothesis generation with Graph-Cut finds a globally optimal solution and prevents wrong grouping. Our framework is able to segment objects even if they are stacked or jumbled in cluttered scenes. We also tackle the problem of segmenting objects when they are partially occluded. The work is evaluated on publicly available object segmentation databases and also compared with state-of-the-art work on object segmentation. PMID:24478571
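The grouping step reduces to binary classification over patch-pair relation vectors; the predicted probabilities then feed the global Graph-Cut stage. A minimal sketch with assumed relation features:

```python
import numpy as np
from sklearn.svm import SVC

# Assumed pairwise relations between surface patches (perceptual grouping
# cues): proximity, color similarity, normal continuity, boundary curvature.
rng = np.random.default_rng(4)
R = rng.uniform(size=(600, 4))
same_object = (0.5 * R[:, 0] + 0.5 * R[:, 2] > 0.5).astype(int)  # synthetic labels

svm = SVC(probability=True).fit(R[:500], same_object[:500])
# The "same object" probabilities become edge weights for the later
# globally optimal grouping step (Graph-Cut in the paper).
p_same = svm.predict_proba(R[500:])[:, 1]
print(p_same[:5])
```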
The New BaBar Data Reconstruction Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ceseracciu, Antonio
2003-06-02
The BaBar experiment is characterized by extremely high luminosity, a complex detector, and a huge data volume, with increasing requirements each year. To fulfill these requirements a new control system has been designed and developed for the offline data reconstruction system. The new control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of OO design. The infrastructure is well isolated from the processing layer, it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is actively distributed, enforces the separation between different processing tiers by using different naming domains, and glues them together by dedicated brokers. It provides a powerful Finite State Machine framework to describe custom processing models in a simple regular language. This paper describes this new control system, currently in use at SLAC and Padova on ~450 CPUs organized in 12 farms.
The BaBar Data Reconstruction Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ceseracciu, A
2005-04-20
The BaBar experiment is characterized by extremely high luminosity and very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a Control System has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of OO design. The infrastructure is well isolated from the processing layer, it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful Finite State Machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ~450 CPUs organized in 9 farms.
The BaBar Data Reconstruction Control System
NASA Astrophysics Data System (ADS)
Ceseracciu, A.; Piemontese, M.; Tehrani, F. S.; Pulliam, T. M.; Galeazzi, F.
2005-08-01
The BaBar experiment is characterized by extremely high luminosity and very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a control system has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of object oriented (OO) design. The infrastructure is well isolated from the processing layer, it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful finite state machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ~450 CPUs organized in nine farms.
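The recurring ingredient in these three reports is a finite state machine whose processing model is declared in a simple textual language. The toy sketch below captures that flavor in Python; the state and event names are hypothetical and the code is unrelated to the actual BaBar system.

```python
class FSM:
    """Minimal finite state machine: one 'src event dst' transition per line."""
    def __init__(self, spec, start):
        self.table = {}
        for line in spec.strip().splitlines():
            src, event, dst = line.split()
            self.table[(src, event)] = dst
        self.state = start

    def fire(self, event):
        # Unknown events leave the state unchanged.
        self.state = self.table.get((self.state, event), self.state)
        return self.state

# Hypothetical processing model for a reconstruction farm (illustrative names).
spec = """
idle       start  running
running    done   merging
running    error  recovering
recovering retry  running
merging    done   idle
"""
farm = FSM(spec, "idle")
for ev in ["start", "error", "retry", "done", "done"]:
    print(ev, "->", farm.fire(ev))
```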
Probabilistic graphs as a conceptual and computational tool in hydrology and water management
NASA Astrophysics Data System (ADS)
Schoups, Gerrit
2014-05-01
Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g., parameters, states), which are joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables, which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on basic rules of probability for assimilating data into the model and expressing unknown variables as a function of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g., by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.
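Components 1 and 2 can be made concrete with a three-node toy graph evaluated by brute-force enumeration; all the tables below are assumed values, not a real hydrological model.

```python
import itertools
import numpy as np

# Toy graph: parameter z, hidden state x, observation y, all binary (assumed).
# Factors: prior p(z), transition p(x|z), likelihood p(y|x).
p_z = np.array([0.7, 0.3])
p_x_given_z = np.array([[0.9, 0.1], [0.2, 0.8]])   # rows: z, cols: x
p_y_given_x = np.array([[0.8, 0.2], [0.3, 0.7]])   # rows: x, cols: y

y_obs = 1
# Multiplying the local factors yields the joint over (z, x) with y clamped.
joint = np.zeros((2, 2))
for z, x in itertools.product(range(2), range(2)):
    joint[z, x] = p_z[z] * p_x_given_z[z, x] * p_y_given_x[x, y_obs]

posterior_x = joint.sum(axis=0) / joint.sum()       # p(x | y) by marginalizing z
print(posterior_x)
```

Real applications replace this enumeration with the graph-exploiting algorithms of component 3, since the joint table grows exponentially with the number of variables.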
Non-Bayesian Optical Inference Machines
NASA Astrophysics Data System (ADS)
Kadar, Ivan; Eichmann, George
1987-01-01
In a recent paper, Eichmann and Caulfield [1] presented a preliminary exposition of optical learning machines suited for use in expert systems. In this paper, we extend the previous ideas by introducing learning as a means of reinforcement by information gathering and reasoning with uncertainty in a non-Bayesian framework [2]. More specifically, the non-Bayesian approach allows the representation of total ignorance (not knowing), as opposed to assuming equally likely prior distributions.
NASA Astrophysics Data System (ADS)
Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.
1989-03-01
The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields, including the neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real time.
34 CFR 395.17 - Suspension of designation as State licensing agency.
Code of Federal Regulations, 2013 CFR
2013-07-01
... lapse of a reasonable time, the Secretary is of the opinion that such failure to comply still continues... protection of Federal property on which vending machines subject to the requirements of § 395.32 are located in the State. Upon the suspension of such designation, vending machine income from vending machines...
34 CFR 395.17 - Suspension of designation as State licensing agency.
Code of Federal Regulations, 2012 CFR
2012-07-01
... lapse of a reasonable time, the Secretary is of the opinion that such failure to comply still continues... protection of Federal property on which vending machines subject to the requirements of § 395.32 are located in the State. Upon the suspension of such designation, vending machine income from vending machines...
34 CFR 395.17 - Suspension of designation as State licensing agency.
Code of Federal Regulations, 2014 CFR
2014-07-01
... lapse of a reasonable time, the Secretary is of the opinion that such failure to comply still continues... protection of Federal property on which vending machines subject to the requirements of § 395.32 are located in the State. Upon the suspension of such designation, vending machine income from vending machines...
34 CFR 395.17 - Suspension of designation as State licensing agency.
Code of Federal Regulations, 2010 CFR
2010-07-01
... lapse of a reasonable time, the Secretary is of the opinion that such failure to comply still continues... protection of Federal property on which vending machines subject to the requirements of § 395.32 are located in the State. Upon the suspension of such designation, vending machine income from vending machines...
34 CFR 395.17 - Suspension of designation as State licensing agency.
Code of Federal Regulations, 2011 CFR
2011-07-01
... lapse of a reasonable time, the Secretary is of the opinion that such failure to comply still continues... protection of Federal property on which vending machines subject to the requirements of § 395.32 are located in the State. Upon the suspension of such designation, vending machine income from vending machines...
Product design for energy reduction in concurrent engineering: An Inverted Pyramid Approach
NASA Astrophysics Data System (ADS)
Alkadi, Nasr M.
Energy factors in product design in concurrent engineering (CE) are becoming an emerging dimension for several reasons: (a) the rising interest in "green design and manufacturing"; (b) national energy security concerns and the dramatic increase in energy prices; (c) global competition in the marketplace and global climate change commitments, including carbon taxes and emission trading systems; and (d) the widespread recognition of the need for sustainable development. This research presents a methodology for the intervention of energy factors in the concurrent engineering product development process to significantly reduce the manufacturing energy requirement. The work presented here is the first attempt at integrating design for energy into a concurrent engineering framework. It adds an important tool to the DFX toolbox for evaluating the impact of design decisions on the product manufacturing energy requirement early during the design phase. The research hypothesis states that "Product Manufacturing Energy Requirement is a Function of Design Parameters". The hypothesis was tested by conducting experimental work in machining and heat treating, which took place at the manufacturing lab of the Industrial and Management Systems Engineering Department (IMSE) at West Virginia University (WVU) and at a major U.S. steel manufacturing plant, respectively. The objective of the machining experiment was to study the effect of changing specific product design parameters (material type and diameter) and process design parameters (metal removal rate) on a gear head lathe's input power requirement through defined sets of machining experiments. The objective of the heat treating experiment was to study the effect of varying product charging temperature on the fuel consumption of a walking-beam reheat furnace. The experimental work in both directions has revealed important insights into energy utilization in machining and heat-treating processes and its variance based on product, process, and system design parameters. An in-depth evaluation of how design and manufacturing normally happen in concurrent engineering provided a framework to develop energy system levels in machining within the concurrent engineering environment using the Inverted Pyramid Approach (IPA). The IPA features varying levels of energy-based output information depending on the input design parameters that are available during each stage (level) of the product design. The experimental work, the in-depth evaluation of design and manufacturing in CE, and the developed energy system levels in machining provided a solid base for the development of the model for design for energy reduction in CE. The model was used to analyze an example part where 12 evolving designs were thoroughly reviewed to investigate the sensitivity of energy to design parameters in machining. The model allowed product design teams to address manufacturing energy concerns early during the design stage. As a result, ranges for energy-sensitive design parameters impacting product manufacturing energy consumption were found at earlier levels. As the designer proceeds to deeper levels in the model, this range tightens, resulting in significant energy reductions.
Solid-state resistor for pulsed power machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoltzfus, Brian; Savage, Mark E.; Hutsel, Brian Thomas
2016-12-06
A flexible solid-state resistor comprises a string of ceramic resistors that can be used to charge the capacitors of a linear transformer driver (LTD) used in a pulsed power machine. The solid-state resistor is able to absorb the energy of a switch prefire, thereby limiting LTD cavity damage, yet has a sufficiently low RC charge time to allow the capacitor to be recharged without disrupting the operation of the pulsed power machine.
Security Frameworks for Machine-to-Machine Devices and Networks
NASA Astrophysics Data System (ADS)
Demblewski, Michael
Attacks against mobile systems have escalated over the past decade. There have been increases in fraud, platform attacks, and malware. The Internet of Things (IoT) offers a new attack vector for cybercriminals. M2M contributes to the growing number of devices that use wireless systems for Internet connection. As new applications and platforms are created, old vulnerabilities are transferred to next-generation systems. A research gap exists between the current approaches for security framework development and the understanding of how these new technologies are different and how they are similar. This gap exists because system designers, security architects, and users are not fully aware of security risks and of how next-generation devices can jeopardize safety and personal privacy. Current techniques for developing security requirements do not adequately consider the use of new technologies, and this weakens countermeasure implementations. These techniques rely on security frameworks for requirements development. These frameworks lack a method for identifying next-generation security concerns and processes for comparing, contrasting, and evaluating non-human device security protections. This research presents a solution for this problem by offering a novel security framework that is focused on the study of the "functions and capabilities" of M2M devices and improves the systems development life cycle for the overall IoT ecosystem.
Development of techniques to enhance man/machine communication
NASA Technical Reports Server (NTRS)
Targ, R.; Cole, P.; Puthoff, H.
1974-01-01
A four-state random stimulus generator, considered to function as an ESP teaching machine, was used to investigate an approach to facilitating interactions between man and machines. A subject tries to guess which of four states the machine is in. The machine offers the user feedback and reinforcement as to the correctness of his choice. Using this machine, 148 volunteer subjects were screened under various protocols. Several whose learning slope and/or mean score departed significantly from chance expectation were identified. Direct physiological evidence of perception of remote stimuli not presented to any known sense of the percipient was also studied, using electroencephalographic (EEG) output recorded while a light was flashed in a distant room.
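Whether a subject's score "departed significantly from chance expectation" on a four-state machine is a standard binomial question, with chance level p = 1/4. A worked example with assumed counts:

```python
from scipy.stats import binomtest

# Assumed example: 35 hits in 96 guesses on the 4-state machine (chance = 24).
result = binomtest(k=35, n=96, p=0.25, alternative="greater")
print(f"p-value = {result.pvalue:.4f}")   # small (~0.01), i.e., above-chance scoring
```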
Method and system for controlling a synchronous machine over full operating range
Walters, James E.; Gunawan, Fani S.; Xue, Yanhong
2002-01-01
System and method for controlling a synchronous machine are provided. The method calculates a stator voltage index and compares its magnitude against a threshold voltage value. An offset signal is generated based on the result of this comparison. A respective state of operation of the machine is determined, and the offset signal is processed based on that state.
The Design of Finite State Machine for Asynchronous Replication Protocol
NASA Astrophysics Data System (ADS)
Wang, Yanlong; Li, Zhanhuai; Lin, Wei; Hei, Minglei; Hao, Jianhua
Data replication is a key way to design a disaster tolerance system and to achieve reliability and availability. It is difficult for a replication protocol to deal with diverse and complex environments, which means that data is often less well replicated than it ought to be. To reduce data loss and to optimize replication protocols, we (1) present a finite state machine, (2) run it to manage an asynchronous replication protocol, and (3) report a simple evaluation of the asynchronous replication protocol based on our state machine. We show that our state machine keeps the asynchronous replication protocol running in a proper state, to the largest extent possible, in the face of various possible events. It can also help in building replication-based disaster tolerance systems that ensure business continuity.
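The essence of such a design is a declarative event-to-transition table. A minimal sketch follows; the states and events are hypothetical, not the protocol specified in the paper:

```python
from enum import Enum, auto

class RepState(Enum):
    SYNCING = auto()
    CONSISTENT = auto()
    DEGRADED = auto()
    RESYNC = auto()

# Hypothetical event table for an asynchronous replication protocol.
TRANSITIONS = {
    (RepState.SYNCING, "caught_up"):    RepState.CONSISTENT,
    (RepState.CONSISTENT, "link_down"): RepState.DEGRADED,
    (RepState.DEGRADED, "link_up"):     RepState.RESYNC,
    (RepState.RESYNC, "caught_up"):     RepState.CONSISTENT,
}

def step(state, event):
    """Take the declared transition; unknown events keep the current state."""
    return TRANSITIONS.get((state, event), state)

s = RepState.SYNCING
for ev in ["caught_up", "link_down", "link_up", "caught_up"]:
    s = step(s, ev)
    print(ev, "->", s.name)
```

Because every (state, event) pair resolves to a defined outcome, the protocol cannot wander into an unspecified state, which is the property the paper's evaluation targets.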
Automated robot-assisted surgical skill evaluation: Predictive analytics approach.
Fard, Mahtab J; Ameri, Sattar; Darin Ellis, R; Chinnam, Ratna B; Pandya, Abhilash K; Klein, Michael D
2018-02-01
Surgical skill assessment has predominantly been a subjective task. Recently, technological advances such as robot-assisted surgery have created great opportunities for objective surgical evaluation. In this paper, we introduce a predictive framework for objective skill assessment based on movement trajectory data. Our aim is to build a classification framework to automatically evaluate the performance of surgeons with different levels of expertise. Eight global movement features are extracted from movement trajectory data captured by a da Vinci robot for surgeons with two levels of expertise - novice and expert. Three classification methods - k-nearest neighbours, logistic regression and support vector machines - are applied. The result shows that the proposed framework can classify surgeons' expertise as novice or expert with an accuracy of 82.3% for knot tying and 89.9% for a suturing task. This study demonstrates and evaluates the ability of machine learning methods to automatically classify expert and novice surgeons using global movement features. Copyright © 2017 John Wiley & Sons, Ltd.
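The three-classifier comparison can be reproduced schematically with cross-validation over assumed global movement features; the synthetic data below merely stands in for metrics computed from da Vinci trajectories.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Assumed global movement features per trial (e.g., path length, smoothness,
# completion time); labels are synthetic novice/expert assignments.
rng = np.random.default_rng(5)
X = rng.normal(size=(120, 8))
y = (X[:, 0] - 0.8 * X[:, 1] > 0).astype(int)

for name, clf in [("kNN", KNeighborsClassifier()),
                  ("logreg", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```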
Free-energy simulations reveal molecular mechanism for functional switch of a DNA helicase
Ma, Wen; Whitley, Kevin D; Schulten, Klaus
2018-01-01
Helicases play key roles in genome maintenance, yet it remains elusive how these enzymes change conformations and how transitions between different conformational states regulate nucleic acid reshaping. Here, we developed a computational technique combining structural bioinformatics approaches and atomic-level free-energy simulations to characterize how the Escherichia coli DNA repair enzyme UvrD changes its conformation at the fork junction to switch its function from unwinding to rezipping DNA. The lowest free-energy path shows that UvrD opens the interface between two domains, allowing the bound ssDNA to escape. The simulation results predict a key metastable 'tilted' state during ssDNA strand switching. By simulating FRET distributions with fluorophores attached to UvrD, we show that the new state is supported quantitatively by single-molecule measurements. The present study deciphers key elements for the 'hyper-helicase' behavior of a mutant and provides an effective framework to characterize directly structure-function relationships in molecular machines. PMID:29664402
Free-energy simulations reveal molecular mechanism for functional switch of a DNA helicase.
Ma, Wen; Whitley, Kevin D; Chemla, Yann R; Luthey-Schulten, Zaida; Schulten, Klaus
2018-04-17
Helicases play key roles in genome maintenance, yet it remains elusive how these enzymes change conformations and how transitions between different conformational states regulate nucleic acid reshaping. Here, we developed a computational technique combining structural bioinformatics approaches and atomic-level free-energy simulations to characterize how the Escherichia coli DNA repair enzyme UvrD changes its conformation at the fork junction to switch its function from unwinding to rezipping DNA. The lowest free-energy path shows that UvrD opens the interface between two domains, allowing the bound ssDNA to escape. The simulation results predict a key metastable 'tilted' state during ssDNA strand switching. By simulating FRET distributions with fluorophores attached to UvrD, we show that the new state is supported quantitatively by single-molecule measurements. The present study deciphers key elements for the 'hyper-helicase' behavior of a mutant and provides an effective framework to characterize directly structure-function relationships in molecular machines. © 2018, Ma et al.
State Machine Modeling of the Space Launch System Solid Rocket Boosters
NASA Technical Reports Server (NTRS)
Harris, Joshua A.; Patterson-Hine, Ann
2013-01-01
The Space Launch System is a Shuttle-derived heavy-lift vehicle currently in development to serve as NASA's premiere launch vehicle for space exploration. The Space Launch System is a multistage rocket with two Solid Rocket Boosters and multiple payloads, including the Multi-Purpose Crew Vehicle. Planned Space Launch System destinations include near-Earth asteroids, the Moon, Mars, and Lagrange points. The Space Launch System is a complex system with many subsystems, requiring considerable systems engineering and integration. To this end, state machine analysis offers a method to support engineering and operational e orts, identify and avert undesirable or potentially hazardous system states, and evaluate system requirements. Finite State Machines model a system as a finite number of states, with transitions between states controlled by state-based and event-based logic. State machines are a useful tool for understanding complex system behaviors and evaluating "what-if" scenarios. This work contributes to a state machine model of the Space Launch System developed at NASA Ames Research Center. The Space Launch System Solid Rocket Booster avionics and ignition subsystems are modeled using MATLAB/Stateflow software. This model is integrated into a larger model of Space Launch System avionics used for verification and validation of Space Launch System operating procedures and design requirements. This includes testing both nominal and o -nominal system states and command sequences.
Lee, JuneHyuck; Noh, Sang Do; Kim, Hyun-Jung; Kang, Yong-Shin
2018-01-01
The prediction of internal defects of metal casting immediately after the casting process saves unnecessary time and money by reducing the amount of inputs into the next stage, such as the machining process, and enables flexible scheduling. Cyber-physical production systems (CPPS) perfectly fulfill the aforementioned requirements. This study deals with the implementation of CPPS in a real factory to predict the quality of metal casting and operation control. First, a CPPS architecture framework for quality prediction and operation control in metal-casting production was designed. The framework describes collaboration among internet of things (IoT), artificial intelligence, simulations, manufacturing execution systems, and advanced planning and scheduling systems. Subsequently, the implementation of the CPPS in actual plants is described. Temperature is a major factor that affects casting quality, and thus, temperature sensors and IoT communication devices were attached to casting machines. The well-known NoSQL database, HBase and the high-speed processing/analysis tool, Spark, are used for IoT repository and data pre-processing, respectively. Many machine learning algorithms such as decision tree, random forest, artificial neural network, and support vector machine were used for quality prediction and compared with R software. Finally, the operation of the entire system is demonstrated through a CPPS dashboard. In an era in which most CPPS-related studies are conducted on high-level abstract models, this study describes more specific architectural frameworks, use cases, usable software, and analytical methodologies. In addition, this study verifies the usefulness of CPPS by estimating quantitative effects. This is expected to contribute to the proliferation of CPPS in the industry. PMID:29734699
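Of the algorithms compared in the study, a random forest over per-cycle temperature features makes a compact sketch; the feature list and the defect rule below are assumptions standing in for the plant's IoT data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Assumed per-cycle features from temperature sensors on a casting machine:
# peak melt temperature, mold temperature, cooling rate, cycle time.
rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 4))
y = (X[:, 2] < -0.5).astype(int)     # synthetic internal-defect labels

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
print("defect-prediction accuracy:", rf.score(Xte, yte))
```

In the CPPS setting the same model would be retrained from the IoT repository and its predictions fed to the scheduling system, as the abstract describes.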
Zheng, Shuai; Lu, James J; Ghasemzadeh, Nima; Hayek, Salim S; Quyyumi, Arshed A; Wang, Fusheng
2017-05-09
Extracting structured data from narrated medical reports is challenged by the complexity of heterogeneous structures and vocabularies and often requires significant manual effort. Traditional machine-based approaches lack the capability to incorporate user feedback to improve the extraction algorithm in real time. Our goal was to provide a generic information extraction framework that can support diverse clinical reports and enables a dynamic interaction between a human and a machine that produces highly accurate results. A clinical information extraction system, IDEAL-X, has been built on top of online machine learning. It processes one document at a time, and user interactions are recorded as feedback to update the learning model in real time. The updated model is used to predict values for extraction in subsequent documents. Once prediction accuracy reaches a user-acceptable threshold, the remaining documents may be batch processed. A customizable controlled vocabulary may be used to support extraction. Three datasets were used for experiments based on report styles: 100 cardiac catheterization procedure reports, 100 coronary angiographic reports, and 100 integrated reports—each combining a history and physical report, discharge summary, outpatient clinic notes, outpatient clinic letter, and inpatient discharge medication report. Data extraction was performed by 3 methods: online machine learning, controlled vocabularies, and a combination of these. The system delivers results with F1 scores greater than 95%. IDEAL-X adopts a unique online machine learning-based approach combined with controlled vocabularies to support data extraction for clinical reports. The system can quickly learn and improve, and is thus highly adaptable. ©Shuai Zheng, James J Lu, Nima Ghasemzadeh, Salim S Hayek, Arshed A Quyyumi, Fusheng Wang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 09.05.2017.
Semi-supervised protein subcellular localization.
Xu, Qian; Hu, Derek Hao; Xue, Hong; Yu, Weichuan; Yang, Qiang
2009-01-30
Protein subcellular localization is concerned with predicting the location of a protein within a cell using computational methods. The location information can indicate key functionalities of proteins. Accurate predictions of the subcellular localization of proteins can aid the prediction of protein function and genome annotation, as well as the identification of drug targets. Computational methods based on machine learning, such as support vector machine approaches, have already been widely used in the prediction of protein subcellular localization. However, a major drawback of these machine learning-based approaches is that a large amount of data must be labeled in order to let the prediction system learn a classifier of good generalization ability. In real-world cases, it is laborious, expensive, and time-consuming to experimentally determine the subcellular localization of a protein and prepare instances of labeled data. In this paper, we present an approach based on a new learning framework, semi-supervised learning, which can use far fewer labeled instances to construct a high-quality prediction model. We construct an initial classifier using a small set of labeled examples first, and then use unlabeled instances to refine the classifier for future predictions. Experimental results show that our methods can effectively reduce the workload for labeling data by using the unlabeled data. Our method is shown to enhance the state-of-the-art prediction results of SVM classifiers by more than 10%.
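Self-training is one common realization of this semi-supervised scheme: fit on the few labeled instances, then iteratively absorb confident predictions on unlabeled ones. A minimal sketch (the dataset and confidence threshold are assumptions, and the paper's own framework may differ):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
y_semi = y.copy()
y_semi[100:] = -1                 # only 100 labeled "proteins"; -1 = unlabeled

clf = SelfTrainingClassifier(SVC(probability=True), threshold=0.9)
clf.fit(X, y_semi)                # unlabeled data refines the initial classifier
print("accuracy on all data:", clf.score(X, y))
```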
sw-SVM: sensor weighting support vector machines for EEG-based brain-computer interfaces.
Jrad, N; Congedo, M; Phlypo, R; Rousseau, S; Flamary, R; Yger, F; Rakotomamonjy, A
2011-10-01
In many machine learning applications, like brain-computer interfaces (BCI), high-dimensional sensor array data are available. Sensor measurements are often highly correlated, and the signal-to-noise ratio is not homogeneously spread across sensors. Thus, collected data are highly variable and discrimination tasks are challenging. In this work, we focus on sensor weighting as an efficient tool to improve the classification procedure. We present an approach integrating sensor weighting into the classification framework. Sensor weights are considered as hyper-parameters to be learned by a support vector machine (SVM). The resulting sensor weighting SVM (sw-SVM) is designed to satisfy a margin criterion, that is, the generalization error. Experimental studies on two data sets are presented, a P300 data set and an error-related potential (ErrP) data set. For the P300 data set (BCI competition III), for which a large number of trials is available, the sw-SVM proves to perform equivalently with respect to the ensemble SVM strategy that won the competition. For the ErrP data set, for which only a small number of trials are available, the sw-SVM shows superior performance compared to three state-of-the-art approaches. These results suggest that the sw-SVM promises to be useful in event-related potential classification, even with a small number of training trials.
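In the true sw-SVM the weights are optimized against a margin criterion; the crude stand-in below conveys the idea by grid-searching per-sensor scalings with cross-validated accuracy. All sizes, and the assumption that one sensor carries the signal, are synthetic.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Toy EEG-like data: 200 trials x 4 sensors (one feature per sensor).
rng = np.random.default_rng(7)
X = rng.normal(size=(200, 4))
y = (X[:, 1] + 0.2 * rng.normal(size=200) > 0).astype(int)  # sensor 1 informative

best_acc, best_w = -1.0, None
for w1 in [0.0, 0.5, 1.0, 2.0]:        # candidate weight for sensor 1
    w = np.array([0.5, w1, 0.5, 0.5])  # remaining sensors down-weighted (assumed)
    acc = cross_val_score(SVC(), X * w, y, cv=5).mean()
    if acc > best_acc:
        best_acc, best_w = acc, w
print("best sensor weights:", best_w, "CV accuracy:", round(best_acc, 3))
```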
Salamone, Francesco; Belussi, Lorenzo; Currò, Cristian; Danza, Ludovico; Ghellere, Matteo; Guazzi, Giulia; Lenzi, Bruno; Megale, Valentino; Meroni, Italo
2018-05-17
Thermal comfort has become a topical issue in building performance assessment as well as energy efficiency. Three methods are mainly recognized for its assessment. Two of them, based on standardized methodologies, face the problem by considering the indoor environment in steady-state conditions (PMV and PPD) and users as active subjects whose thermal perception is influenced by outdoor climatic conditions (adaptive approach). The latter method is the starting point for investigating thermal comfort from an overall perspective by considering endogenous variables besides the traditional physical and environmental ones. Following this perspective, the paper describes the results of an in-field investigation of thermal conditions through the use of nearable and wearable solutions, parametric models and machine learning techniques. The aim of the research is to explore the reliability of IoT-based solutions combined with advanced algorithms, in order to create a replicable framework for the assessment and improvement of user thermal satisfaction. For this purpose, an experimental test in real offices was carried out involving eight workers. Parametric models are applied for the assessment of thermal comfort; IoT solutions are used to monitor the environmental variables and the users' parameters; the machine learning CART method allows prediction of the users' profile and their thermal comfort perception with respect to the indoor environment.
Currò, Cristian; Danza, Ludovico; Ghellere, Matteo; Guazzi, Giulia; Lenzi, Bruno; Megale, Valentino; Meroni, Italo
2018-01-01
Thermal comfort has become a topical issue in building performance assessment as well as energy efficiency. Three methods are mainly recognized for its assessment. Two of them, based on standardized methodologies, face the problem by considering the indoor environment in steady-state conditions (PMV and PPD) and users as active subjects whose thermal perception is influenced by outdoor climatic conditions (adaptive approach). The latter method is the starting point for investigating thermal comfort from an overall perspective by considering endogenous variables besides the traditional physical and environmental ones. Following this perspective, the paper describes the results of an in-field investigation of thermal conditions through the use of nearable and wearable solutions, parametric models and machine learning techniques. The aim of the research is to explore the reliability of IoT-based solutions combined with advanced algorithms, in order to create a replicable framework for the assessment and improvement of user thermal satisfaction. For this purpose, an experimental test in real offices was carried out involving eight workers. Parametric models are applied for the assessment of thermal comfort; IoT solutions are used to monitor the environmental variables and the users' parameters; the machine learning CART method allows prediction of the users' profile and their thermal comfort perception with respect to the indoor environment. PMID:29772818
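The CART step can be sketched with scikit-learn's decision tree over assumed monitored variables; the printed rules illustrate why CART suits interpretation of comfort drivers. The feature names, value ranges, and comfort rule are all assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Assumed monitored variables: air temperature, relative humidity, heart rate,
# skin temperature (from nearable and wearable devices).
rng = np.random.default_rng(8)
X = rng.normal(loc=[24, 50, 70, 33], scale=[2, 10, 8, 1], size=(300, 4))
y = (X[:, 0] + 0.1 * (X[:, 1] - 50) > 25).astype(int)  # synthetic "too warm" votes

cart = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(cart, feature_names=["Tair", "RH", "HR", "Tskin"]))
```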
Machinability of an experimental Ti-Ag alloy in terms of tool life in a dental CAD/CAM system.
Inagaki, Ryoichi; Kikuchi, Masafumi; Takahashi, Masatoshi; Takada, Yukyo; Sasaki, Keiichi
2015-01-01
Titanium is difficult to machine because of its intrinsic properties. In a previous study, the machinability of titanium was improved by alloying with silver. This study aimed to evaluate the durability of tungsten carbide burs after the fabrication of frameworks using a Ti-20%Ag alloy and titanium with a computer-aided design and computer-aided manufacturing system. There was a significant difference in attrition area ratio between the two metals: compared with titanium, the ratio of the area of attrition of machining burs was significantly lower for the experimental Ti-20%Ag alloy. The difference in the area of attrition between titanium and Ti-20%Ag became pronounced as the number of machining operations increased. The results show that the same burs can be used for a longer time with Ti-20%Ag than with pure titanium. Therefore, in terms of tool life, the machinability of the Ti-20%Ag alloy is superior to that of titanium.
Neural-Network Quantum States, String-Bond States, and Chiral Topological States
NASA Astrophysics Data System (ADS)
Glasser, Ivan; Pancotti, Nicola; August, Moritz; Rodriguez, Ivan D.; Cirac, J. Ignacio
2018-01-01
Neural-network quantum states have recently been introduced as an Ansatz for describing the wave function of quantum many-body systems. We show that there are strong connections between neural-network quantum states in the form of restricted Boltzmann machines and some classes of tensor-network states in arbitrary dimensions. In particular, we demonstrate that short-range restricted Boltzmann machines are entangled plaquette states, while fully connected restricted Boltzmann machines are string-bond states with a nonlocal geometry and low bond dimension. These results shed light on the underlying architecture of restricted Boltzmann machines and their efficiency at representing many-body quantum states. String-bond states also provide a generic way of enhancing the power of neural-network quantum states and a natural generalization to systems with larger local Hilbert space. We compare the advantages and drawbacks of these different classes of states and present a method to combine them together. This allows us to benefit from both the entanglement structure of tensor networks and the efficiency of neural-network quantum states into a single Ansatz capable of targeting the wave function of strongly correlated systems. While it remains a challenge to describe states with chiral topological order using traditional tensor networks, we show that, because of their nonlocal geometry, neural-network quantum states and their string-bond-state extension can describe a lattice fractional quantum Hall state exactly. In addition, we provide numerical evidence that neural-network quantum states can approximate a chiral spin liquid with better accuracy than entangled plaquette states and local string-bond states. Our results demonstrate the efficiency of neural networks to describe complex quantum wave functions and pave the way towards the use of string-bond states as a tool in more traditional machine-learning applications.
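For reference, the restricted-Boltzmann-machine Ansatz has a closed-form amplitude, psi(s) = exp(a·s) prod_j 2 cosh(b_j + sum_i W_ji s_i). A toy evaluation with random parameters follows; the sizes and parameter scales are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_spins, n_hidden = 6, 12                            # assumed toy sizes
a = rng.normal(scale=0.1, size=n_spins)              # visible biases
b = rng.normal(scale=0.1, size=n_hidden)             # hidden biases
W = rng.normal(scale=0.1, size=(n_hidden, n_spins))  # couplings

def amplitude(s):
    """Unnormalized RBM wave-function amplitude for spins s in {-1, +1}."""
    theta = b + W @ s
    return np.exp(a @ s) * np.prod(2 * np.cosh(theta))

s = rng.choice([-1, 1], size=n_spins)
print(amplitude(s))
```

A fully connected W couples every spin to every hidden unit, which is the nonlocal geometry the paper relates to string-bond states with low bond dimension.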
Vibration Sensor Monitoring of Nickel-Titanium Alloy Turning for Machinability Evaluation.
Segreto, Tiziana; Caggiano, Alessandra; Karam, Sara; Teti, Roberto
2017-12-12
Nickel-Titanium (Ni-Ti) alloys are very difficult-to-machine materials causing notable manufacturing problems due to their unique mechanical properties, including superelasticity, high ductility, and severe strain-hardening. In this framework, the aim of this paper is to assess the machinability of Ni-Ti alloys with reference to turning processes in order to realize a reliable and robust in-process identification of machinability conditions. An on-line sensor monitoring procedure based on the acquisition of vibration signals was implemented during the experimental turning tests. The detected vibration sensorial data were processed through an advanced signal processing method in time-frequency domain based on wavelet packet transform (WPT). The extracted sensorial features were used to construct WPT pattern feature vectors to send as input to suitably configured neural networks (NNs) for cognitive pattern recognition in order to evaluate the correlation between input sensorial information and output machinability conditions.
Vibration Sensor Monitoring of Nickel-Titanium Alloy Turning for Machinability Evaluation
Segreto, Tiziana; Karam, Sara; Teti, Roberto
2017-01-01
Nickel-Titanium (Ni-Ti) alloys are very difficult-to-machine materials causing notable manufacturing problems due to their unique mechanical properties, including superelasticity, high ductility, and severe strain-hardening. In this framework, the aim of this paper is to assess the machinability of Ni-Ti alloys with reference to turning processes in order to realize a reliable and robust in-process identification of machinability conditions. An on-line sensor monitoring procedure based on the acquisition of vibration signals was implemented during the experimental turning tests. The detected vibration sensorial data were processed through an advanced signal processing method in time-frequency domain based on wavelet packet transform (WPT). The extracted sensorial features were used to construct WPT pattern feature vectors to send as input to suitably configured neural networks (NNs) for cognitive pattern recognition in order to evaluate the correlation between input sensorial information and output machinability conditions. PMID:29231864
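The WPT feature-extraction stage can be sketched with PyWavelets: node energies at a fixed decomposition level form the pattern vector fed to the neural network. The wavelet, level, synthetic signals, and binary labels below are assumptions, not the paper's configuration.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wpt_features(signal, wavelet="db4", level=3):
    """Energy of each wavelet-packet node at the given level -> feature vector."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    return np.array([np.sum(np.square(node.data))
                     for node in wp.get_level(level, order="natural")])

# Synthetic vibration segments standing in for the turning-test acquisitions.
rng = np.random.default_rng(9)
X = np.array([wpt_features(rng.normal(size=1024) * (1 + c))
              for c in (0, 1) for _ in range(50)])
y = np.repeat([0, 1], 50)          # 0 = acceptable, 1 = poor machinability

nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
print("training accuracy:", nn.fit(X, y).score(X, y))
```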
NASA Technical Reports Server (NTRS)
Riedel, Joseph E.; Grasso, Christopher A.
2012-01-01
VML (Virtual Machine Language) is an advanced computing environment that allows spacecraft to operate using mechanisms ranging from simple, time-oriented sequencing to advanced, multicomponent reactive systems. VML has developed in four evolutionary stages. VML 0 is a core execution capability providing multi-threaded command execution, integer data types, and rudimentary branching. VML 1 added named parameterized procedures, extensive polymorphism, data typing, branching, looping issuance of commands using run-time parameters, and named global variables. VML 2 added for loops, data verification, telemetry reaction, and an open flight adaptation architecture. VML 2.1 contains major advances in control flow capabilities for executable state machines. On the resource requirements front, VML 2.1 features a reduced memory footprint in order to fit more capability into modestly sized flight processors, and endian-neutral data access for compatibility with Intel little-endian processors. Sequence packaging has been improved with object-oriented programming constructs and the use of implicit (rather than explicit) time tags on statements. Sequence event detection has been significantly enhanced with multi-variable waiting, which allows a sequence to detect and react to conditions defined by complex expressions with multiple global variables. This multi-variable waiting serves as the basis for implementing parallel rule checking, which in turn, makes possible executable state machines. The new state machine feature in VML 2.1 allows the creation of sophisticated autonomous reactive systems without the need to develop expensive flight software. Users specify named states and transitions, along with the truth conditions required, before taking transitions. Transitions with the same signal name allow separate state machines to coordinate actions: the conditions distributed across all state machines necessary to arm a particular signal are evaluated, and once found true, that signal is raised. The selected signal then causes all identically named transitions in all present state machines to be taken simultaneously. VML 2.1 has relevance to all potential space missions, both manned and unmanned. It was under consideration for use on Orion.
NASA Astrophysics Data System (ADS)
Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.
2014-12-01
Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high- to coarse-resolution satellite imagery, but most of them do not scale to terabytes of data, typical in these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of the continental United States, using a high performance computing architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on a Conditional Random Field (CRF), which helps in capturing the higher-order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and a reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model show the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.
Measures and Metrics of Information Processing in Complex Systems: A Rope of Sand
ERIC Educational Resources Information Center
James, Ryan Gregory
2013-01-01
How much information do natural systems store and process? In this work we attempt to answer this question in multiple ways. We first establish a mathematical framework where natural systems are represented by a canonical form of edge-labeled hidden Markov models called ε-machines. Then, utilizing this framework, a variety of measures are defined and…
A Spatiotemporal Prediction Framework for Air Pollution Based on Deep RNN
NASA Astrophysics Data System (ADS)
Fan, J.; Li, Q.; Hou, J.; Feng, X.; Karimian, H.; Lin, S.
2017-10-01
Time series data in practical applications always contain missing values due to sensor malfunction, network failure, outliers, etc. To handle missing values in time series, as well as the lack of consideration of temporal properties in standard machine learning models, we propose a spatiotemporal prediction framework based on missing-value processing algorithms and a deep recurrent neural network (DRNN). Using a missing tag and a missing interval to represent time series patterns, we implement three different missing-value fixing algorithms, which are further incorporated into a deep neural network that consists of LSTM (Long Short-Term Memory) layers and fully connected layers. Real-world air quality and meteorological datasets (Jingjinji area, China) are used for model training and testing. Deep feed-forward neural networks (DFNN) and gradient boosting decision trees (GBDT) are trained as baseline models against the proposed DRNN. The performance of the three missing-value fixing algorithms, as well as of the different machine learning models, is evaluated and analysed. Experiments show that the proposed DRNN framework outperforms both DFNN and GBDT, validating the capacity of the proposed framework. Our results also provide useful insights for a better understanding of the different strategies for handling missing values.
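A minimal sketch of the missing tag/interval idea in PyTorch (the paper does not name its framework; the layer sizes and single-output head are our assumptions):

    import torch
    import torch.nn as nn

    class DRNN(nn.Module):
        """Augment each time step with a missing tag (0/1 mask) and a missing
        interval (steps since the last valid reading), then stack LSTM layers
        and fully connected layers. Sizes are illustrative, not the authors'."""
        def __init__(self, n_features, hidden=64):
            super().__init__()
            # input = raw features + mask + interval for each feature
            self.lstm = nn.LSTM(3 * n_features, hidden, num_layers=2, batch_first=True)
            self.head = nn.Sequential(nn.Linear(hidden, 32), nn.ReLU(), nn.Linear(32, 1))

        def forward(self, x, mask, interval):
            z = torch.cat([x, mask, interval], dim=-1)   # (batch, time, 3*n_features)
            out, _ = self.lstm(z)
            return self.head(out[:, -1])                 # predict pollutant level at last step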
Carbon Nanotube Growth Rate Regression using Support Vector Machines and Artificial Neural Networks
2014-03-27
The SVM classifier is trained using custom-written Java code leveraging the Sequential Minimal... Encog is a machine learning framework for Java, C++ and .NET applications that supports Bayesian Networks, Hidden Markov Models, SVMs and ANNs [13]... SVM classifiers are trained using Weka libraries and leveraging custom-written Java code. The data set is created as an Attribute Relationship File...
Quantum machine learning for quantum anomaly detection
NASA Astrophysics Data System (ADS)
Liu, Nana; Rebentrost, Patrick
2018-04-01
Anomaly detection is used for identifying data that deviate from "normal" data patterns. Its usage on classical data finds diverse applications in many important areas such as finance, fraud detection, medical diagnoses, data cleaning, and surveillance. With the advent of quantum technologies, anomaly detection of quantum data, in the form of quantum states, may become an important component of quantum applications. Machine-learning algorithms are playing pivotal roles in anomaly detection using classical data. Two widely used algorithms are the kernel principal component analysis and the one-class support vector machine. We find corresponding quantum algorithms to detect anomalies in quantum states. We show that these two quantum algorithms can be performed using resources that are logarithmic in the dimensionality of quantum states. For pure quantum states, these resources can also be logarithmic in the number of quantum states used for training the machine-learning algorithm. This makes these algorithms potentially applicable to big quantum data applications.
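For orientation, the classical counterparts of the two algorithms named above can be sketched in a few lines with scikit-learn (synthetic data; parameters are illustrative and unrelated to the quantum versions):

    import numpy as np
    from sklearn.svm import OneClassSVM
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(0)
    train = rng.normal(size=(200, 4))                     # "normal" data
    test = np.vstack([rng.normal(size=(5, 4)),            # normal samples
                      rng.normal(loc=6.0, size=(5, 4))])  # anomalies

    # One-class SVM: learns a boundary around normal data; -1 flags anomalies.
    ocsvm = OneClassSVM(kernel="rbf", nu=0.05).fit(train)
    print(ocsvm.predict(test))

    # Kernel PCA: anomalies reconstruct poorly in the span of the leading components.
    kpca = KernelPCA(n_components=3, kernel="rbf", fit_inverse_transform=True).fit(train)
    err = np.linalg.norm(test - kpca.inverse_transform(kpca.transform(test)), axis=1)
    print(err)  # larger reconstruction error suggests an anomaly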
On some methods of discrete systems behaviour simulation
NASA Astrophysics Data System (ADS)
Sytnik, Alexander A.; Posohina, Natalia I.
1998-07-01
The project addresses one of the fundamental problems of mathematical cybernetics and discrete mathematics, connected with the synthesis and analysis of control systems, depending on the research of their functional capabilities and reliable behaviour. This work deals with the case of finite-state machine behaviour restoration when structural redundancy is not available and direct updating of the current behaviour is impossible. The method described below uses number theory to build a special model of a finite-state machine: it simulates the transitions between the states of the finite-state machine using specially defined functions of exponential type. With the help of several methods of number theory and algebra, it is easy to determine whether there is an opportunity to restore the behaviour (with the help of this method) in a given case, and also to derive the class of finite-state machines admitting such restoration.
A Scatter-Based Prototype Framework and Multi-Class Extension of Support Vector Machines
Jenssen, Robert; Kloft, Marius; Zien, Alexander; Sonnenburg, Sören; Müller, Klaus-Robert
2012-01-01
We provide a novel interpretation of the dual of support vector machines (SVMs) in terms of scatter with respect to class prototypes and their mean. As a key contribution, we extend this framework to multiple classes, providing a new joint Scatter SVM algorithm, at the level of its binary counterpart in the number of optimization variables. This enables us to implement computationally efficient solvers based on sequential minimal and chunking optimization. As a further contribution, the primal problem formulation is developed in terms of regularized risk minimization and the hinge loss, revealing the score function to be used in the actual classification of test patterns. We investigate Scatter SVM properties related to generalization ability, computational efficiency, sparsity and sensitivity maps, and report promising results. PMID:23118845
Sequence-invariant state machines
NASA Technical Reports Server (NTRS)
Whitaker, Sterling R.; Manjunath, Shamanna K.; Maki, Gary K.
1991-01-01
A synthesis method and an MOS VLSI architecture are presented to realize sequential circuits that have the ability to implement any state machine having N states and m inputs, regardless of the actual sequence specified in the flow table. The design method utilizes binary tree structured (BTS) logic to implement regular and dense circuits. The desired state sequence can be hardwired with power supply connections or can be dynamically reallocated if stored in a register. This allows programmable VLSI controllers to be designed with a compact size and performance approaching that of dedicated logic. Results of ICV implementations are reported and an example sequence-invariant state machine is contrasted with implementations based on traditional methods.
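A software analogue of the idea (the hardware details such as BTS logic are not captured here; the toy flow table is hypothetical):

    class SequenceInvariantSM:
        """Software analogue of a sequence-invariant state machine: the same
        fixed structure realizes any N-state, m-input flow table because the
        next-state entries live in a rewritable register (here, a table)."""
        def __init__(self, flow_table, start=0):
            self.table = flow_table          # table[state][input] -> next state
            self.state = start

        def step(self, symbol):
            self.state = self.table[self.state][symbol]
            return self.state

        def reprogram(self, new_table):      # "dynamic reallocation" of the sequence
            self.table = new_table

    # Hypothetical 3-state, 2-input machine; rewriting the table changes behaviour.
    sm = SequenceInvariantSM([[1, 2], [2, 0], [0, 1]])
    print([sm.step(s) for s in [0, 1, 1]])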
The Machine / Job Features Mechanism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alef, M.; Cass, T.; Keijser, J. J.
Within the HEPiX virtualization group and the Worldwide LHC Computing Grid’s Machine/Job Features Task Force, a mechanism has been developed which provides access to detailed information about the current host and the current job to the job itself. This allows user payloads to access meta information, independent of the current batch system or virtual machine model. The information can be accessed either locally via the filesystem on a worker node, or remotely via HTTP(S) from a webserver. This paper describes the final version of the specification from 2016 which was published as an HEP Software Foundation technical note, and the design of the implementations of this version for batch and virtual machine platforms. We discuss early experiences with these implementations and how they can be exploited by experiment frameworks.
The machine/job features mechanism
NASA Astrophysics Data System (ADS)
Alef, M.; Cass, T.; Keijser, J. J.; McNab, A.; Roiser, S.; Schwickerath, U.; Sfiligoi, I.
2017-10-01
Within the HEPiX virtualization group and the Worldwide LHC Computing Grid’s Machine/Job Features Task Force, a mechanism has been developed which provides access to detailed information about the current host and the current job to the job itself. This allows user payloads to access meta information, independent of the current batch system or virtual machine model. The information can be accessed either locally via the filesystem on a worker node, or remotely via HTTP(S) from a webserver. This paper describes the final version of the specification from 2016 which was published as an HEP Software Foundation technical note, and the design of the implementations of this version for batch and virtual machine platforms. We discuss early experiences with these implementations and how they can be exploited by experiment frameworks.
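A minimal sketch of how a payload might read these features locally (the environment variable names follow our reading of the specification and the key names in the comments are illustrative; consult the HSF technical note for the normative list):

    import os

    def read_features(env_var):
        """Read one key-value pair per file from the directory named by the
        given environment variable, as in the machine/job features scheme
        (local filesystem access; the HTTP(S) variant is analogous)."""
        root = os.environ.get(env_var)
        if root is None:
            return {}
        out = {}
        for name in os.listdir(root):
            with open(os.path.join(root, name)) as fh:
                out[name] = fh.read().strip()
        return out

    machine = read_features("MACHINEFEATURES")   # e.g. per-host benchmark, core counts
    job = read_features("JOBFEATURES")           # e.g. wall-clock and CPU limits
    print(machine, job)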
Machine vision and appearance based learning
NASA Astrophysics Data System (ADS)
Bernstein, Alexander
2017-03-01
Smart algorithms are used in machine vision to organize or extract high-level information from the available data. The resulting high-level understanding of the content of images, received from a given visual sensing system and belonging to an appearance space, can only be a key first step in solving various specific tasks such as mobile robot navigation in uncertain environments, road detection in autonomous driving systems, etc. Appearance-based learning has become very popular in the field of machine vision. In general, the appearance of a scene is a function of the scene content, the lighting conditions, and the camera position. The mobile robot localization problem is considered in a machine learning framework via appearance space analysis. This problem is reduced to a certain regression-on-an-appearance-manifold problem, and newly developed regression-on-manifolds methods are used for its solution.
The ASSERT Virtual Machine Kernel: Support for Preservation of Temporal Properties
NASA Astrophysics Data System (ADS)
Zamorano, J.; de la Puente, J. A.; Pulido, J. A.; Urueña
2008-08-01
A new approach to building embedded real-time software has been developed in the ASSERT project. One of its key elements is the concept of a virtual machine preserving the non-functional properties of the system, especially real-time properties, all the way from high-level design models down to executable code. The paper describes one instance of the virtual machine concept that provides support for the preservation of temporal properties both at the source code level, by accepting only "legal" entities, i.e. software components with statically analysable real-time behaviour, and at run time, by monitoring the temporal behaviour of the system. The virtual machine has been validated on several pilot projects carried out by aerospace companies in the framework of the ASSERT project.
NASA Astrophysics Data System (ADS)
Palittapongarnpim, Pantita; Sanders, Barry C.
2018-05-01
Quantum tomography infers quantum states from measurement data, but it becomes infeasible for large systems. Machine learning enables tomography of highly entangled many-body states and suggests a new powerful approach to this problem.
Paradigms for machine learning
NASA Technical Reports Server (NTRS)
Schlimmer, Jeffrey C.; Langley, Pat
1991-01-01
Five paradigms are described for machine learning: connectionist (neural network) methods, genetic algorithms and classifier systems, empirical methods for inducing rules and decision trees, analytic learning methods, and case-based approaches. Some dimensions along which these paradigms vary in their approach to learning are considered, and the basic methods used within each framework are reviewed, together with open research issues. It is argued that the similarities among the paradigms are more important than their differences, and that future work should attempt to bridge the existing boundaries. Finally, some recent developments in the field of machine learning are discussed, and their impact on both research and applications is examined.
Code of Federal Regulations, 2010 CFR
2010-07-01
... department, agency, or instrumentality of the United States, in accordance with established procedures of... each property managing department, agency or instrumentality of the United States, subject to the..., agencies, or instrumentalities of the United States, under which blind vendors or State licensing agencies...
Zhang, Qingxue; Zhou, Dian; Zeng, Xuan
2016-11-01
This paper proposes a novel machine learning-enabled framework to robustly monitor the instantaneous heart rate (IHR) from wrist-electrocardiography (ECG) signals continuously and heavily corrupted by random motion artifacts in wearable applications. The framework includes two stages, i.e. heartbeat identification and refinement, respectively. In the first stage, an adaptive threshold-based auto-segmentation approach is proposed to select heartbeat candidates, including the real heartbeats and large numbers of motion-artifact-induced interferential spikes. Then twenty-six features are extracted for each candidate in the time, spatial, frequency and statistical domains, and evaluated by a sparse support vector machine (SVM) to select ten critical features which can effectively reveal residual heartbeat information. Afterwards, an SVM model, created on the training data using the selected feature set, is applied to find high-confidence heartbeats from a large number of candidates in the testing data. In the second stage, the SVM classification results are further refined by two steps: (1) a rule-based classifier with two attributes named 'continuity check' and 'locality check' for outlier (false positive) removal, and (2) a heartbeat interpolation strategy for missing-heartbeat (false negative) recovery. The framework is evaluated on a wrist-ECG dataset acquired by a semi-customized platform and also on a public dataset. When the signal-to-noise ratio is as low as -7 dB, the mean absolute error of the estimated IHR is 1.4 beats per minute (BPM) and the root mean square error is 6.5 BPM. The proposed framework greatly outperforms well-established approaches, demonstrating that it can effectively identify heartbeats from ECG signals continuously corrupted by intense motion artifacts and robustly estimate the IHR. This study is expected to contribute to robust long-term wearable IHR monitoring for pervasive heart health and fitness management.
Talaminos-Barroso, Alejandro; Estudillo-Valderrama, Miguel A; Roa, Laura M; Reina-Tosina, Javier; Ortega-Ruiz, Francisco
2016-06-01
M2M (Machine-to-Machine) communications represent one of the main pillars of the new paradigm of the Internet of Things (IoT) and are making possible new opportunities for the eHealth business. Nevertheless, the large number of M2M protocols currently available hinders the selection of a suitable solution that satisfies the requirements that eHealth applications can demand. The goals were, in the first place, to develop a tool that provides a benchmarking analysis in order to objectively select among the most relevant M2M protocols for eHealth solutions, and, in the second place, to validate the tool with a particular use case: respiratory rehabilitation. A software tool, called Distributed Computing Framework (DFC), has been designed and developed to execute the benchmarking tests and facilitate the deployment in environments with a large number of machines, with independence of the protocol and performance metrics selected. DDS, MQTT, CoAP, JMS, AMQP and XMPP protocols were evaluated considering different specific performance metrics, including CPU usage, memory usage, bandwidth consumption, latency and jitter. The results obtained allowed the validation of a use case: respiratory rehabilitation of chronic obstructive pulmonary disease (COPD) patients in two scenarios with different types of requirements: Home-Based and Ambulatory. The results of the benchmark comparison can guide eHealth developers in the choice of M2M technologies. In this regard, the framework presented is a simple and powerful tool for the deployment of benchmark tests under specific environments and conditions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Araki, Tadashi; Ikeda, Nobutaka; Shukla, Devarshi; Jain, Pankaj K; Londhe, Narendra D; Shrivastava, Vimal K; Banchhor, Sumit K; Saba, Luca; Nicolaides, Andrew; Shafique, Shoaib; Laird, John R; Suri, Jasjit S
2016-05-01
Percutaneous coronary interventional procedures need advance planning prior to stenting or an endarterectomy. Cardiologists use intravascular ultrasound (IVUS) for screening, risk assessment and stratification of coronary artery disease (CAD). We hypothesize that plaque components are vulnerable to rupture due to plaque progression. Currently, there are no standard grayscale IVUS tools for risk assessment of plaque rupture. This paper presents a novel strategy for risk stratification based on plaque morphology embedded with principal component analysis (PCA) for plaque feature dimensionality reduction and a dominant feature selection technique. The risk assessment utilizes 56 grayscale coronary features in a machine learning framework while linking information from carotid and coronary plaque burdens due to their common genetic makeup. This system consists of a machine learning paradigm which uses a support vector machine (SVM) combined with PCA for optimal and dominant coronary artery morphological feature extraction. The carotid artery proven intima-media thickness (cIMT) biomarker is adopted as a gold standard during the training phase of the machine learning system. For the performance evaluation, a K-fold cross-validation protocol is adopted with 20 trials per fold. For choosing the dominant features out of the 56 grayscale features, a polling strategy of PCA is adopted where the original values of the features are unaltered. Different protocols are designed for establishing the stability and reliability criteria of the coronary risk assessment system (cRAS). Using the PCA-based machine learning paradigm and cross-validation protocol, a classification accuracy of 98.43% (AUC 0.98) with K=10 folds using an SVM radial basis function (RBF) kernel was achieved. A reliability index of 97.32% and a machine learning stability criterion of 5% were met for the cRAS. This is the first computer-aided diagnosis (CADx) system of its kind that is able to perform coronary risk assessment and stratification while demonstrating a successful design of the machine learning system based on our assumptions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
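A hedged sketch of the PCA-plus-RBF-SVM core with K=10 cross-validation (synthetic stand-in data; note that the authors use a PCA-based polling strategy that leaves original feature values unaltered, whereas this sketch uses a plain PCA projection):

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    # X: 56 grayscale plaque features per lesion; y: cIMT-derived risk labels.
    # Synthetic stand-ins here; the study's data are not public.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 56))
    y = rng.integers(0, 2, size=120)

    model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    print(cross_val_score(model, X, y, cv=cv).mean())   # K=10 fold accuracy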
NASA Astrophysics Data System (ADS)
Hancher, M.
2017-12-01
Recent years have seen promising results from many research teams applying deep learning techniques to geospatial data processing. In that same timeframe, TensorFlow has emerged as the most popular framework for deep learning in general, and Google has assembled petabytes of Earth observation data from a wide variety of sources and made them available in analysis-ready form in the cloud through Google Earth Engine. Nevertheless, developing and applying deep learning to geospatial data at scale has been somewhat cumbersome to date. We present a new set of tools and techniques that simplify this process. Our approach combines the strengths of several underlying tools: TensorFlow for its expressive deep learning framework; Earth Engine for data management, preprocessing, postprocessing, and visualization; and other tools in Google Cloud Platform to train TensorFlow models at scale, perform additional custom parallel data processing, and drive the entire process from a single familiar Python development environment. These tools can be used to easily apply standard deep neural networks, convolutional neural networks, and other custom model architectures to a variety of geospatial data structures. We discuss our experiences applying these and related tools to a range of machine learning problems, including classic problems like cloud detection, building detection, and land cover classification, as well as more novel problems like illegal fishing detection. Our improved tools will make it easier for geospatial data scientists to apply modern deep learning techniques to their own problems, and will also make it easier for machine learning researchers to advance the state of the art of those techniques.
Application of Numerical Simulation for the Analysis of the Processes of Rotary Ultrasonic Drilling
NASA Astrophysics Data System (ADS)
Naď, Milan; Čičmancová, Lenka; Hajdu, Štefan
2016-12-01
Rotary ultrasonic machining (RUM) is a hybrid process that combines diamond grinding with ultrasonic machining. It is most suitable for machining hard, brittle materials such as ceramics and composites. Due to its excellent machining performance, RUM is very often applied for drilling of hard-to-machine materials. In the final phase of drilling, deterioration of the edge of the drilled hole can occur, resulting in a phenomenon called edge chipping. During hole drilling, the thickness of the bottom of the drilled hole changes. Consequently, the bottom of the hole, as a plate structure, is exposed to a transfer through the resonance state. This resonance state can be considered one of the important aspects leading to edge chipping. Effects of changes in the bottom thickness, as well as in the fillet radius between the wall and the bottom of the borehole, on the stress-strain states during RUM are analyzed.
A framework to enhance security of physically unclonable functions using chaotic circuits
NASA Astrophysics Data System (ADS)
Chen, Lanxiang
2018-05-01
As a new technique for authentication and key generation, the physically unclonable function (PUF) has attracted considerable attention, and extensive research results have already been achieved. To resist the popular machine learning modeling attacks, a framework to enhance the security of PUFs is proposed. The basic idea is to combine PUFs with a chaotic system whose response is highly sensitive to initial conditions. For this framework, a specific construction which combines the common arbiter PUF circuit, a converter, and the Chua's circuit is given to implement a more secure PUF. Simulation experiments are presented to further validate the framework. Finally, some practical suggestions for the framework and the specific construction are also discussed.
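An illustrative simulation of the framework's idea (the linear additive-delay arbiter model is the standard one from the modeling-attack literature; the logistic map below is a stand-in for the paper's Chua circuit, and all parameters are assumptions):

    import numpy as np

    rng = np.random.default_rng(2)
    N_STAGES = 64
    w = rng.normal(size=N_STAGES + 1)            # additive delay model of an arbiter PUF

    def arbiter_response(challenge):
        """Standard linear-threshold model: phi encodes the challenge;
        the raw analogue output is the delay difference w . phi + bias."""
        phi = np.cumprod(1 - 2 * challenge[::-1])[::-1]
        return float(np.dot(w[:-1], phi) + w[-1])

    def chaotic_postprocess(delay_diff, iters=50):
        """Stand-in for the Chua circuit: iterate a logistic map seeded by the
        normalised delay difference; extreme sensitivity to the seed obscures
        the linear structure that machine learning attacks exploit."""
        x = (np.tanh(delay_diff) + 1) / 2        # map the seed into (0, 1)
        for _ in range(iters):
            x = 3.99 * x * (1 - x)
        return int(x > 0.5)

    c = rng.integers(0, 2, size=N_STAGES)
    print(chaotic_postprocess(arbiter_response(c)))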
34 CFR 395.8 - Distribution and use of income from vending machines on Federal property.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 34 Education 2 2012-07-01 2012-07-01 false Distribution and use of income from vending machines on... use of income from vending machines on Federal property. (a) Vending machine income from vending machines on Federal property which has been disbursed to the State licensing agency by a property managing...
34 CFR 395.8 - Distribution and use of income from vending machines on Federal property.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 34 Education 2 2013-07-01 2013-07-01 false Distribution and use of income from vending machines on... use of income from vending machines on Federal property. (a) Vending machine income from vending machines on Federal property which has been disbursed to the State licensing agency by a property managing...
34 CFR 395.8 - Distribution and use of income from vending machines on Federal property.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 34 Education 2 2014-07-01 2013-07-01 true Distribution and use of income from vending machines on... use of income from vending machines on Federal property. (a) Vending machine income from vending machines on Federal property which has been disbursed to the State licensing agency by a property managing...
Technical Report on Occupations in Numerically Controlled Metal-Cutting Machining.
ERIC Educational Resources Information Center
Manpower Administration (DOL), Washington, DC. U.S. Employment Service.
At the present time, only 5 percent of the short-run metal-cutting machining in the United States is done by numerically controlled machine tools, but within the next decade this is expected to increase by 50 percent. Numerically controlled machines use taped data which is converted into instructions that direct the machine to perform certain steps…
NASA Astrophysics Data System (ADS)
Zander, T. O.; Jatzev, S.
2012-02-01
Brain-computer interface (BCI) systems are usually applied in highly controlled environments such as research laboratories or clinical setups. However, many BCI-based applications are implemented in more complex environments. For example, patients might want to use a BCI system at home, and users without disabilities could benefit from BCI systems in special working environments. In these contexts, it might be more difficult to reliably infer information about brain activity, because many intervening factors add up and disturb the BCI feature space. One solution for this problem would be adding context awareness to the system. We propose to augment the available information space with additional channels carrying information about the user state, the environment and the technical system. In particular, passive BCI systems seem to be capable of adding highly relevant context information—otherwise covert aspects of user state. In this paper, we present a theoretical framework based on general human-machine system research for adding context awareness to a BCI system. Building on that, we present results from a study on a passive BCI, which allows access to the covert aspect of user state related to the perceived loss of control. This study is a proof of concept and demonstrates that context awareness could beneficially be implemented in and combined with a BCI system or a general human-machine system. The EEG data from this experiment are available for public download at www.phypa.org. Parts of this work have already been presented in non-journal publications. This will be indicated specifically by appropriate references in the text.
Tool Integration Framework for Bio-Informatics
2007-04-01
Java NetBeans [11]-based Integrated Development Environment (IDE) for developing modules and packaging computational tools. The framework is extremely... integrate an Eclipse front-end for Desktop Integration. Eclipse was chosen over NetBeans owing to a higher acceptance, better infrastructure... 5.0. This version of Dashboard ran with NetBeans IDE 3.6 requiring Java Runtime 1.4 on a machine with Windows XP. The toolchain is executed by...
Energetic particle instabilities in fusion plasmas
NASA Astrophysics Data System (ADS)
Sharapov, S. E.; Alper, B.; Berk, H. L.; Borba, D. N.; Breizman, B. N.; Challis, C. D.; Classen, I. G. J.; Edlund, E. M.; Eriksson, J.; Fasoli, A.; Fredrickson, E. D.; Fu, G. Y.; Garcia-Munoz, M.; Gassner, T.; Ghantous, K.; Goloborodko, V.; Gorelenkov, N. N.; Gryaznevich, M. P.; Hacquin, S.; Heidbrink, W. W.; Hellesen, C.; Kiptily, V. G.; Kramer, G. J.; Lauber, P.; Lilley, M. K.; Lisak, M.; Nabais, F.; Nazikian, R.; Nyqvist, R.; Osakabe, M.; Perez von Thun, C.; Pinches, S. D.; Podesta, M.; Porkolab, M.; Shinohara, K.; Schoepf, K.; Todo, Y.; Toi, K.; Van Zeeland, M. A.; Voitsekhovich, I.; White, R. B.; Yavorskij, V.; TG, ITPA EP; Contributors, JET-EFDA
2013-10-01
Remarkable progress has been made in diagnosing energetic particle instabilities on present-day machines and in establishing a theoretical framework for describing them. This overview describes the much improved diagnostics of Alfvén instabilities and modelling tools developed world-wide, and discusses progress in interpreting the observed phenomena. A multi-machine comparison is presented giving information on the performance of both diagnostics and modelling tools for different plasma conditions outlining expectations for ITER based on our present knowledge.
NASA Astrophysics Data System (ADS)
Wang, Weibao; Overall, Gary; Riggs, Travis; Silveston-Keith, Rebecca; Whitney, Julie; Chiu, George; Allebach, Jan P.
2013-01-01
Assessment of macro-uniformity is a capability that is important for the development and manufacture of printer products. Our goal is to develop a metric that will predict macro-uniformity, as judged by human subjects, by scanning and analyzing printed pages. We consider two different machine learning frameworks for the metric: linear regression and the support vector machine. We have implemented the image quality ruler, based on the recommendations of the INCITS W1.1 macro-uniformity team. Using 12 subjects at Purdue University and 20 subjects at Lexmark, evenly balanced with respect to gender, we conducted subjective evaluations with a set of 35 uniform b/w prints from seven different printers with five levels of tint coverage. Our results suggest that the image quality ruler method provides a reliable means to assess macro-uniformity. We then defined and implemented separate features to measure graininess, mottle, large area variation, jitter, and large-scale non-uniformity. The algorithms that we used are largely based on ISO image quality standards. Finally, we used these features computed for a set of test pages and the subjects' image quality ruler assessments of these pages to train the two different predictors - one based on linear regression and the other based on the support vector machine (SVM). Using five-fold cross-validation, we confirmed the efficacy of our predictor.
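A hedged sketch of training the two predictors on such feature vectors (synthetic stand-ins for the five features and the ruler scores; scikit-learn is our choice of tooling, not necessarily the authors'):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    # Rows: test pages; columns: graininess, mottle, large-area variation,
    # jitter, large-scale non-uniformity. Targets: mean image quality ruler scores.
    # Synthetic stand-ins; the study's measurements are not public.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(35, 5))
    y = X @ np.array([0.9, 0.7, 0.5, 0.3, 0.8]) + 0.1 * rng.normal(size=35)

    for model in (LinearRegression(), SVR(kernel="rbf", C=10.0)):
        r2 = cross_val_score(model, X, y, cv=5, scoring="r2")  # five-fold cross-validation
        print(type(model).__name__, r2.mean())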
NASA Astrophysics Data System (ADS)
Kaynak, Y.; Huang, B.; Karaca, H. E.; Jawahir, I. S.
2017-07-01
This experimental study focuses on the phase state and phase transformation response of the surface and subsurface of machined NiTi alloys. X-ray diffraction (XRD) analysis and differential scanning calorimetry techniques were utilized to measure the phase state and the transformation response of machined specimens, respectively. Specimens were machined under dry conditions at ambient temperature, preheated conditions, and cryogenic cooling conditions at various cutting speeds. The findings from this research demonstrate that cryogenic machining substantially alters the austenite finish temperature of the martensitic NiTi alloy. The austenite finish (Af) temperature shows a more than 25 percent increase resulting from cryogenic machining compared with the austenite finish temperature of as-received NiTi. Dry and preheated conditions do not substantially alter the austenite finish temperature. XRD analysis shows that a distinctive transformation from martensite to austenite occurs during the machining process in all three conditions. Complete transformation from martensite to austenite is observed in dry cutting at all selected cutting speeds.
2010-01-01
Changes to the glycosylation profile on HIV gp120 can influence viral pathogenesis and alter AIDS disease progression. The characterization of glycosylation differences at the sequence level is inadequate as the placement of carbohydrates is structurally complex. However, no structural framework is available to date for the study of HIV disease progression. In this study, we propose a novel machine-learning based framework for the prediction of AIDS disease progression in three stages (RP, SP, and LTNP) using the HIV structural gp120 profile. This new intelligent framework proves to be accurate and provides an important benchmark for predicting AIDS disease progression computationally. The model is trained using a novel HIV gp120 glycosylation structural profile to detect possible stages of AIDS disease progression for the target sequences of HIV+ individuals. The performance of the proposed model was compared to seven existing different machine-learning models on newly proposed gp120-Benchmark_1 dataset in terms of error-rate (MSE), accuracy (CCI), stability (STD), and complexity (TBM). The novel framework showed better predictive performance with 67.82% CCI, 30.21 MSE, 0.8 STD, and 2.62 TBM on the three stages of AIDS disease progression of 50 HIV+ individuals. This framework is an invaluable bioinformatics tool that will be useful to the clinical assessment of viral pathogenesis. PMID:21143806
Optimization-based manufacturing scheduling with multiple resources and setup requirements
NASA Astrophysics Data System (ADS)
Chen, Dong; Luh, Peter B.; Thakur, Lakshman S.; Moreno, Jack, Jr.
1998-10-01
The increasing demand for on-time delivery and low prices forces manufacturers to seek effective schedules to improve the coordination of multiple resources and to reduce product internal costs associated with labor, setup and inventory. This study describes the design and implementation of a scheduling system for J. M. Product Inc., whose manufacturing is characterized by the need to simultaneously consider machines and operators, where an operator may attend several operations at the same time, and by the presence of machines requiring significant setup times. Scheduling problems with these characteristics are typical for many manufacturers, are very difficult to handle, and have not been adequately addressed in the literature. In this study, both machines and operators are modeled as resources with finite capacities to obtain efficient coordination between them, and an operator's time can be shared by several operations at the same time to make full use of the operator. Setups are explicitly modeled following our previous work, with additional penalties on excessive setups to reduce setup costs and avoid possible scrap. An integer formulation with a separable structure is developed to maximize on-time delivery of products, low inventory and a small number of setups. Within the Lagrangian relaxation framework, the problem is decomposed into individual subproblems that are effectively solved by using dynamic programming with additional penalties embedded in state transitions. A heuristic is then developed to obtain a feasible schedule, following our previous work, with a new mechanism to satisfy operator capacity constraints. The method has been implemented using the object-oriented programming language C++ with a user-friendly interface, and numerical testing shows that the method generates high quality schedules in a timely fashion. Through simultaneous consideration, machines and operators are well coordinated to facilitate the smooth flow of parts through the system. The explicit modeling of setups and the associated penalties lets parts with the same setup requirements be clustered together to avoid excessive setups.
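A toy sketch of the Lagrangian relaxation scheme described above (single-unit slot capacities, direct-search subproblems instead of the paper's dynamic programming, and a crude repair heuristic; all data are synthetic):

    import numpy as np

    rng = np.random.default_rng(4)
    n_jobs, horizon = 5, 8
    cost = rng.uniform(1, 10, size=(n_jobs, horizon))   # cost[j, t]: job j starts in slot t
                                                        # (e.g. tardiness + inventory penalties)
    lam = np.zeros(horizon)                             # multipliers on "<= 1 job per slot"

    for it in range(100):
        # Relaxed problem decomposes by job: each picks its cheapest slot.
        picks = np.argmin(cost + lam, axis=1)
        load = np.bincount(picks, minlength=horizon)
        g = load - 1                                    # subgradient of the dual function
        lam = np.maximum(0.0, lam + (1.0 / (it + 1)) * g)

    # Simple repair heuristic: resolve remaining slot conflicts greedily by cost.
    schedule = {}
    for j in np.argsort(cost[np.arange(n_jobs), picks]):
        t = picks[j]
        while t in schedule.values():
            t = (t + 1) % horizon
        schedule[j] = t
    print(schedule)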
NASA Astrophysics Data System (ADS)
Abellán-Nebot, J. V.; Liu, J.; Romero, F.
2009-11-01
The State Space modelling approach has recently been proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation, without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, cutting-tool deflections, etc. This paper shows the limitations of the current State Space model through an experimental case study where the effects of spindle thermal expansion, cutting-tool flank wear and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations and its potential benefits.
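For reference, the standard form of the State Space (stream-of-variation) model being extended can be written as (notation is ours):

    \[
      \mathbf{x}_k = \mathbf{A}_{k-1}\,\mathbf{x}_{k-1} + \mathbf{B}_k\,\mathbf{u}_k + \mathbf{w}_k,
      \qquad
      \mathbf{y}_k = \mathbf{C}_k\,\mathbf{x}_k + \mathbf{v}_k,
    \]

where x_k collects the part deviation state after stage k, u_k the fixture and datum errors introduced at stage k, y_k the measured features, and w_k, v_k the unmodelled disturbances and measurement noise. The paper's point is that u_k should additionally carry operation variations such as thermal distortion and tool wear.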
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jimenez, O.; Roa, Luis; Delgado, A.
We study the probabilistic cloning of equidistant states. These states are such that the inner product between them is a complex constant or its conjugate. Thereby, it is possible to study their cloning in a simple way. In particular, we are interested in the behavior of the cloning probability as a function of the phase of the overlap among the involved states. We show that for certain families of equidistant states Duan and Guo's cloning machine leads to cloning probabilities lower than the optimal unambiguous discrimination probability of equidistant states. We propose an alternative cloning machine whose cloning probability is higher than or equal to the optimal unambiguous discrimination probability for any family of equidistant states. Both machines achieve the same probability for equidistant states whose inner product is a positive real number.
Crowdsourcing: A Primer and Its implications for Systems Engineering
2012-08-01
detailing areas to be improved within current crowdsourcing frameworks. Finally, an agent-based simulation using machine learning techniques is defined, preliminary results are presented, and future research directions are described.
Comparative study of state-of-the-art myoelectric controllers for multigrasp prosthetic hands.
Segil, Jacob L; Controzzi, Marco; Weir, Richard F ff; Cipriani, Christian
2014-01-01
A myoelectric controller should provide an intuitive and effective human-machine interface that deciphers user intent in real-time and is robust enough to operate in daily life. Many myoelectric control architectures have been developed, including pattern recognition systems, finite state machines, and more recently, postural control schemes. Here, we present a comparative study of two types of finite state machines and a postural control scheme using both virtual and physical assessment procedures with seven nondisabled subjects. The Southampton Hand Assessment Procedure (SHAP) was used in order to compare the effectiveness of the controllers during activities of daily living using a multigrasp artificial hand. Also, a virtual hand posture matching task was used to compare the controllers when reproducing six target postures. The performance when using the postural control scheme was significantly better (p < 0.05) than the finite state machines during the physical assessment when comparing within-subject averages using the SHAP percent difference metric. The virtual assessment results described significantly greater completion rates (97% and 99%) for the finite state machines, but the movement time tended to be faster (2.7 s) for the postural control scheme. Our results substantiate that postural control schemes rival other state-of-the-art myoelectric controllers.
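A toy finite-state-machine controller of the kind compared in the study (the states, triggers and thresholds are invented for illustration; real controllers decode calibrated EMG envelopes):

    class GraspFSM:
        """Toy finite-state controller: a co-contraction pulse cycles through
        grasp states; single-muscle activity drives the selected grasp.
        States and the 0.6 threshold are illustrative only."""
        STATES = ["open", "power_grasp", "pinch", "tripod"]

        def __init__(self):
            self.idx = 0

        def update(self, flexor, extensor, thresh=0.6):
            if flexor > thresh and extensor > thresh:     # co-contraction: switch grasp
                self.idx = (self.idx + 1) % len(self.STATES)
                return f"switch:{self.STATES[self.idx]}"
            if flexor > thresh:
                return f"close:{self.STATES[self.idx]}"
            if extensor > thresh:
                return f"open:{self.STATES[self.idx]}"
            return f"hold:{self.STATES[self.idx]}"

    fsm = GraspFSM()
    print(fsm.update(0.8, 0.7))   # co-contraction switches to the next grasp
    print(fsm.update(0.9, 0.1))   # flexion closes the selected grasp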
Chiral topological phases from artificial neural networks
NASA Astrophysics Data System (ADS)
Kaubruegger, Raphael; Pastori, Lorenzo; Budich, Jan Carl
2018-05-01
Motivated by recent progress in applying techniques from the field of artificial neural networks (ANNs) to quantum many-body physics, we investigate to what extent the flexibility of ANNs can be used to efficiently study systems that host chiral topological phases such as fractional quantum Hall (FQH) phases. With benchmark examples, we demonstrate that training ANNs of restricted Boltzmann machine type in the framework of variational Monte Carlo can numerically solve FQH problems to good approximation. Furthermore, we show by explicit construction how n -body correlations can be kept at an exact level with ANN wave functions exhibiting polynomial scaling with power n in system size. Using this construction, we analytically represent the paradigmatic Laughlin wave function as an ANN state.
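The restricted Boltzmann machine ansatz used in such studies assigns each spin configuration an amplitude that can be evaluated in a few lines (real parameters below for brevity; chiral phases generally require complex ones):

    import numpy as np

    def rbm_amplitude(s, a, b, W):
        """Unnormalised RBM wave-function amplitude for a spin configuration
        s in {-1,+1}^n after tracing out the hidden units:
        psi(s) = exp(a.s) * prod_j 2*cosh(b_j + (W s)_j)."""
        theta = b + W @ s
        return np.exp(a @ s) * np.prod(2 * np.cosh(theta))

    rng = np.random.default_rng(5)
    n_vis, n_hid = 6, 4
    a = rng.normal(scale=0.1, size=n_vis)
    b = rng.normal(scale=0.1, size=n_hid)
    W = rng.normal(scale=0.1, size=(n_hid, n_vis))
    s = rng.choice([-1, 1], size=n_vis)
    print(rbm_amplitude(s, a, b, W))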
A tri-fold hybrid classification approach for diagnostics with unexampled faulty states
NASA Astrophysics Data System (ADS)
Tamilselvan, Prasanna; Wang, Pingfeng
2015-01-01
System health diagnostics provides diversified benefits such as improved safety, improved reliability and reduced costs for the operation and maintenance of engineered systems. Successful health diagnostics requires knowledge of system failures. However, with increasing system complexity, it is extraordinarily difficult to have a well-tested system such that all potential faulty states can be realized and studied at the product testing stage. Thus, real-time health diagnostics requires automatic detection of unexampled system faulty states based upon sensory data to avoid sudden catastrophic system failures. This paper presents a tri-fold hybrid classification (THC) approach for structural health diagnosis with unexampled health states (UHS), which comprises preliminary UHS identification using a new thresholded Mahalanobis distance (TMD) classifier, UHS diagnostics using a two-class support vector machine (SVM) classifier, and exampled health states diagnostics using a multi-class SVM classifier. The proposed THC approach, which takes advantage of both TMD and SVM-based classification techniques, is able to identify and isolate the unexampled faulty states by interactively detecting the deviation of sensory data from the exampled health states and forming new ones autonomously. The proposed THC approach is further extended to a generic framework for health diagnostics problems with unexampled faulty states and demonstrated with health diagnostics case studies for power transformers and rolling bearings.
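A minimal sketch of the first-stage thresholded Mahalanobis distance check (the threshold and data are illustrative; in the full THC pipeline, samples flagged UHS would then pass to the two-class SVM stage):

    import numpy as np

    class TMDClassifier:
        """Sketch of a thresholded Mahalanobis distance check: a sample whose
        distance to every exampled health state exceeds the threshold is
        flagged as an unexampled health state (UHS)."""
        def __init__(self, threshold=3.0):
            self.threshold = threshold
            self.classes = {}

        def fit(self, X_by_class):
            for label, X in X_by_class.items():
                mu = X.mean(axis=0)
                cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
                self.classes[label] = (mu, cov_inv)
            return self

        def predict(self, x):
            d = {lbl: float(np.sqrt((x - mu) @ ci @ (x - mu)))
                 for lbl, (mu, ci) in self.classes.items()}
            label = min(d, key=d.get)
            return "UHS" if d[label] > self.threshold else label

    rng = np.random.default_rng(6)
    clf = TMDClassifier().fit({"healthy": rng.normal(0, 1, (100, 3)),
                               "fault_A": rng.normal(4, 1, (100, 3))})
    print(clf.predict(np.array([0.1, -0.2, 0.3])), clf.predict(np.array([20.0, 20.0, 20.0])))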
Machine learning in the string landscape
NASA Astrophysics Data System (ADS)
Carifio, Jonathan; Halverson, James; Krioukov, Dmitri; Nelson, Brent D.
2017-09-01
We utilize machine learning to study the string landscape. Deep data dives and conjecture generation are proposed as useful frameworks for utilizing machine learning in the landscape, and examples of each are presented. A decision tree accurately predicts the number of weak Fano toric threefolds arising from reflexive polytopes, each of which determines a smooth F-theory compactification, and linear regression generates a previously proven conjecture for the gauge group rank in an ensemble of 4/3 × 2.96 × 10^755 F-theory compactifications. Logistic regression generates a new conjecture for when E6 arises in the large ensemble of F-theory compactifications, which is then rigorously proven. This result may be relevant for the appearance of visible sectors in the ensemble. Through conjecture generation, machine learning is useful not only for numerics, but also for rigorous results.
Enhanced networked server management with random remote backups
NASA Astrophysics Data System (ADS)
Kim, Song-Kyoo
2003-08-01
In this paper, the model is focused on available server management in network environments. The (remote) backup servers are hooked up by VPN (Virtual Private Network) and replace broken main servers immediately. A virtual private network (VPN) is a way to use a public network infrastructure and hooks up long-distance servers within a single network infrastructure. The servers can be represented as "machines", and the system then deals with unreliable main machines and random auxiliary spare (remote backup) machines. When the system performs a mandatory routine maintenance, auxiliary machines are used for backups during idle periods. Unlike other existing models, the availability of auxiliary machines changes for each activation in this enhanced model. Analytically tractable results are obtained by using several mathematical techniques, and the results are demonstrated in the framework of optimized networked server allocation problems.
Sequence invariant state machines
NASA Technical Reports Server (NTRS)
Whitaker, S.; Manjunath, S.
1990-01-01
A synthesis method and new VLSI architecture are introduced to realize sequential circuits that have the ability to implement any state machine having N states and m inputs, regardless of the actual sequence specified in the flow table. A design method is proposed that utilizes BTS logic to implement regular and dense circuits. A given state sequence can be programmed with power supply connections or dynamically reallocated if stored in a register. Arbitrary flow table sequences can be modified or programmed to dynamically alter the function of the machine. This allows VLSI controllers to be designed with the programmability of a general purpose processor but with the compact size and performance of dedicated logic.
Software framework for prognostic health monitoring of ocean-based power generation
NASA Astrophysics Data System (ADS)
Bowren, Mark
On August 5, 2010, the U.S. Department of Energy (DOE) designated the Center for Ocean Energy Technology (COET) at Florida Atlantic University (FAU) as a national center for ocean energy research and development of prototypes for open-ocean power generation. Maintenance on ocean-based machinery can be very costly. To avoid unnecessary maintenance, it is necessary to monitor the condition of each machine in order to predict problems. This kind of prognostic health monitoring (PHM) requires a condition-based maintenance (CBM) system that supports diagnostic and prognostic analysis of large amounts of data. Research in this field led to the creation of ISO 13374 and the development of a standard open architecture for machine condition monitoring. This thesis explores an implementation of such a system for ocean-based machinery using this framework and current open-standard technologies.
NASA Astrophysics Data System (ADS)
Galiatsatos, P. G.; Tennyson, J.
2012-11-01
The most time consuming step within the framework of the UK R-matrix molecular codes is the diagonalization of the inner region Hamiltonian matrix (IRHM). Here we present the method that we follow to speed up this step. We use shared memory machines (SMM), distributed memory machines (DMM), the OpenMP directive-based parallel language, the MPI function-based parallel language, the sparse matrix diagonalizers ARPACK and PARPACK, a variation for real symmetric matrices of the official coordinate sparse matrix format and, finally, a parallel sparse matrix-vector product (PSMV). The efficient application of the previous techniques relies on two important facts: the sparsity of the matrix is large enough (more than 98%), and to get back converged results we need only a small part of the matrix spectrum.
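The same ARPACK machinery is exposed in Python through scipy.sparse.linalg.eigsh, which is convenient for sketching the approach (a random symmetric matrix stands in for the actual Hamiltonian):

    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import eigsh

    # A large, >98%-sparse real symmetric stand-in for an inner-region Hamiltonian.
    A = sparse_random(2000, 2000, density=0.005, random_state=7, format="csr")
    A = (A + A.T) * 0.5                       # symmetrise

    # ARPACK (here via SciPy's eigsh wrapper) returns only the small part of
    # the spectrum that is actually needed, e.g. the 10 lowest eigenvalues.
    vals, vecs = eigsh(A, k=10, which="SA")
    print(np.sort(vals))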
Teaching an Old Log New Tricks with Machine Learning.
Schnell, Krista; Puri, Colin; Mahler, Paul; Dukatz, Carl
2014-03-01
To most people, the log file would not be considered an exciting area in technology today. However, these relatively benign, slowly growing data sources can drive large business transformations when combined with modern-day analytics. Accenture Technology Labs has built a new framework that helps to expand existing vendor solutions to create new methods of gaining insights from these benevolent information springs. This framework provides a systematic and effective machine-learning mechanism to understand, analyze, and visualize heterogeneous log files. These techniques enable an automated approach to analyzing log content in real time, learning relevant behaviors, and creating actionable insights applicable in traditionally reactive situations. Using this approach, companies can now tap into a wealth of knowledge residing in log file data that is currently being collected but underutilized because of its overwhelming variety and volume. By using log files as an important data input into the larger enterprise data supply chain, businesses have the opportunity to enhance their current operational log management solution and generate entirely new business insights-no longer limited to the realm of reactive IT management, but extending from proactive product improvement to defense from attacks. As we will discuss, this solution has immediate relevance in the telecommunications and security industries. However, the most forward-looking companies can take it even further. How? By thinking beyond the log file and applying the same machine-learning framework to other log file use cases (including logistics, social media, and consumer behavior) and any other transactional data source.
Taber, Daniel R; Chriqui, Jamie F; Vuillaume, Renee; Kelder, Steven H; Chaloupka, Frank J
2015-07-27
Across the United States, many states have actively banned the sale of soda in high schools, and evidence suggests that students' in-school access to soda has declined as a result. However, schools may be substituting soda with other sugar-sweetened beverages (SSBs), and national trends indicate that adolescents are consuming more sports drinks and energy drinks. This study examined whether students consumed more non-soda SSBs in states that banned the sale of soda in school. Student data on consumption of various SSBs and in-school access to vending machines that sold SSBs were obtained from the National Youth Physical Activity and Nutrition Study (NYPANS), conducted in 2010. Student data were linked to state laws regarding the sale of soda in school in 2010. Students were cross-classified based on their access to vending machines and whether their state banned soda in school, creating 4 comparison groups. Zero-inflated negative binomial models were used to compare these 4 groups with respect to students’ self-reported consumption of diet soda, sports drinks, energy drinks, coffee/tea, or other SSBs. Students who had access to vending machines in a state that did not ban soda were the reference group. Models were adjusted for race/ethnicity, sex, grade, home food access, state median income, and U.S. Census region. Students consumed more servings of sports drinks, energy drinks, coffee/tea, and other SSBs if they resided in a state that banned soda in school but attended a school with vending machines that sold other SSBs. Similar results were observed where schools did not have vending machines but the state allowed soda to be sold in school. Intake was generally not elevated where both states and schools limited SSB availability – i.e., states banned soda and schools did not have SSB vending machines. State laws that ban soda but allow other SSBs may lead students to substitute other non-soda SSBs. Additional longitudinal research is needed to confirm this. Elevated SSB intake was not observed when both states and schools took steps to remove SSBs from school.
2015-01-01
Background Across the United States, many states have actively banned the sale of soda in high schools, and evidence suggests that students’ in-school access to soda has declined as a result. However, schools may be substituting soda with other sugar-sweetened beverages (SSBs), and national trends indicate that adolescents are consuming more sports drinks and energy drinks. This study examined whether students consumed more non-soda SSBs in states that banned the sale of soda in school. Methods Student data on consumption of various SSBs and in-school access to vending machines that sold SSBs were obtained from the National Youth Physical Activity and Nutrition Study (NYPANS), conducted in 2010. Student data were linked to state laws regarding the sale of soda in school in 2010. Students were cross-classified based on their access to vending machines and whether their state banned soda in school, creating 4 comparison groups. Zero-inflated negative binomial models were used to compare these 4 groups with respect to students’ self-reported consumption of diet soda, sports drinks, energy drinks, coffee/tea, or other SSBs. Students who had access to vending machines in a state that did not ban soda were the reference group. Models were adjusted for race/ethnicity, sex, grade, home food access, state median income, and U.S. Census region. Results Students consumed more servings of sports drinks, energy drinks, coffee/tea, and other SSBs if they resided in a state that banned soda in school but attended a school with vending machines that sold other SSBs. Similar results were observed where schools did not have vending machines but the state allowed soda to be sold in school. Intake was generally not elevated where both states and schools limited SSB availability – i.e., states banned soda and schools did not have SSB vending machines. Conclusion State laws that ban soda but allow other SSBs may lead students to substitute other non-soda SSBs. Additional longitudinal research is needed to confirm this. Elevated SSB intake was not observed when both states and schools took steps to remove SSBs from school. PMID:26221969
NASA Technical Reports Server (NTRS)
Wheeler, Kevin; Timucin, Dogan; Rabbette, Maura; Curry, Charles; Allan, Mark; Lvov, Nikolay; Clanton, Sam; Pilewskie, Peter
2002-01-01
The goal of visual inference programming is to develop a software framework for data analysis and to provide machine learning algorithms for interactive data exploration and visualization. The topics include: 1) Intelligent Data Understanding (IDU) framework; 2) Challenge problems; 3) What's new here; 4) Framework features; 5) Wiring diagram; 6) Generated script; 7) Results of script; 8) Initial algorithms; 9) Independent Component Analysis for instrument diagnosis; 10) Output sensory mapping virtual joystick; 11) Output sensory mapping typing; 12) Closed-loop feedback mu-rhythm control; 13) Closed-loop training; 14) Data sources; and 15) Algorithms. This paper is in viewgraph form.
Quantifying matrix product state
NASA Astrophysics Data System (ADS)
Bhatia, Amandeep Singh; Kumar, Ajay
2018-03-01
Motivated by the concept of quantum finite-state machines, we have investigated their relation with matrix product states of quantum spin systems. Matrix product states play a crucial role in the context of quantum information processing and are considered a valuable asset for quantum information and communication purposes. They are an effective way to represent states of entangled systems. In this paper, we have designed quantum finite-state machines for one-dimensional matrix product state representations of quantum spin systems.
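For concreteness, the amplitude a matrix product state assigns to a basis configuration is just a chain of matrix products, e.g.:

    import numpy as np

    def mps_amplitude(tensors, config):
        """Amplitude <s_1...s_n|psi> of a matrix product state: contract the
        chain of matrices A[i][s_i] selected by the spin configuration."""
        m = tensors[0][config[0]]              # first site: a 1 x D row
        for A, s in zip(tensors[1:], config[1:]):
            m = m @ A[s]
        return m.item()                        # 1 x 1 matrix -> scalar

    rng = np.random.default_rng(8)
    D, n = 3, 5                                # bond dimension, chain length
    tensors = ([rng.normal(size=(2, 1, D))] +
               [rng.normal(size=(2, D, D)) for _ in range(n - 2)] +
               [rng.normal(size=(2, D, 1))])
    config = rng.integers(0, 2, size=n)        # a spin-1/2 basis state
    print(mps_amplitude(tensors, config))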
Alfadda, Sara A
2014-01-01
To use a novel approach to measure the amount of vertical marginal gap in computer numeric controlled (CNC)-milled titanium frameworks and conventional cast frameworks. Ten cast frameworks were fabricated on the mandibular master casts of 10 patients. Then, 10 CNC-milled titanium frameworks were fabricated by laser scanning the cast frameworks. The vertical marginal gap was measured and analyzed using the Contura-G2 coordinate measuring machine and special computer software. The CNC-milled titanium frameworks showed an overall reduced mean vertical gap compared with the cast frameworks in all five analogs. This difference was highly statistically significant in the distal analogs. The largest mean gap in the cast frameworks was recorded in the most distal analogs, and the smallest in the middle analog. Neither of the two types of frameworks provided a completely gap-free superstructure. The CNC-milled titanium frameworks showed a significantly smaller vertical marginal gap than the cast frameworks.
featsel: A framework for benchmarking of feature selection algorithms and cost functions
NASA Astrophysics Data System (ADS)
Reis, Marcelo S.; Estrela, Gustavo; Ferreira, Carlos Eduardo; Barrera, Junior
In this paper, we introduce featsel, a framework for benchmarking of feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency purposes. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. Besides, this framework already comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples, in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.
Guo, Lei; Abbosh, Amin
2018-05-01
For stroke patients to have any chance of survival, the stroke type must be classified so that medication can be administered within a few hours of the onset of symptoms. In this paper, a microwave-based stroke localization and classification framework is proposed. It is based on microwave tomography, k-means clustering, and a support vector machine (SVM) method. The dielectric profile of the brain is first calculated using the Born iterative method, and the amplitude of the dielectric profile is then taken as the input to k-means clustering. The cluster is selected as the feature vector for constructing and testing the SVM. A database of MRI-derived realistic head phantoms at different signal-to-noise ratios is used in the classification procedure. The performance of the proposed framework is evaluated using the receiver operating characteristic (ROC) curve. The results based on a two-dimensional framework show that 88% classification accuracy, with a sensitivity of 91% and a specificity of 87%, can be achieved. Bioelectromagnetics. 39:312-324, 2018. © 2018 Wiley Periodicals, Inc.
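The shape of this pipeline (k-means features feeding an SVM) can be sketched with scikit-learn; the data below are random placeholders, and the feature construction is a simplified stand-in for the paper's dielectric-profile clustering.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Random placeholder data: 40 reconstructed head profiles, binary labels.
rng = np.random.default_rng(1)
profiles = rng.random((40, 256))
labels = rng.integers(0, 2, 40)           # 0 = ischaemic, 1 = haemorrhagic

def cluster_features(profile, k=3):
    # Cluster the amplitude values of one profile; use the sorted cluster
    # centres as a fixed-length feature vector for the classifier.
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(profile.reshape(-1, 1))
    return np.sort(km.cluster_centers_.ravel())

X = np.array([cluster_features(p) for p in profiles])
clf = SVC(kernel="rbf").fit(X[:30], labels[:30])
print("held-out accuracy:", clf.score(X[30:], labels[30:]))
```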
Amplifying human ability through autonomics and machine learning in IMPACT
NASA Astrophysics Data System (ADS)
Dzieciuch, Iryna; Reeder, John; Gutzwiller, Robert; Gustafson, Eric; Coronado, Braulio; Martinez, Luis; Croft, Bryan; Lange, Douglas S.
2017-05-01
Amplifying human ability for controlling complex environments featuring autonomous units can be aided by learned models of human and system performance. In developing a command and control system that allows a small number of people to control a large number of autonomous teams, we employ an autonomics framework to manage the networks that represent mission plans and the networks that are composed of human controllers and their autonomous assistants. Machine learning allows us to build models of human and system performance useful for monitoring plans and managing human attention and task loads. Machine learning also aids in the development of tactics that human supervisors can successfully monitor through the command and control system.
Workshop on Fielded Applications of Machine Learning
1994-05-11
This report summarizes the talks presented at the Workshop on Fielded Applications of Machine Learning, and draws some initial conclusions about the state of machine learning and its potential for solving real-world problems.
Research on intelligent machine self-perception method based on LSTM
NASA Astrophysics Data System (ADS)
Wang, Qiang; Cheng, Tao
2018-05-01
In this paper, we exploit the advantages of LSTM in feature extraction and in processing high-dimensional, complex nonlinear data, and apply it to the autonomous perception of intelligent machines. Compared with a traditional multi-layer neural network, this model has memory and can handle time-series information of arbitrary length. Since the multi-physical-domain signals of processing machines follow a temporal order, with contextual relationships between successive states, using this deep learning method to realize the self-perception of intelligent processing machines offers strong versatility and adaptability. The experimental results show that the proposed method clearly improves sensing accuracy under the various working conditions of the intelligent machine, and that the algorithm can effectively support an intelligent processing machine in realizing self-perception.
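A minimal PyTorch sketch of the idea described (the paper does not publish code): an LSTM consumes multi-channel machine signals over time, and its final hidden state is mapped to a working-condition class. All layer sizes and channel counts here are assumptions.

```python
import torch
import torch.nn as nn

class PerceptionLSTM(nn.Module):
    """LSTM over multi-channel machine signals -> machine-state logits."""
    def __init__(self, n_channels=6, hidden=64, n_states=4):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_states)

    def forward(self, x):                 # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)          # h: (1, batch, hidden)
        return self.head(h[-1])           # logits over machine states

model = PerceptionLSTM()
signals = torch.randn(8, 200, 6)          # 8 sequences, 200 time steps each
print(model(signals).shape)               # torch.Size([8, 4])
```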
Tomography and generative training with quantum Boltzmann machines
NASA Astrophysics Data System (ADS)
Kieferová, Mária; Wiebe, Nathan
2017-12-01
The promise of quantum neural nets, which utilize quantum effects to model complex data sets, has made their development an aspirational goal for quantum machine learning and quantum computing in general. Here we provide methods of training quantum Boltzmann machines. Our work generalizes existing methods and provides additional approaches for training quantum neural networks that compare favorably to existing methods. We further demonstrate that quantum Boltzmann machines enable a form of partial quantum state tomography that further provides a generative model for the input quantum state. Classical Boltzmann machines are incapable of this. This verifies the long-conjectured connection between tomography and quantum machine learning. Finally, we prove that classical computers cannot simulate our training process in general unless BQP=BPP , provide lower bounds on the complexity of the training procedures and numerically investigate training for small nonstoquastic Hamiltonians.
Millennial Filipino Student Engagement Analyzer Using Facial Feature Classification
NASA Astrophysics Data System (ADS)
Manseras, R.; Eugenio, F.; Palaoag, T.
2018-03-01
Millennials are on everyone's lips and are a target market for many companies nowadays. In the Philippines, they comprise one third of the total population, and most of them are still in school. A good education system is important for preparing this generation for better careers, and a good education system requires quality instruction as one of its input component indicators. In a classroom environment, teachers use facial features to gauge the affect state of the class. Emerging technologies such as affective computing are among today's trends for improving the quality of instruction delivery. Together with computer vision, affective computing can be used to analyze the affect states of students and improve instruction. This paper proposes a system for classifying student engagement using facial features. Identifying the affect state, specifically Millennial Filipino student engagement, is one of the main priorities of every educator, and this directed the authors to develop a tool to assess the percentage of engagement. A multiple-face-detection framework using Face API was employed to detect as many student faces as possible and gauge the current engagement percentage of the whole class. A binary classifier model using a Support Vector Machine (SVM) was set in the conceptual framework of this study. To assess the accuracy of this model, SVM was compared against two of the most widely used binary classifiers. Results show that SVM bested the Random Forest and Naive Bayes algorithms in most of the experiments across the different test datasets.
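The comparison protocol can be sketched with scikit-learn; the synthetic data below stand in for the study's facial-feature vectors, so the numbers it prints say nothing about the paper's actual results.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Same binary labels, three classifiers, compared by cross-validated accuracy.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
for name, clf in [("SVM", SVC()),
                  ("RandomForest", RandomForestClassifier(random_state=0)),
                  ("NaiveBayes", GaussianNB())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```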
A Machine Learning Concept for DTN Routing
NASA Technical Reports Server (NTRS)
Dudukovich, Rachel; Hylton, Alan; Papachristou, Christos
2017-01-01
This paper discusses the concept and architecture of a machine learning based router for delay tolerant space networks. The techniques of reinforcement learning and Bayesian learning are used to supplement the routing decisions of the popular Contact Graph Routing algorithm. An introduction to the concepts of Contact Graph Routing, Q-routing and Naive Bayes classification are given. The development of an architecture for a cross-layer feedback framework for DTN (Delay-Tolerant Networking) protocols is discussed. Finally, initial simulation setup and results are given.
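The Q-routing component referenced above is commonly formulated as the Boyan-Littman update; a hedged sketch under that assumption follows (this is not the paper's implementation, which uses the technique to supplement Contact Graph Routing).

```python
# Q[x][y][d] estimates node x's delivery time to destination d via
# neighbour y; the update moves it toward the observed queue + transit
# time plus y's own best estimate.
def q_routing_update(Q, x, y, d, transit, queue, alpha=0.5):
    t_best = min(Q[y][z][d] for z in Q[y])   # y's best estimate to d
    target = queue + transit + t_best
    Q[x][y][d] += alpha * (target - Q[x][y][d])

# Toy two-hop topology: node "a" routes to destination "c" via "b".
Q = {"a": {"b": {"c": 10.0}}, "b": {"c": {"c": 2.0}}}
q_routing_update(Q, "a", "b", "c", transit=1.0, queue=0.5)
print(Q["a"]["b"]["c"])   # moves from 10.0 toward the target 3.5
```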
A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.
Suk, Heung-Il; Lee, Seong-Whan
2013-02-01
As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered as a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of the probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method by extending a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing the results and its success on three public databases.
Quantum reinforcement learning.
Dong, Daoyi; Chen, Chunlin; Li, Hanxiong; Tarn, Tzyh-Jong
2008-10-01
The key approaches for machine learning, particularly learning in unknown probabilistic environments, are new representations and computation mechanisms. In this paper, a novel quantum reinforcement learning (QRL) method is proposed by combining quantum theory and reinforcement learning (RL). Inspired by the state superposition principle and quantum parallelism, a framework of a value-updating algorithm is introduced. The state (action) in traditional RL is identified as the eigen state (eigen action) in QRL. The state (action) set can be represented with a quantum superposition state, and the eigen state (eigen action) can be obtained by randomly observing the simulated quantum state according to the collapse postulate of quantum measurement. The probability of the eigen action is determined by the probability amplitude, which is updated in parallel according to rewards. Some related characteristics of QRL such as convergence, optimality, and balancing between exploration and exploitation are also analyzed, which shows that this approach makes a good tradeoff between exploration and exploitation using the probability amplitude and can speed up learning through quantum parallelism. To evaluate the performance and practicability of QRL, several simulated experiments are given, and the results demonstrate the effectiveness and superiority of the QRL algorithm for some complex problems. This paper is also an effective exploration of the application of quantum computation to artificial intelligence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olivares, Stefano
We investigate the performance of a selective cloning machine based on linear optical elements and Gaussian measurements, which allows one to clone at will one of the two incoming input states. This machine is a complete generalization of a 1→2 cloning scheme demonstrated by Andersen et al. [Phys. Rev. Lett. 94, 240503 (2005)]. The input-output fidelity is studied for a generic Gaussian input state, and the effect of nonunit quantum efficiency is also taken into account. We show that, if the states to be cloned are squeezed states with known squeezing parameter, then the fidelity can be enhanced using a third suitable squeezed state during the final stage of the cloning process. A binary communication protocol based on the selective cloning machine is also discussed.
Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models
NASA Astrophysics Data System (ADS)
Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro
2017-10-01
Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.
Imbalance aware lithography hotspot detection: a deep learning approach
NASA Astrophysics Data System (ADS)
Yang, Haoyu; Luo, Luyang; Su, Jing; Lin, Chenxi; Yu, Bei
2017-03-01
With the advancement of VLSI technology nodes, lithographic hotspots caused by light diffraction have become a serious problem affecting manufacturing yield. Lithography hotspot detection at the post-OPC stage is imperative to check potential circuit failures when transferring designed patterns onto silicon wafers. Although conventional lithography hotspot detection methods, such as machine learning, have achieved satisfactory performance, with the extreme scaling of transistor feature sizes and increasingly complicated layout patterns, conventional methodologies may suffer from performance degradation. For example, manual or ad hoc feature extraction in a machine learning framework may lose important information when predicting potential errors in ultra-large-scale integrated circuit masks. In this paper, we present a deep convolutional neural network (CNN) targeting representative feature learning in lithography hotspot detection. We carefully analyze the impact and effectiveness of different CNN hyper-parameters, through which a hotspot-detection-oriented neural network model is established. Because hotspot patterns are always minorities in VLSI mask design, the training data set is highly imbalanced. In this situation, a neural network is no longer reliable, because a trained model with high classification accuracy may still suffer from a high false negative rate (missing hotspots), which is fatal in hotspot detection problems. To address the imbalance problem, we further apply minority upsampling and random mirror flipping before training the network. Experimental results show that our proposed neural network model achieves highly comparable or better performance on the ICCAD 2012 contest benchmark compared to state-of-the-art hotspot detectors based on deep or representative machine learning.
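The two imbalance countermeasures named in the abstract, minority upsampling and random mirror flipping, can be sketched as a preprocessing step; the array shapes and the 50/50 target below are assumptions, not the paper's settings.

```python
import numpy as np

def balance_and_augment(clips, labels, rng=np.random.default_rng(0)):
    """clips: (N, H, W) layout images; labels: 1 = hotspot (minority class)."""
    minority = np.flatnonzero(labels == 1)
    majority = np.flatnonzero(labels == 0)
    # Upsample the minority class with replacement to match the majority.
    picks = rng.choice(minority, size=len(majority), replace=True)
    idx = np.concatenate([majority, picks])
    out = clips[idx].copy()
    # Randomly mirror-flip about half of the clips horizontally.
    flip = rng.random(len(out)) < 0.5
    out[flip] = out[flip, :, ::-1]
    return out, labels[idx]

clips = np.zeros((100, 32, 32))
labels = np.r_[np.zeros(90), np.ones(10)].astype(int)
X, y = balance_and_augment(clips, labels)
print(X.shape, y.mean())                  # balanced 50/50 training set
```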
Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.
Gao, Wei; Kwong, Sam; Jia, Yuheng
2017-08-25
In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model, and this learning-based R-D model is proposed as a way to overcome the legacy "chicken-and-egg" dilemma in video coding. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted so that inter frames have more bit resources, maintaining smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method can achieve much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limits of the FixedQP method.
Data-Driven Information Extraction from Chinese Electronic Medical Records
Zhao, Tianwan; Ge, Chen; Gao, Weiguo; Wei, Jia; Zhu, Kenny Q.
2015-01-01
Objective This study aims to propose a data-driven framework that takes unstructured free text narratives in Chinese Electronic Medical Records (EMRs) as input and converts them into structured time-event-description triples, where the description is either an elaboration or an outcome of the medical event. Materials and Methods Our framework uses a hybrid approach. It consists of constructing cross-domain core medical lexica, an unsupervised, iterative algorithm to accrue more accurate terms into the lexica, rules to address Chinese writing conventions and temporal descriptors, and a Support Vector Machine (SVM) algorithm that innovatively utilizes Normalized Google Distance (NGD) to estimate the correlation between medical events and their descriptions. Results The effectiveness of the framework was demonstrated with a dataset of 24,817 de-identified Chinese EMRs. The cross-domain medical lexica were capable of recognizing terms with an F1-score of 0.896. 98.5% of recorded medical events were linked to temporal descriptors. The NGD SVM description-event matching achieved an F1-score of 0.874. The end-to-end time-event-description extraction of our framework achieved an F1-score of 0.846. Discussion In terms of named entity recognition, the proposed framework outperforms state-of-the-art supervised learning algorithms (F1-score: 0.896 vs. 0.886). In event-description association, the NGD SVM is superior to SVM using only local context and semantic features (F1-score: 0.874 vs. 0.838). Conclusions The framework is data-driven, weakly supervised, and robust against the variations and noises that tend to occur in a large corpus. It addresses Chinese medical writing conventions and variations in writing styles through patterns used for discovering new terms and rules for updating the lexica. PMID:26295801
Data-Driven Information Extraction from Chinese Electronic Medical Records.
Xu, Dong; Zhang, Meizhuo; Zhao, Tianwan; Ge, Chen; Gao, Weiguo; Wei, Jia; Zhu, Kenny Q
2015-01-01
This study aims to propose a data-driven framework that takes unstructured free text narratives in Chinese Electronic Medical Records (EMRs) as input and converts them into structured time-event-description triples, where the description is either an elaboration or an outcome of the medical event. Our framework uses a hybrid approach. It consists of constructing cross-domain core medical lexica, an unsupervised, iterative algorithm to accrue more accurate terms into the lexica, rules to address Chinese writing conventions and temporal descriptors, and a Support Vector Machine (SVM) algorithm that innovatively utilizes Normalized Google Distance (NGD) to estimate the correlation between medical events and their descriptions. The effectiveness of the framework was demonstrated with a dataset of 24,817 de-identified Chinese EMRs. The cross-domain medical lexica were capable of recognizing terms with an F1-score of 0.896. 98.5% of recorded medical events were linked to temporal descriptors. The NGD SVM description-event matching achieved an F1-score of 0.874. The end-to-end time-event-description extraction of our framework achieved an F1-score of 0.846. In terms of named entity recognition, the proposed framework outperforms state-of-the-art supervised learning algorithms (F1-score: 0.896 vs. 0.886). In event-description association, the NGD SVM is superior to SVM using only local context and semantic features (F1-score: 0.874 vs. 0.838). The framework is data-driven, weakly supervised, and robust against the variations and noises that tend to occur in a large corpus. It addresses Chinese medical writing conventions and variations in writing styles through patterns used for discovering new terms and rules for updating the lexica.
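The Normalized Google Distance used to score event-description pairs has a standard definition, shown below with f(x) denoting occurrence counts and N the corpus size; the paper's exact estimator may differ in detail.

```python
from math import log

# Standard NGD: terms that co-occur often score near 0; unrelated terms
# score higher. f_x, f_y are individual counts, f_xy the co-occurrence
# count, and N the total corpus size.
def ngd(f_x, f_y, f_xy, N):
    lx, ly, lxy = log(f_x), log(f_y), log(f_xy)
    return (max(lx, ly) - lxy) / (log(N) - min(lx, ly))

print(ngd(f_x=9000, f_y=8000, f_xy=6000, N=10**8))   # small: related terms
```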
Machine learning patterns for neuroimaging-genetic studies in the cloud.
Da Mota, Benoit; Tudoran, Radu; Costan, Alexandru; Varoquaux, Gaël; Brasche, Goetz; Conrod, Patricia; Lemaitre, Herve; Paus, Tomas; Rietschel, Marcella; Frouin, Vincent; Poline, Jean-Baptiste; Antoniu, Gabriel; Thirion, Bertrand
2014-01-01
Brain imaging is a natural intermediate phenotype to understand the link between genetic information and behavior or brain pathologies risk factors. Massive efforts have been made in the last few years to acquire high-dimensional neuroimaging and genetic data on large cohorts of subjects. The statistical analysis of such data is carried out with increasingly sophisticated techniques and represents a great computational challenge. Fortunately, increasing computational power in distributed architectures can be harnessed, if new neuroinformatics infrastructures are designed and training to use these new tools is provided. Combining a MapReduce framework (TomusBLOB) with machine learning algorithms (Scikit-learn library), we design a scalable analysis tool that can deal with non-parametric statistics on high-dimensional data. End-users describe the statistical procedure to perform and can then test the model on their own computers before running the very same code in the cloud at a larger scale. We illustrate the potential of our approach on real data with an experiment showing how the functional signal in subcortical brain regions can be significantly fit with genome-wide genotypes. This experiment demonstrates the scalability and the reliability of our framework in the cloud with a 2 weeks deployment on hundreds of virtual machines.
Dynamic Load Balancing for Grid Partitioning on a SP-2 Multiprocessor: A Framework
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Simon, Horst; Lasinski, T. A. (Technical Monitor)
1994-01-01
Computational requirements of full scale computational fluid dynamics change as computation progresses on a parallel machine. The change in computational intensity causes workload imbalance of processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a framework is presented for dynamic load balancing for CFD applications, called Jove. One processor is designated as a decision maker Jove while others are assigned to computational fluid dynamics. Processors running CFD send flags to Jove in a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while other processors continue working with the current data and load distribution. Jove goes through several steps to decide if the new data should be taken, including preliminary evaluate, partition, processor reassignment, cost evaluation, and decision. Jove running on a single IBM SP2 node has been completely implemented. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full scale grid partitioning on the target machine IBM SP2.
Dynamic Load Balancing For Grid Partitioning on a SP-2 Multiprocessor: A Framework
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Simon, Horst; Lasinski, T. A. (Technical Monitor)
1994-01-01
Computational requirements of full scale computational fluid dynamics change as computation progresses on a parallel machine. The change in computational intensity causes workload imbalance of processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a framework is presented for dynamic load balancing for CFD applications, called Jove. One processor is designated as a decision maker Jove while others are assigned to computational fluid dynamics. Processors running CFD send flags to Jove in a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while other processors continue working with the current data and load distribution. Jove goes through several steps to decide if the new data should be taken, including preliminary evaluate, partition, processor reassignment, cost evaluation, and decision. Jove running on a single IBM SP2 node has been completely implemented. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full scale grid partitioning on the target machine IBM SP2.
Computerized Hammer Sounding Interpretation for Concrete Assessment with Online Machine Learning.
Ye, Jiaxing; Kobayashi, Takumi; Iwata, Masaya; Tsuda, Hiroshi; Murakawa, Masahiro
2018-03-09
Developing efficient Artificial Intelligence (AI)-enabled systems to substitute the human role in non-destructive testing is an emerging topic of considerable interest. In this study, we propose a novel hammering response analysis system using online machine learning, which aims at achieving near-human performance in assessment of concrete structures. Current computerized hammer sounding systems commonly employ lab-scale data to validate the models. In practice, however, the response signal patterns can be far more complicated due to varying geometric shapes and materials of structures. To deal with a large variety of unseen data, we propose a sequential treatment for response characterization. More specifically, the proposed system can adaptively update itself to approach human performance in hammering sounding data interpretation. To this end, a two-stage framework has been introduced, including feature extraction and the model updating scheme. Various state-of-the-art online learning algorithms have been reviewed and evaluated for the task. To conduct experimental validation, we collected 10,940 response instances from multiple inspection sites; each sample was annotated by human experts with healthy/defective condition labels. The results demonstrated that the proposed scheme achieved favorable assessment accuracy with high efficiency and low computation load.
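The sequential-update idea can be sketched with an online linear learner; scikit-learn's partial_fit interface stands in for the paper's own feature extractor and online algorithms, and the random batches below are placeholders for annotated hammering responses.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Update the model batch by batch as new annotated responses arrive,
# instead of retraining from scratch on the full history.
rng = np.random.default_rng(0)
clf = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])                 # healthy / defective
for batch in range(5):                     # e.g., one batch per site visit
    X = rng.standard_normal((200, 40))     # stand-in response features
    y = rng.integers(0, 2, 200)
    clf.partial_fit(X, y, classes=classes if batch == 0 else None)
    print("updated on batch", batch)
```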
Semi-supervised and unsupervised extreme learning machines.
Huang, Gao; Song, Shiji; Gupta, Jatinder N D; Wu, Cheng
2014-12-01
Extreme learning machines (ELMs) have proven to be efficient and effective learning mechanisms for pattern classification and regression. However, ELMs are primarily applied to supervised learning problems. Only a few existing research papers have used ELMs to explore unlabeled data. In this paper, we extend ELMs for both semi-supervised and unsupervised tasks based on the manifold regularization, thus greatly expanding the applicability of ELMs. The key advantages of the proposed algorithms are as follows: 1) both the semi-supervised ELM (SS-ELM) and the unsupervised ELM (US-ELM) exhibit learning capability and computational efficiency of ELMs; 2) both algorithms naturally handle multiclass classification or multicluster clustering; and 3) both algorithms are inductive and can handle unseen data at test time directly. Moreover, it is shown in this paper that all the supervised, semi-supervised, and unsupervised ELMs can actually be put into a unified framework. This provides new perspectives for understanding the mechanism of random feature mapping, which is the key concept in ELM theory. Empirical study on a wide range of data sets demonstrates that the proposed algorithms are competitive with the state-of-the-art semi-supervised or unsupervised learning algorithms in terms of accuracy and efficiency.
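For readers unfamiliar with the basic ELM that SS-ELM and US-ELM extend, the core mechanism is a random hidden layer followed by a closed-form least-squares readout, as in this minimal NumPy sketch (the manifold-regularization term the paper adds is omitted).

```python
import numpy as np

def elm_fit(X, Y, n_hidden=100, rng=np.random.default_rng(0)):
    # Random, untrained hidden layer: the "random feature mapping".
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    # Closed-form readout weights via the Moore-Penrose pseudoinverse.
    beta = np.linalg.pinv(H) @ Y
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = np.random.default_rng(1).random((50, 8))
Y = X.sum(axis=1, keepdims=True)           # toy regression target
W, b, beta = elm_fit(X, Y)
print(np.abs(elm_predict(X, W, b, beta) - Y).mean())   # small training error
```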
ROOFN3D: Deep Learning Training Data for 3d Building Reconstruction
NASA Astrophysics Data System (ADS)
Wichmann, A.; Agoub, A.; Kada, M.
2018-05-01
Machine learning methods have gained in importance through the latest developments in artificial intelligence and computer hardware. In particular, approaches based on deep learning have shown that they are able to provide state-of-the-art results for various tasks. However, the direct application of deep learning methods to improve the results of 3D building reconstruction is often not possible due, for example, to the lack of suitable training data. To address this issue, we present RoofN3D, which provides a new 3D point cloud training dataset that can be used to train machine learning models for different tasks in the context of 3D building reconstruction. It can be used, among other things, to train semantic segmentation networks or to learn the structure of buildings and geometric model construction. Further details about RoofN3D and the developed data preparation framework, which enables the automatic derivation of training data, are described in this paper. Furthermore, we provide an overview of other available 3D point cloud training data and approaches from the current literature that present solutions for applying deep learning to unstructured, non-gridded 3D point cloud data.
Modeling the Car Crash Crisis Management System Using HiLA
NASA Astrophysics Data System (ADS)
Hölzl, Matthias; Knapp, Alexander; Zhang, Gefei
An aspect-oriented modeling approach to the Car Crash Crisis Management System (CCCMS) using the High-Level Aspect (HiLA) language is described. HiLA is a language for expressing aspects for UML static structures and UML state machines. In particular, HiLA supports both a static graph transformational and a dynamic approach of applying aspects. Furthermore, it facilitates methodologically turning use case descriptions into state machines: for each main success scenario, a base state machine is developed; all extensions to this main success scenario are covered by aspects. Overall, the static structure of the CCCMS is modeled in 43 classes, the main success scenarios in 13 base machines, the use case extensions in 47 static and 31 dynamic aspects, most of which are instantiations of simple aspect templates.
Augmentation of machine structure to improve its diagnosability
NASA Technical Reports Server (NTRS)
Hsieh, L.
1973-01-01
Two methods of augmenting the structure of a sequential machine so that it is diagnosable are presented. The checkable (checking sequences) and repeated symbol distinguishing sequences (RDS) are discussed. It was found that as few as twice the number of outputs of the given machine is sufficient for constructing a state-output augmentation with RDS. Techniques for minimizing the number of states in resolving convergences and in resolving equivalent and nonreduced cycles are developed.
A Framework for Classifying Online Mental Health-Related Communities With an Interest in Depression.
Saha, Budhaditya; Nguyen, Thin; Phung, Dinh; Venkatesh, Svetha
2016-07-01
Mental illness has a deep impact on individuals, families, and, by extension, society as a whole. Social networks allow individuals with mental disorders to communicate with other sufferers via online communities, providing an invaluable resource for studies on textual signs of psychological health problems. Mental disorders often occur in combinations, e.g., a patient with an anxiety disorder may also develop depression. This co-occurrence of mental health conditions provides the focus for our work on classifying online communities with an interest in depression. For this, we have crawled a large body of 620,000 posts made by 80,000 users in 247 online communities. We have extracted the topics and psycholinguistic features expressed in the posts, using these as inputs to our model. Following a machine learning technique, we have formulated a joint modeling framework in order to classify mental health-related co-occurring online communities from these features. Finally, we performed empirical validation of the model on the crawled dataset, where our model outperforms recent state-of-the-art baselines.
REEF: Retainable Evaluator Execution Framework
Weimer, Markus; Chen, Yingda; Chun, Byung-Gon; Condie, Tyson; Curino, Carlo; Douglas, Chris; Lee, Yunseong; Majestro, Tony; Malkhi, Dahlia; Matusevych, Sergiy; Myers, Brandon; Narayanamurthy, Shravan; Ramakrishnan, Raghu; Rao, Sriram; Sears, Russell; Sezgin, Beysim; Wang, Julia
2015-01-01
Resource Managers like Apache YARN have emerged as a critical layer in the cloud computing system stack, but the developer abstractions for leasing cluster resources and instantiating application logic are very low-level. This flexibility comes at a high cost in terms of developer effort, as each application must repeatedly tackle the same challenges (e.g., fault-tolerance, task scheduling and coordination) and re-implement common mechanisms (e.g., caching, bulk-data transfers). This paper presents REEF, a development framework that provides a control-plane for scheduling and coordinating task-level (data-plane) work on cluster resources obtained from a Resource Manager. REEF provides mechanisms that facilitate resource re-use for data caching, and state management abstractions that greatly ease the development of elastic data processing work-flows on cloud platforms that support a Resource Manager service. REEF is being used to develop several commercial offerings such as the Azure Stream Analytics service. Furthermore, we demonstrate REEF development of a distributed shell application, a machine learning algorithm, and a port of the CORFU [4] system. REEF is also currently an Apache Incubator project that has attracted contributors from several institutions. PMID:26819493
Robust feature extraction for rapid classification of damage in composites
NASA Astrophysics Data System (ADS)
Coelho, Clyde K.; Reynolds, Whitney; Chattopadhyay, Aditi
2009-03-01
The ability to detect anomalies in signals from sensors is imperative for structural health monitoring (SHM) applications. Many of the candidate algorithms for these applications either require many training examples or are computationally inefficient for large sample sizes. The damage detection framework presented in this paper uses a combination of Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM) to obtain a computationally efficient classification scheme for rapid damage-state determination. LDA was used for feature extraction from the damage signals of piezoelectric sensors on a composite plate, and these features were used to train the SVM algorithm in parts, reducing the computational intensity associated with the quadratic optimization problem that must be solved during training. SVM classifiers were organized into a binary tree structure to speed up classification, which also reduces the total training time required. This framework was validated on composite plates that were impacted at various locations. The results show that the algorithm was able to correctly predict the different impact damage cases in composite laminates using less than 21 percent of the total available training data after data reduction.
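The LDA-then-SVM shape of the framework can be sketched with scikit-learn; the digits dataset below is a stand-in for the piezoelectric sensor features, and the paper's binary-tree classifier arrangement is not reproduced.

```python
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# LDA compresses the raw signals into class-discriminative features,
# and an SVM then labels the damage state.
X, y = load_digits(return_X_y=True)        # stand-in for sensor features
pipe = make_pipeline(LinearDiscriminantAnalysis(n_components=5), SVC())
pipe.fit(X[:1200], y[:1200])
print("held-out accuracy:", pipe.score(X[1200:], y[1200:]))
```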
Visualization techniques for computer network defense
NASA Astrophysics Data System (ADS)
Beaver, Justin M.; Steed, Chad A.; Patton, Robert M.; Cui, Xiaohui; Schultz, Matthew
2011-06-01
Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.
Managing virtual machines with Vac and Vcycle
NASA Astrophysics Data System (ADS)
McNab, A.; Love, P.; MacMahon, E.
2015-12-01
We compare the Vac and Vcycle virtual machine lifecycle managers and our experiences in providing production job execution services for ATLAS, CMS, LHCb, and the GridPP VO at sites in the UK and France and at CERN. In both the Vac and Vcycle systems, the virtual machines are created outside of the experiment's job submission and pilot framework. In the case of Vac, a daemon runs on each physical host which manages a pool of virtual machines on that host, and a peer-to-peer UDP protocol is used to achieve the desired target shares between experiments across the site. In the case of Vcycle, a daemon manages a pool of virtual machines on an Infrastructure-as-a-Service cloud system such as OpenStack, and has within itself enough information to create the types of virtual machines needed to achieve the desired target shares. Both systems allow unused shares of one experiment to be temporarily taken up by other experiments with work to be done. The virtual machine lifecycle is managed with a minimum of information, gathered from the virtual machine creation mechanism (such as libvirt or OpenStack) and using the proposed Machine/Job Features API from WLCG. We demonstrate that the same virtual machine designs can be used to run production jobs on Vac and Vcycle/OpenStack sites for ATLAS, CMS, LHCb, and GridPP, and that these technologies allow sites to be operated in a reliable and robust way.
Calculating utilization rates for rubber tired grapple skidders in the Southern United States
Jason D. Thompson
2001-01-01
Utilization rate is an important factor in calculating machine rates for forest harvesting machines. Machine rates allow an evaluation of harvesting system costs and facilitate comparisons between different systems and machines. There are many factors that affect utilization rate. These include mechanical delays, non-mechanical delays, operational lost time, and...
Modelling machine ensembles with discrete event dynamical system theory
NASA Technical Reports Server (NTRS)
Hunter, Dan
1990-01-01
Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for the future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks under a given set of constraints, such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. A local model, from the perspective of DEDS theory, is described by the following: a set of system and transition states, an event alphabet that portrays actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or a feedback DEDS controller (closed-loop control).
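The local-model tuple enumerated above can be written out as a small structure; this is a hedged illustration of the definition, not the paper's notation.

```python
from dataclasses import dataclass

@dataclass
class LocalModel:
    """One submachine: states, event alphabet, initial state, a partial
    transition function, and the time each event takes."""
    states: set
    events: set
    initial: str
    delta: dict           # (state, event) -> next state (partial function)
    duration: dict        # event -> time required

    def step(self, state, event):
        # Returns (next_state, time); next_state is None if undefined.
        return self.delta.get((state, event)), self.duration.get(event)

robot = LocalModel(
    states={"idle", "moving"}, events={"go", "stop"}, initial="idle",
    delta={("idle", "go"): "moving", ("moving", "stop"): "idle"},
    duration={"go": 4.0, "stop": 1.0},
)
print(robot.step("idle", "go"))    # ('moving', 4.0)
```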
NASA Astrophysics Data System (ADS)
Kislyakov, M. A.; Chernov, V. A.; Maksimkin, V. L.; Bozhin, Yu. M.
2017-12-01
The article deals with modern methods of monitoring the state and predicting the service life of electric machines. In 50% of cases, failure of electric machines is associated with insulation damage. Methods based on investigating the polarization processes occurring in insulating materials are proposed as promising nondestructive monitoring techniques. To improve the accuracy of determining the state of the insulation, a multiparametric approach is considered, which serves as a basis for developing an expert system for estimating the state of health.
Control of discrete event systems modeled as hierarchical state machines
NASA Technical Reports Server (NTRS)
Brave, Y.; Heymann, M.
1991-01-01
The authors examine a class of discrete event systems (DESs) modeled as asynchronous hierarchical state machines (AHSMs). For this class of DESs, they provide an efficient method for testing reachability, which is an essential step in many control synthesis procedures. This method utilizes the asynchronous nature and hierarchical structure of AHSMs, thereby illustrating the advantage of the AHSM representation as compared with its equivalent (flat) state machine representation. An application of the method is presented where an online minimally restrictive solution is proposed for the problem of maintaining a controlled AHSM within prescribed legal bounds.
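For contrast with the paper's hierarchical method, reachability on the equivalent flat state machine is just a graph search; the sketch below shows that baseline, which is exactly what the AHSM-based test is designed to improve upon.

```python
from collections import deque

# Breadth-first search over a flat transition relation: can `target` be
# reached from `start`? The paper's contribution is answering this more
# efficiently on the hierarchical AHSM structure instead.
def reachable(transitions, start, target):
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        if s == target:
            return True
        for nxt in transitions.get(s, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

flat = {"s0": ["s1"], "s1": ["s2", "s0"], "s2": []}
print(reachable(flat, "s0", "s2"))   # True
```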
Programmable Pulse-Position-Modulation Encoder
NASA Technical Reports Server (NTRS)
Zhu, David; Farr, William
2006-01-01
A programmable pulse-position-modulation (PPM) encoder has been designed for use in testing an optical communication link. The encoder includes a programmable state machine and an electronic code book that can be updated to accommodate different PPM coding schemes. The encoder includes a field-programmable gate array (FPGA) that is programmed to step through the stored state machine and code book and that drives a custom high-speed serializer circuit board that is capable of generating subnanosecond pulses. The stored state machine and code book can be updated by means of a simple text interface through the serial port of a personal computer.
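The core of PPM encoding is mapping each k-bit symbol to one pulse slot out of 2^k; the sketch below shows that mapping only, with made-up slot counts rather than the encoder's actual code book or timing.

```python
# Each k-bit symbol selects one pulse slot out of 2**k, so a bit stream
# becomes a sparse pulse train with the pulse position carrying the data.
def ppm_encode(bits, k=4):
    slots_per_symbol = 2 ** k
    frame = []
    for i in range(0, len(bits) - len(bits) % k, k):
        symbol = int("".join(map(str, bits[i:i + k])), 2)
        slot = [0] * slots_per_symbol
        slot[symbol] = 1                 # pulse position encodes the symbol
        frame.extend(slot)
    return frame

pulses = ppm_encode([1, 0, 1, 1, 0, 0, 1, 0], k=4)
print(pulses.index(1), pulses[16:].index(1))   # pulses in slots 11 and 2
```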
Luo, Gang
2017-12-01
For user-friendliness, many software systems offer progress indicators for long-duration tasks. A typical progress indicator continuously estimates the remaining task execution time as well as the portion of the task that has been finished. Building a machine learning model often takes a long time, but no existing machine learning software supplies a non-trivial progress indicator. Similarly, running a data mining algorithm often takes a long time, but no existing data mining software provides a nontrivial progress indicator. In this article, we consider the problem of offering progress indicators for machine learning model building and data mining algorithm execution. We discuss the goals and challenges intrinsic to this problem. Then we describe an initial framework for implementing such progress indicators and two advanced, potential uses of them, with the goal of inspiring future research on this topic.
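The simplest instance of such a progress indicator extrapolates remaining time from the fraction of work units completed; the sketch below shows that baseline, whereas cost estimation for model building in the article's setting is far more involved.

```python
import time

# Estimate finished fraction and remaining time from completed work units,
# assuming roughly uniform cost per unit.
def progress(done_units, total_units, started_at):
    frac = done_units / total_units
    elapsed = time.monotonic() - started_at
    remaining = elapsed * (1 - frac) / frac if frac > 0 else float("inf")
    return frac, remaining

t0 = time.monotonic()
for done in range(1, 6):                  # pretend to process 5 chunks
    time.sleep(0.1)
    frac, eta = progress(done, 5, t0)
    print(f"{frac:.0%} done, ~{eta:.2f}s left")
```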
The HEPiX Virtualisation Working Group: Towards a Grid of Clouds
NASA Astrophysics Data System (ADS)
Cass, Tony
2012-12-01
The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.
Framework for Building Collaborative Research Environment
Devarakonda, Ranjeet; Palanisamy, Giriprakash; San Gil, Inigo
2014-10-25
A wide range of expertise and technologies is the key to solving some global problems. Semantic web technology can revolutionize the nature of how scientific knowledge is produced and shared. The semantic web is all about enabling machine-machine readability instead of routine human-human interaction. Carefully structured, machine-readable data is the key to enabling these interactions. Drupal is an example of one such toolset that can render all the functionalities of Semantic Web technology right out of the box. Drupal's content management system automatically stores the data in a structured format, enabling it to be machine readable. Within this paper, we will discuss how Drupal promotes collaboration in research settings such as Oak Ridge National Laboratory (ORNL) and the Long Term Ecological Research Center (LTER) and how it effectively uses the Semantic Web to achieve this.
Luo, Gang
2017-01-01
For user-friendliness, many software systems offer progress indicators for long-duration tasks. A typical progress indicator continuously estimates the remaining task execution time as well as the portion of the task that has been finished. Building a machine learning model often takes a long time, but no existing machine learning software supplies a non-trivial progress indicator. Similarly, running a data mining algorithm often takes a long time, but no existing data mining software provides a nontrivial progress indicator. In this article, we consider the problem of offering progress indicators for machine learning model building and data mining algorithm execution. We discuss the goals and challenges intrinsic to this problem. Then we describe an initial framework for implementing such progress indicators and two advanced, potential uses of them, with the goal of inspiring future research on this topic. PMID:29177022
A Machine Learning Framework to Forecast Wave Conditions
NASA Astrophysics Data System (ADS)
Zhang, Y.; James, S. C.; O'Donncha, F.
2017-12-01
Recently, significant effort has been undertaken to quantify and extract wave energy because it is renewable, environmentally friendly, abundant, and often close to population centers. However, a major challenge is the ability to accurately and quickly predict energy production, especially across a 48-hour cycle. Accurate forecasting of wave conditions is a challenging undertaking that typically involves solving the spectral action-balance equation on a discretized grid with high spatial resolution. The nature of the computations typically demands high-performance computing infrastructure. Using a case-study site at Monterey Bay, California, a machine learning framework was trained to replicate numerically simulated wave conditions at a fraction of the typical computational cost. Specifically, the physics-based Simulating WAves Nearshore (SWAN) model, driven by measured wave conditions, nowcast ocean currents, and wind data, was used to generate training data for machine learning algorithms. The model was run between April 1st, 2013 and May 31st, 2017, generating forecasts at three-hour intervals and yielding 11,078 distinct model outputs. SWAN-generated fields of 3,104 wave heights and a characteristic period could be replicated through simple matrix multiplications using the mapping matrices from machine learning algorithms. In fact, wave-height RMSEs from the machine learning algorithms (9 cm) were less than those for the SWAN model-verification exercise, where those simulations were compared to buoy wave data within the model domain (>40 cm). The validated machine learning approach, which acts as an accurate surrogate for the SWAN model, can now be used to perform real-time forecasts of wave conditions for the next 48 hours using available forecasted boundary wave conditions, ocean currents, and winds. This solution has obvious applications to wave-energy generation, as accurate wave conditions can be forecasted with over a three-order-of-magnitude reduction in computational expense. The low computational cost (and by association low computer-power requirement) means that the machine learning algorithms could be installed on a wave-energy converter as a form of "edge computing", where a device could forecast its own 48-hour energy production.
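The abstract's statement that forecasts reduce to "simple matrix multiplications using the mapping matrices" can be illustrated with a least-squares linear surrogate; the shapes below are scaled-down stand-ins for the 11,078 model runs and 3,104 output locations.

```python
import numpy as np

# Fit a mapping matrix from model inputs (boundary waves, currents, winds)
# to the simulated wave-height field; a forecast is then one mat-mul.
rng = np.random.default_rng(0)
inputs = rng.random((2000, 12))           # stand-in forcing data
waves = inputs @ rng.random((12, 500))    # stand-in for SWAN outputs

M, *_ = np.linalg.lstsq(inputs, waves, rcond=None)   # mapping matrix
forecast = inputs[:1] @ M                 # one forecast: a single mat-mul
print(forecast.shape)                     # (1, 500)
```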
High speed operation of permanent magnet machines
NASA Astrophysics Data System (ADS)
El-Refaie, Ayman M.
This work proposes methods to extend the high-speed operating capabilities of both interior PM (IPM) and surface PM (SPM) machines. For interior PM machines, this research has developed and presented the first thorough analysis of how a new bi-state magnetic material can be usefully applied to the design of IPM machines. Key elements of this contribution include identifying how the unique properties of the bi-state magnetic material can be applied most effectively in the rotor design of an IPM machine by "unmagnetizing" the magnet cavity center posts rather than the outer bridges. The importance of elevated rotor speed in making the best use of the bi-state magnetic material while recognizing its limitations has been identified. For surface PM machines, this research has provided, for the first time, a clear explanation of how fractional-slot concentrated windings can be applied to SPM machines in order to achieve the necessary conditions for optimal flux weakening. A closed-form analytical procedure for analyzing SPM machines designed with concentrated windings has been developed. Guidelines for designing SPM machines using concentrated windings in order to achieve optimum flux weakening are provided. Analytical and numerical finite element analysis (FEA) results have provided promising evidence of the scalability of the concentrated winding technique with respect to the number of poles, machine aspect ratio, and output power rating. Useful comparisons between the predicted performance characteristics of SPM machines equipped with concentrated windings and both SPM and IPM machines designed with distributed windings are included. Analytical techniques have been used to evaluate the impact of the high pole number on various converter performance metrics. Both analytical techniques and FEA have been used for evaluating the eddy-current losses in the surface magnets due to the stator winding subharmonics. Techniques for reducing these losses have been investigated. A 6 kW, 36-slot/30-pole prototype SPM machine has been designed and built. Experimental measurements have been used to verify the analytical and FEA results. These test results have demonstrated that a wide constant-power speed range can be achieved. Other important machine features such as the near-sinusoidal back-emf, high efficiency, and low cogging torque have also been demonstrated.
Fernandez-Lozano, Carlos; Gestal, Marcos; Munteanu, Cristian R; Dorado, Julian; Pazos, Alejandro
2016-01-01
The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence and especially on a correct comparison between the different results provided for different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is of relevance to use a statistical approach to indicate whether the differences are statistically significant using this kind of algorithms. Furthermore, our results with three real complex datasets report different best models than with the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as from other fields, such as for bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.
Gestal, Marcos; Munteanu, Cristian R.; Dorado, Julian; Pazos, Alejandro
2016-01-01
The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence and especially on a correct comparison between the different results provided for different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is of relevance to use a statistical approach to indicate whether the differences are statistically significant using this kind of algorithms. Furthermore, our results with three real complex datasets report different best models than with the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as from other fields, such as for bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable. PMID:27920952
State-of-the-art Anonymization of Medical Records Using an Iterative Machine Learning Framework
Szarvas, György; Farkas, Richárd; Busa-Fekete, Róbert
2007-01-01
Objective The anonymization of medical records is of great importance in the human life sciences because a de-identified text can be made publicly available for non-hospital researchers as well, to facilitate research on human diseases. Here the authors have developed a de-identification model that can successfully remove personal health information (PHI) from discharge records to make them conform to the guidelines of the Health Information Portability and Accountability Act. Design We introduce here a novel, machine learning-based iterative Named Entity Recognition approach intended for use on semi-structured documents like discharge records. Our method identifies PHI in several steps. First, it labels all entities whose tags can be inferred from the structure of the text and it then utilizes this information to find further PHI phrases in the flow text parts of the document. Measurements Following the standard evaluation method of the first Workshop on Challenges in Natural Language Processing for Clinical Data, we used token-level Precision, Recall and Fβ=1 measure metrics for evaluation. Results Our system achieved outstanding accuracy on the standard evaluation dataset of the de-identification challenge, with an F measure of 99.7534% for the best submitted model. Conclusion We can say that our system is competitive with the current state-of-the-art solutions, while we describe here several techniques that can be beneficial in other tasks that need to handle structured documents such as clinical records. PMID:17823086
Bridge Health Monitoring Using a Machine Learning Strategy
DOT National Transportation Integrated Search
2017-01-01
The goal of this project was to cast the SHM problem within a statistical pattern recognition framework. Techniques borrowed from speaker recognition, particularly speaker verification, were used as this discipline deals with problems very similar to...
Polyphony: A Workflow Orchestration Framework for Cloud Computing
NASA Technical Reports Server (NTRS)
Shams, Khawaja S.; Powell, Mark W.; Crockett, Tom M.; Norris, Jeffrey S.; Rossi, Ryan; Soderstrom, Tom
2010-01-01
Cloud Computing has delivered unprecedented compute capacity to NASA missions at affordable rates. Missions like the Mars Exploration Rovers (MER) and Mars Science Lab (MSL) are enjoying the elasticity that enables them to leverage hundreds, if not thousands, of machines for short durations without making any hardware procurements. In this paper, we describe Polyphony, a resilient, scalable, and modular framework that efficiently leverages a large set of computing resources to perform parallel computations. Polyphony can employ resources on the cloud, excess capacity on local machines, as well as spare resources on the supercomputing center, and it enables these resources to work in concert to accomplish a common goal. Polyphony is resilient to node failures, even if they occur in the middle of a transaction. We conclude with an evaluation of a production-ready application built on top of Polyphony to perform image-processing operations on images from around the solar system, including Mars, Saturn, and Titan.
Kostanyan, Artak E; Shishilov, Oleg N
2018-06-01
Multiple dual mode counter-current chromatography (MDM CCC) separation processes with semi-continuous large sample loading consist of a succession of two counter-current steps: with "x" phase (first step) and "y" phase (second step) flow periods. A feed mixture dissolved in the "x" phase is continuously loaded into a CCC machine at the beginning of the first step of each cycle over a constant time with the volumetric rate equal to the flow rate of the pure "x" phase. An easy-to-use calculating machine is developed to simulate the chromatograms and the amounts of solutes eluted with the phases at each cycle for steady-state (the duration of the flow periods of the phases is kept constant for all the cycles) and non-steady-state (with variable duration of alternating phase elution steps) separations. Using the calculating machine, the separation of mixtures containing up to five components can be simulated and designed. Examples of the application of the calculating machine for the simulation of MDM CCC processes are discussed. Copyright © 2018 Elsevier B.V. All rights reserved.
Method of Individual Forecasting of Technical State of Logging Machines
NASA Astrophysics Data System (ADS)
Kozlov, V. G.; Gulevsky, V. A.; Skrypnikov, A. V.; Logoyda, V. S.; Menzhulova, A. S.
2018-03-01
Developing a model that evaluates the possibility of failure requires knowledge of the regularities with which the technical-condition parameters of machines in use change. Studying these regularities requires stochastic models that take into account the physical nature of the destruction processes affecting the machines' structural elements, the technology of their production, their degradation, the stochastic properties of the technical-state parameters, and the conditions and modes of operation.
Ahmed, Yassmin Seid; Fox-Rabinovich, German; Paiva, Jose Mario; Wagg, Terry; Veldhuis, Stephen Clarence
2017-10-25
During machining of stainless steels at low cutting speeds, workpiece material tends to adhere to the cutting tool at the tool-chip interface, forming a built-up edge (BUE). BUE is of great importance in machining processes; it can significantly modify the phenomena in the cutting zone, directly affecting the workpiece surface integrity, cutting tool forces, and chip formation. The American Iron and Steel Institute (AISI) 304 stainless steel has a high tendency to form an unstable BUE, leading to deterioration of the surface quality. Therefore, it is necessary to understand the nature of the surface integrity induced during machining operations. Although many reports have been published on the effect of tool wear on surface integrity during machining of AISI 304 stainless steel, the influence of the BUE phenomenon in the stable state of wear has not been investigated so far. The main goal of the present work is to investigate the close link between BUE formation, surface integrity and cutting forces in the stable state of wear for an uncoated cutting tool during cutting tests of AISI 304 stainless steel. The cutting parameters were chosen to induce BUE formation during machining. The X-ray diffraction (XRD) method was used for measuring superficial residual stresses of the machined surface through the stable state of wear in the cutting and feed directions. In addition, surface roughness of the machined surface was investigated using the Alicona microscope, and Scanning Electron Microscopy (SEM) was used to reveal the surface distortions created during the cutting process, combined with chip undersurface analyses. The investigated BUE formation during the stable state of wear showed that the BUE can cause a significant improvement in the surface integrity and cutting forces. Moreover, it can be used to compensate for tool wear through changing the tool geometry, leading to the protection of the cutting tool from wear.
Fox-Rabinovich, German; Wagg, Terry
2017-01-01
During machining of stainless steels at low cutting speeds, workpiece material tends to adhere to the cutting tool at the tool–chip interface, forming a built-up edge (BUE). BUE is of great importance in machining processes; it can significantly modify the phenomena in the cutting zone, directly affecting the workpiece surface integrity, cutting tool forces, and chip formation. The American Iron and Steel Institute (AISI) 304 stainless steel has a high tendency to form an unstable BUE, leading to deterioration of the surface quality. Therefore, it is necessary to understand the nature of the surface integrity induced during machining operations. Although many reports have been published on the effect of tool wear on surface integrity during machining of AISI 304 stainless steel, the influence of the BUE phenomenon in the stable state of wear has not been investigated so far. The main goal of the present work is to investigate the close link between BUE formation, surface integrity and cutting forces in the stable state of wear for an uncoated cutting tool during cutting tests of AISI 304 stainless steel. The cutting parameters were chosen to induce BUE formation during machining. The X-ray diffraction (XRD) method was used for measuring superficial residual stresses of the machined surface through the stable state of wear in the cutting and feed directions. In addition, surface roughness of the machined surface was investigated using the Alicona microscope, and Scanning Electron Microscopy (SEM) was used to reveal the surface distortions created during the cutting process, combined with chip undersurface analyses. The investigated BUE formation during the stable state of wear showed that the BUE can cause a significant improvement in the surface integrity and cutting forces. Moreover, it can be used to compensate for tool wear through changing the tool geometry, leading to the protection of the cutting tool from wear. PMID:29068405
Modeling repetitive motions using structured light.
Xu, Yi; Aliaga, Daniel G
2010-01-01
Obtaining models of dynamic 3D objects is an important part of content generation for computer graphics. Numerous methods have been extended from static scenarios to model dynamic scenes. If the states or poses of the dynamic object repeat often during a sequence (but not necessarily periodically), we call this a repetitive motion. There are many objects, such as toys, machines, and humans, undergoing repetitive motions. Our key observation is that when a motion state repeats, we can sample the scene under the same motion state again but using a different set of parameters, thus providing more information about each motion state. This enables robust acquisition of dense 3D information that is otherwise difficult to obtain for objects with repetitive motions, using only simple hardware. After the motion sequence, we group temporally disjoint observations of the same motion state together and produce a smooth space-time reconstruction of the scene. Effectively, the dynamic scene modeling problem is converted to a series of static scene reconstructions, which are easier to tackle. The varying sampling parameters can be, for example, structured-light patterns, illumination directions, and viewpoints, resulting in different modeling techniques. Based on this observation, we present an image-based motion-state framework and demonstrate our paradigm using either a synchronized or an unsynchronized structured-light acquisition method.
High-throughput state-machine replication using software transactional memory.
Zhao, Wenbing; Yang, William; Zhang, Honglei; Yang, Jack; Luo, Xiong; Zhu, Yueqin; Yang, Mary; Luo, Chaomin
2016-11-01
State-machine replication is a common way of constructing general purpose fault tolerance systems. To ensure replica consistency, requests must be executed sequentially according to some total order at all non-faulty replicas. Unfortunately, this could severely limit the system throughput. This issue has been partially addressed by identifying non-conflicting requests based on application semantics and executing these requests concurrently. However, identifying and tracking non-conflicting requests require intimate knowledge of application design and implementation, and a custom fault tolerance solution developed for one application cannot be easily adopted by other applications. Software transactional memory offers a new way of constructing concurrent programs. In this article, we present the mechanisms needed to retrofit existing concurrency control algorithms designed for software transactional memory for state-machine replication. The main benefit of using software transactional memory in state-machine replication is that general purpose concurrency control mechanisms can be designed without deep knowledge of application semantics. As such, new fault tolerance systems based on state-machine replication with excellent throughput can be easily designed and maintained. In this article, we introduce three different concurrency control mechanisms for state-machine replication using software transactional memory, namely, ordered strong strict two-phase locking, conventional timestamp-based multiversion concurrency control, and speculative timestamp-based multiversion concurrency control. Our experiments show that the speculative timestamp-based multiversion concurrency control mechanism has the best performance in all types of workload, while the conventional timestamp-based multiversion concurrency control offers the worst performance due to a high abort rate in the presence of even moderate contention between transactions. The ordered strong strict two-phase locking mechanism offers the simplest solution, with excellent performance in low-contention workloads and fairly good performance in high-contention workloads.
High-throughput state-machine replication using software transactional memory
Yang, William; Zhang, Honglei; Yang, Jack; Luo, Xiong; Zhu, Yueqin; Yang, Mary; Luo, Chaomin
2017-01-01
State-machine replication is a common way of constructing general purpose fault tolerance systems. To ensure replica consistency, requests must be executed sequentially according to some total order at all non-faulty replicas. Unfortunately, this could severely limit the system throughput. This issue has been partially addressed by identifying non-conflicting requests based on application semantics and executing these requests concurrently. However, identifying and tracking non-conflicting requests require intimate knowledge of application design and implementation, and a custom fault tolerance solution developed for one application cannot be easily adopted by other applications. Software transactional memory offers a new way of constructing concurrent programs. In this article, we present the mechanisms needed to retrofit existing concurrency control algorithms designed for software transactional memory for state-machine replication. The main benefit of using software transactional memory in state-machine replication is that general purpose concurrency control mechanisms can be designed without deep knowledge of application semantics. As such, new fault tolerance systems based on state-machine replication with excellent throughput can be easily designed and maintained. In this article, we introduce three different concurrency control mechanisms for state-machine replication using software transactional memory, namely, ordered strong strict two-phase locking, conventional timestamp-based multiversion concurrency control, and speculative timestamp-based multiversion concurrency control. Our experiments show that the speculative timestamp-based multiversion concurrency control mechanism has the best performance in all types of workload, while the conventional timestamp-based multiversion concurrency control offers the worst performance due to a high abort rate in the presence of even moderate contention between transactions. The ordered strong strict two-phase locking mechanism offers the simplest solution, with excellent performance in low-contention workloads and fairly good performance in high-contention workloads. PMID:29075049
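The ordering requirement is easy to illustrate. The toy Python sketch below (an illustration of the general principle, not the article's STM mechanisms) lets transactions execute concurrently but forces commits to apply in an agreed total order, so the replica state stays deterministic.

import threading

state = {"balance": 0}
commit_lock = threading.Condition()
next_to_commit = 0

def execute(order, delta):
    global next_to_commit
    result = delta  # concurrent, speculative "execution" phase
    with commit_lock:
        while next_to_commit != order:   # commits serialized by total order
            commit_lock.wait()
        state["balance"] += result       # commit phase: applied sequentially
        next_to_commit += 1
        commit_lock.notify_all()

threads = [threading.Thread(target=execute, args=(i, 10)) for i in range(5)]
for t in reversed(threads):  # start deliberately out of order
    t.start()
for t in threads:
    t.join()
print(state)  # deterministic result: {'balance': 50}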
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-18
... Halogenated Solvent Cleaning Machines: State of Rhode Island Department of Environmental Management AGENCY... machines in Rhode Island, except for continuous web cleaning machines. This approval would grant RI DEM the... Halogenated Solvent NESHAP for organic solvent cleaning machines and would make the Rhode Island Department of...
Technology of machine tools. Volume 4. Machine tool controls
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1980-10-01
The Machine Tool Task Force (MTTF) was formed to characterize the state of the art of machine tool technology and to identify promising future directions of this technology. This volume is one of a five-volume series that presents the MTTF findings; reports on various areas of the technology were contributed by experts in those areas.
Technology of machine tools. Volume 3. Machine tool mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tlusty, J.
1980-10-01
The Machine Tool Task Force (MTTF) was formed to characterize the state of the art of machine tool technology and to identify promising future directions of this technology. This volume is one of a five-volume series that presents the MTTF findings; reports on various areas of the technology were contributed by experts in those areas.
Technology of machine tools. Volume 5. Machine tool accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hocken, R.J.
1980-10-01
The Machine Tool Task Force (MTTF) was formed to characterize the state of the art of machine tool technology and to identify promising future directions of this technology. This volume is one of a five-volume series that presents the MTTF findings; reports on various areas of the technology were contributed by experts in those areas.
Cybersemiotics: a transdisciplinary framework for information studies.
Brier, S
1998-04-01
This paper summarizes recent attempts by this author to create a transdisciplinary, non-Cartesian and non-reductionistic framework for information studies in natural, social, and technological systems. To confront, in a scientific way, the problems of modern information technology, where phenomenological man is dealing with socially constructed texts in algorithmically based digital bit-machines, we need a theoretical framework spanning from physics over biology and technological design to the phenomenological and social production of signification and meaning. I am working with such pragmatic theories as second-order cybernetics (coupled with autopoiesis theory), Lakoff's biologically oriented cognitive semantics, Peirce's triadic semiotics, and Wittgenstein's pragmatic language game theory. A coherent synthesis of these theories is what the cybersemiotic framework attempts to accomplish.
Machine learning phases of matter
NASA Astrophysics Data System (ADS)
Carrasquilla, Juan; Melko, Roger G.
2017-02-01
Condensed-matter physics is the study of the collective behaviour of infinitely complex assemblies of electrons, nuclei, magnetic moments, atoms or qubits. This complexity is reflected in the size of the state space, which grows exponentially with the number of particles, reminiscent of the `curse of dimensionality' commonly encountered in machine learning. Despite this curse, the machine learning community has developed techniques with remarkable abilities to recognize, classify, and characterize complex sets of data. Here, we show that modern machine learning architectures, such as fully connected and convolutional neural networks, can identify phases and phase transitions in a variety of condensed-matter Hamiltonians. Readily programmable through modern software libraries, neural networks can be trained to detect multiple types of order parameter, as well as highly non-trivial states with no conventional order, directly from raw state configurations sampled with Monte Carlo.
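As a concrete illustration of the recipe, the following sketch trains a small fully connected network to separate ordered from disordered spin configurations. Biased random spins stand in for Monte Carlo samples of an Ising Hamiltonian, so only the workflow, not the physics, is faithful here.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
L = 8  # 8x8 lattice, flattened into 64 "raw configuration" inputs

def sample(n, p_up):
    return rng.choice([-1, 1], size=(n, L * L), p=[1 - p_up, p_up])

ordered = np.vstack([sample(500, 0.95), sample(500, 0.05)])  # proxy for T << Tc
disordered = sample(1000, 0.5)                                # proxy for T >> Tc
X = np.vstack([ordered, disordered])
y = np.array([1] * len(ordered) + [0] * len(disordered))

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))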
Astumian, R. Dean
2015-01-01
A simple model for a chemically driven molecular walker shows that the elastic energy stored by the molecule and released during the conformational change known as the power-stroke (i.e., the free-energy difference between the pre- and post-power-stroke states) is irrelevant for determining the directionality, stopping force, and efficiency of the motor. Further, the apportionment of the dependence on the externally applied force between the forward and reverse rate constants of the power-stroke (or indeed among all rate constants) is irrelevant for determining the directionality, stopping force, and efficiency of the motor. Arguments based on the principle of microscopic reversibility demonstrate that this result is general for all chemically driven molecular machines, and even more broadly that the relative energies of the states of the motor have no role in determining the directionality, stopping force, or optimal efficiency of the machine. Instead, the directionality, stopping force, and optimal efficiency are determined solely by the relative heights of the energy barriers between the states. Molecular recognition—the ability of a molecular machine to discriminate between substrate and product depending on the state of the machine—is far more important for determining the intrinsic directionality and thermodynamics of chemo-mechanical coupling than are the details of the internal mechanical conformational motions of the machine. In contrast to the conclusions for chemical driving, a power-stroke is very important for the directionality and efficiency of light-driven molecular machines and for molecular machines driven by external modulation of thermodynamic parameters. PMID:25606678
A Model-Free Machine Learning Method for Risk Classification and Survival Probability Prediction.
Geng, Yuan; Lu, Wenbin; Zhang, Hao Helen
2014-01-01
Risk classification and survival probability prediction are two major goals in survival data analysis, since they play an important role in patients' risk stratification, long-term diagnosis, and treatment selection. In this article, we propose a new model-free machine learning framework for risk classification and survival probability prediction based on weighted support vector machines. The new procedure does not require any specific parametric or semiparametric model assumption on the data, and is therefore capable of capturing nonlinear covariate effects. We use numerous simulation examples to demonstrate the finite sample performance of the proposed method under various settings. Applications to a glioma tumor dataset and a breast cancer gene expression survival dataset illustrate the new methodology in real data analysis.
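A minimal sketch of the weighted support vector machine at the core of such a framework, with invented sample weights standing in for the censoring-derived weights the method would compute from survival data:

import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
# e.g. up-weight fully observed subjects, down-weight censored ones
observed = np.random.default_rng(0).random(400) > 0.3
weights = np.where(observed, 1.0, 0.4)

# SVC accepts per-sample weights at fit time, giving a weighted SVM
clf = SVC(kernel="rbf", C=1.0).fit(X, y, sample_weight=weights)
print("training accuracy:", clf.score(X, y))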
NASA Astrophysics Data System (ADS)
Qiu, Mo; Yu, Simin; Wen, Yuqiong; Lü, Jinhu; He, Jianbin; Lin, Zhuosheng
In this paper, a novel design methodology and its FPGA hardware implementation for a universal chaotic signal generator are proposed via a Verilog HDL fixed-point algorithm and state machine control. According to the continuous-time or discrete-time chaotic equations, a Verilog HDL fixed-point algorithm and its corresponding digital system are first designed. In the FPGA hardware platform, each operation step of the Verilog HDL fixed-point algorithm is then controlled by a state machine. The generality of this method is that any given chaotic equation can be decomposed into four basic operation procedures, i.e. nonlinear function calculation, iterative sequence operation, right-shifting and ceiling of the iterative values, and output of the chaotic iterative sequences, each of which corresponds to a single state under state machine control. Compared with a Verilog HDL floating-point algorithm, the Verilog HDL fixed-point algorithm saves FPGA hardware resources and improves operation efficiency. FPGA-based hardware experimental results validate the feasibility and reliability of the proposed approach.
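The four-procedure decomposition can be mimicked in software. Below is a hedged Python sketch that iterates a logistic map in integer fixed-point arithmetic under a four-state control loop; the Q-format width and the choice of map are assumptions, and no Verilog/FPGA detail is reproduced.

FRAC = 16          # fractional bits of the fixed-point (Q-format) numbers
ONE = 1 << FRAC

def logistic_fixed(x, r):
    # nonlinear function r*x*(1-x), evaluated entirely in fixed point
    t = (x * (ONE - x)) >> FRAC
    return (r * t) >> FRAC

x = int(0.3 * ONE)
r = int(3.99 * ONE)
state = "NONLINEAR"
out = []
while len(out) < 5:
    if state == "NONLINEAR":      # state 1: nonlinear function calculation
        nxt = logistic_fixed(x, r)
        state = "ITERATE"
    elif state == "ITERATE":      # state 2: iterative sequence update
        x = nxt
        state = "QUANTIZE"
    elif state == "QUANTIZE":     # state 3: right-shift to trim iterate bits
        sample = x >> (FRAC - 8)  # keep the top 8 fractional bits
        state = "OUTPUT"
    else:                         # state 4: output of the chaotic sample
        out.append(sample)
        state = "NONLINEAR"
print(out)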
NASA Astrophysics Data System (ADS)
Matsunaga, Y.; Sugita, Y.
2018-06-01
A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
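The first step, constructing an MSM from trajectories, reduces to counting lag-time transitions between discretized states and row-normalizing. A minimal sketch with a toy discretized trajectory (the experimental refinement step is not shown):

import numpy as np

traj = np.array([0, 0, 1, 1, 2, 1, 0, 0, 1, 2, 2, 1, 0])  # toy state labels
n_states, lag = 3, 1

counts = np.zeros((n_states, n_states))
for a, b in zip(traj[:-lag], traj[lag:]):
    counts[a, b] += 1

T = counts / counts.sum(axis=1, keepdims=True)  # transition probability matrix
pi = np.linalg.matrix_power(T, 1000)[0]         # crude stationary distribution
print(np.round(T, 2), np.round(pi, 2), sep="\n")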
Center for Plasma Edge Simulation (CPES) -- Rutgers University Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parashar, Manish
2014-03-06
The CPES scientific simulations run at scale on leadership class machines, collaborate at runtime and produce and exchange large data sizes, which present multiple I/O and data management challenges. During the CPES project, the Rutgers team worked with the rest of the CPES team to address these challenges at different levels, and specifically (1) at the data transport and communication level through the DART (Decoupled and Asynchronous Remote Data Transfers) framework, and (2) at the data management and services level through the DataSpaces and ActiveSpaces frameworks. These frameworks and their impact are briefly described.
Implementing the Gaia Astrometric Global Iterative Solution (AGIS) in Java
NASA Astrophysics Data System (ADS)
O'Mullane, William; Lammers, Uwe; Lindegren, Lennart; Hernandez, Jose; Hobbs, David
2011-10-01
This paper provides a description of the Java software framework which has been constructed to run the Astrometric Global Iterative Solution for the Gaia mission. This is the mathematical framework to provide the rigid reference frame for Gaia observations from the Gaia data itself. This process makes Gaia a self-calibrated, input-catalogue-independent mission. The framework is highly distributed, typically running on a cluster of machines with a database back end. All code is written in the Java language. We describe the overall architecture and some of the details of the implementation.
Identification of Tool Wear when Machining Austenitic Steels and Titanium by Miniature Machining
NASA Astrophysics Data System (ADS)
Pilc, Jozef; Kameník, Roman; Varga, Daniel; Martinček, Juraj; Sadilek, Marek
2016-12-01
Application of miniature machining is currently increasing rapidly, mainly in the biomedical industry and in the machining of hard-to-machine materials. The machinability of materials with an increased level of toughness depends on factors that are important to the final state of surface integrity. Because of this, it is necessary to achieve high precision (varying in microns) in miniature machining. If we want to guarantee high machining precision, it is necessary to analyse tool wear intensity in direct interaction with the given machined materials. During a long-term cutting process, various cutting wedge deformations occur, leading in most cases to rapid wear and destruction of the cutting wedge. This article deals with experimental monitoring of tool wear intensity during miniature machining.
NASA Astrophysics Data System (ADS)
Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.
2018-01-01
This paper dwells upon a variance parameterization method. Variance, or dimensional, parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated into a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the build tree. In this research the authors consider a parameterization method for machine tooling used in manufacturing parts on multiaxial CNC machining centers in a real manufacturing process. The developed method allows tooling design time to be significantly reduced when a part's geometric parameters change. The method can also reduce the time needed for design and engineering preproduction, in particular for the development of control programs for CNC equipment and control and measuring machines, and can automate the release of design and engineering documentation. Variance parameterization helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.
Distributed state machine supervision for long-baseline gravitational-wave detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rollins, Jameson Graef, E-mail: jameson.rollins@ligo.org
The Laser Interferometer Gravitational-wave Observatory (LIGO) consists of two identical yet independent, widely separated, long-baseline gravitational-wave detectors. Each Advanced LIGO detector consists of complex optical-mechanical systems isolated from the ground by multiple layers of active seismic isolation, all controlled by hundreds of fast, digital, feedback control systems. This article describes a novel state machine-based automation platform developed to handle the automation and supervisory control challenges of these detectors. The platform, called Guardian, consists of distributed, independent, state machine automaton nodes organized hierarchically for full detector control. User code is written in standard Python and the platform is designed to facilitate the fast-paced development process associated with commissioning the complicated Advanced LIGO instruments. While developed specifically for the Advanced LIGO detectors, Guardian is a generic state machine automation platform that is useful for experimental control at all levels, from simple table-top setups to large-scale multi-million dollar facilities.
Biomedical Informatics on the Cloud: A Treasure Hunt for Advancing Cardiovascular Medicine.
Ping, Peipei; Hermjakob, Henning; Polson, Jennifer S; Benos, Panagiotis V; Wang, Wei
2018-04-27
In the digital age of cardiovascular medicine, the rate of biomedical discovery can be greatly accelerated by the guidance and resources required to unearth potential collections of knowledge. A unified computational platform leverages metadata to not only provide direction but also empower researchers to mine a wealth of biomedical information and forge novel mechanistic insights. This review takes the opportunity to present an overview of the cloud-based computational environment, including the functional roles of metadata, the architecture schema of indexing and search, and the practical scenarios of machine learning-supported molecular signature extraction. By introducing several established resources and state-of-the-art workflows, we share with our readers a broadly defined informatics framework to phenotype cardiovascular health and disease. © 2018 American Heart Association, Inc.
Elastic Multi-scale Mechanisms: Computation and Biological Evolution.
Diaz Ochoa, Juan G
2018-01-01
Explanations based on low-level interacting elements are valuable and powerful since they contribute to identifying the key mechanisms of biological functions. However, many dynamic systems based on low-level interacting elements with unambiguous, finite, and complete information of initial states generate future states that cannot be predicted, implying an increase of complexity and open-ended evolution. Such systems are like Turing machines that overlap with dynamical systems that cannot halt. We argue that organisms find halting conditions by distorting these mechanisms, creating conditions for a constant creativity that drives evolution. We introduce a modulus of elasticity to measure the changes in these mechanisms in response to changes in the computed environment. We test this concept in a population of predators and predated cells with chemotactic mechanisms and demonstrate how the selection of a given mechanism depends on the entire population. We finally explore this concept in different frameworks and postulate that the identification of predictive mechanisms is only successful with a small elasticity modulus.
Game-powered machine learning.
Barrington, Luke; Turnbull, Douglas; Lanckriet, Gert
2012-01-01
Searching for relevant content in a massive amount of multimedia information is facilitated by accurately annotating each image, video, or song with a large number of relevant semantic keywords, or tags. We introduce game-powered machine learning, an integrated approach to annotating multimedia content that combines the effectiveness of human computation, through online games, with the scalability of machine learning. We investigate this framework for labeling music. First, a socially-oriented music annotation game called Herd It collects reliable music annotations based on the “wisdom of the crowds.” Second, these annotated examples are used to train a supervised machine learning system. Third, the machine learning system actively directs the annotation games to collect new data that will most benefit future model iterations. Once trained, the system can automatically annotate a corpus of music much larger than what could be labeled using human computation alone. Automatically annotated songs can be retrieved based on their semantic relevance to text-based queries (e.g., “funky jazz with saxophone,” “spooky electronica,” etc.). Based on the results presented in this paper, we find that actively coupling annotation games with machine learning provides a reliable and scalable approach to making searchable massive amounts of multimedia data. PMID:22460786
Game-powered machine learning.
Barrington, Luke; Turnbull, Douglas; Lanckriet, Gert
2012-04-24
Searching for relevant content in a massive amount of multimedia information is facilitated by accurately annotating each image, video, or song with a large number of relevant semantic keywords, or tags. We introduce game-powered machine learning, an integrated approach to annotating multimedia content that combines the effectiveness of human computation, through online games, with the scalability of machine learning. We investigate this framework for labeling music. First, a socially-oriented music annotation game called Herd It collects reliable music annotations based on the "wisdom of the crowds." Second, these annotated examples are used to train a supervised machine learning system. Third, the machine learning system actively directs the annotation games to collect new data that will most benefit future model iterations. Once trained, the system can automatically annotate a corpus of music much larger than what could be labeled using human computation alone. Automatically annotated songs can be retrieved based on their semantic relevance to text-based queries (e.g., "funky jazz with saxophone," "spooky electronica," etc.). Based on the results presented in this paper, we find that actively coupling annotation games with machine learning provides a reliable and scalable approach to making searchable massive amounts of multimedia data.
Titanium in dentistry: historical development, state of the art and future perspectives.
Jorge, Juliana Ribeiro Pala; Barão, Valentim Adelino; Delben, Juliana Aparecida; Faverani, Leonardo Perez; Queiroz, Thallita Pereira; Assunção, Wirley Gonçalves
2013-06-01
Titanium is a metallic element known for several attractive characteristics, such as biocompatibility, excellent corrosion resistance and high mechanical resistance. It is widely used in Dentistry, with high success rates, providing a favorable biological response when in contact with live tissues. Therefore, the objective of this study was to describe the different uses of titanium in Dentistry, reviewing its historical development and discussing its state of the art and the future perspectives of its utilization. A search of the MEDLINE/PubMed database was performed using the terms 'titanium', 'dentistry' and 'implants'. The titles and abstracts of articles were read, and after this first screening 20 articles were selected and their full texts were downloaded. Additional textbooks and a manual search of the reference lists within selected articles were included. The correlated literature showed that titanium is the most used metal in Implantology for manufacturing osseointegrated implants and their systems, with fully consolidated utilization. Moreover, titanium can also be employed in prosthodontics to obtain frameworks. However, problems related to its machining, casting, welding and ceramic application for dental prostheses are still limiting its use. In Endodontics, titanium has been used in association with nickel for manufacturing rotary instruments, providing a higher resistance to deformation. However, despite the different possibilities of using titanium in modern Dentistry, its use for prosthesis frameworks still needs technological improvements in order to surpass its limitations.
On the Stability of Jump-Linear Systems Driven by Finite-State Machines with Markovian Inputs
NASA Technical Reports Server (NTRS)
Patilkulkarni, Sudarshan; Herencia-Zapana, Heber; Gray, W. Steven; Gonzalez, Oscar R.
2004-01-01
This paper presents two mean-square stability tests for a jump-linear system driven by a finite-state machine with a first-order Markovian input process. The first test is based on conventional Markov jump-linear theory and avoids the use of any higher-order statistics. The second test is developed directly using the higher-order statistics of the machine's output process. The two approaches are illustrated with a simple model for a recoverable computer control system.
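For readers who want to experiment, the following sketch evaluates one standard second-moment criterion from the Markov jump-linear literature: for x(k+1) = A_{m(k)} x(k) with Markov mode process m(k), mean-square stability holds when the spectral radius of a Kronecker-lifted matrix is below one (see, e.g., Costa et al. on MJLS). The dynamics and transition matrix here are invented, and the specific test forms used in the paper may differ.

import numpy as np
from scipy.linalg import block_diag

A = [np.array([[0.9, 0.1], [0.0, 0.5]]),   # mode-0 dynamics
     np.array([[0.3, 0.0], [0.2, 1.1]])]   # mode-1 dynamics (unstable alone)
P = np.array([[0.7, 0.3],                  # Markov transition matrix
              [0.4, 0.6]])

# Lifted second-moment operator: (P^T kron I) * blockdiag(A_i kron A_i)
D = block_diag(*[np.kron(a, a) for a in A])
M = np.kron(P.T, np.eye(4)) @ D
rho = max(abs(np.linalg.eigvals(M)))
print("mean-square stable:", rho < 1, f"(spectral radius {rho:.3f})")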
A rule-based approach to model checking of UML state machines
NASA Astrophysics Data System (ADS)
Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz
2016-12-01
In the paper a new approach to formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases assurance that the implemented system meets the user-defined requirements.
Optimum and Heuristic Algorithms for Finite State Machine Decomposition and Partitioning
1989-09-01
Pranav Ashar, Srinivas Devadas, and A. Richard Newton. ...94720. Devadas: Department of Electrical Engineering and Computer Science, Room 36-848, MIT, Cambridge, MA 02139. (617) 253-0454. Copyright 1989 MIT. A finite state machine is represented by its State Transition Graph, modified from two-level Boolean minimizers...
Nanowire nanocomputer as a finite-state machine.
Yao, Jun; Yan, Hao; Das, Shamik; Klemic, James F; Ellenbogen, James C; Lieber, Charles M
2014-02-18
Implementation of complex computer circuits assembled from the bottom up and integrated on the nanometer scale has long been a goal of electronics research. It requires a design and fabrication strategy that can address individual nanometer-scale electronic devices, while enabling large-scale assembly of those devices into highly organized, integrated computational circuits. We describe how such a strategy has led to the design, construction, and demonstration of a nanoelectronic finite-state machine. The system was fabricated using a design-oriented approach enabled by a deterministic, bottom-up assembly process that does not require individual nanowire registration. This methodology allowed construction of the nanoelectronic finite-state machine through modular design using a multitile architecture. Each tile/module consists of two interconnected crossbar nanowire arrays, with each cross-point consisting of a programmable nanowire transistor node. The nanoelectronic finite-state machine integrates 180 programmable nanowire transistor nodes in three tiles or six total crossbar arrays, and incorporates both sequential and arithmetic logic, with extensive intertile and intratile communication that exhibits rigorous input/output matching. Our system realizes the complete 2-bit logic flow and clocked control over state registration that are required for a finite-state machine or computer. The programmable multitile circuit was also reprogrammed to a functionally distinct 2-bit full adder with 32-set matched and complete logic output. These steps forward and the ability of our unique design-oriented deterministic methodology to yield more extensive multitile systems suggest that proposed general-purpose nanocomputers can be realized in the near future.
Nanowire nanocomputer as a finite-state machine
Yao, Jun; Yan, Hao; Das, Shamik; Klemic, James F.; Ellenbogen, James C.; Lieber, Charles M.
2014-01-01
Implementation of complex computer circuits assembled from the bottom up and integrated on the nanometer scale has long been a goal of electronics research. It requires a design and fabrication strategy that can address individual nanometer-scale electronic devices, while enabling large-scale assembly of those devices into highly organized, integrated computational circuits. We describe how such a strategy has led to the design, construction, and demonstration of a nanoelectronic finite-state machine. The system was fabricated using a design-oriented approach enabled by a deterministic, bottom–up assembly process that does not require individual nanowire registration. This methodology allowed construction of the nanoelectronic finite-state machine through modular design using a multitile architecture. Each tile/module consists of two interconnected crossbar nanowire arrays, with each cross-point consisting of a programmable nanowire transistor node. The nanoelectronic finite-state machine integrates 180 programmable nanowire transistor nodes in three tiles or six total crossbar arrays, and incorporates both sequential and arithmetic logic, with extensive intertile and intratile communication that exhibits rigorous input/output matching. Our system realizes the complete 2-bit logic flow and clocked control over state registration that are required for a finite-state machine or computer. The programmable multitile circuit was also reprogrammed to a functionally distinct 2-bit full adder with 32-set matched and complete logic output. These steps forward and the ability of our unique design-oriented deterministic methodology to yield more extensive multitile systems suggest that proposed general-purpose nanocomputers can be realized in the near future. PMID:24469812
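A behavioral sketch of the logic functions mentioned above, written at the gate level in Python rather than at the nanowire level: a 2-bit full adder plus a clocked state register driven by it, mirroring the combination of arithmetic and sequential logic.

def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def add2(a1, a0, b1, b0, cin=0):
    s0, c = full_adder(a0, b0, cin)
    s1, cout = full_adder(a1, b1, c)
    return cout, s1, s0          # (carry out, sum bit 1, sum bit 0)

# clocked finite-state machine: a 2-bit counter whose state register is
# updated on each clock tick by the adder above (state <- state + 1)
state = (0, 0)
for tick in range(5):
    carry, s1, s0 = add2(state[0], state[1], 0, 1)
    state = (s1, s0)
    print(f"tick {tick}: state={state}")

print("2 + 3 =", add2(1, 0, 1, 1))  # -> (1, 0, 1), i.e. binary 101 = 5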
Technology of machine tools. Volume 2. Machine tool systems management and utilization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomson, A.R.
1980-10-01
The Machine Tool Task Force (MTTF) was formed to characterize the state of the art of machine tool technology and to identify promising future directions of this technology. This volume is one of a five-volume series that presents the MTTF findings; reports on various areas of the technology were contributed by experts in those areas.
Code of Federal Regulations, 2010 CFR
2010-07-01
... vending facilities, including vending machines, on property controlled by the Department of the Treasury... States. Treasury bureaus shall ensure that the collection and distribution of vending machine income from vending machines on Treasury-controlled property shall be in compliance with the regulations set forth in...
Cooperative analysis expert situation assessment research
NASA Technical Reports Server (NTRS)
Mccown, Michael G.
1987-01-01
For the past few decades, Rome Air Development Center (RADC) has been conducting research in Artificial Intelligence (AI). When the recent advances in hardware technology made many AI techniques practical, the Intelligence and Reconnaissance Directorate of RADC initiated an applications program entitled Knowledge Based Intelligence Systems (KBIS). The goal of the program is the development of a generic Intelligent Analyst System, an open machine with the framework for intelligence analysis, natural language processing, and man-machine interface techniques, needing only the specific problem domain knowledge to be operationally useful. The development of KBIS is described.
Optimized extreme learning machine for urban land cover classification using hyperspectral imagery
NASA Astrophysics Data System (ADS)
Su, Hongjun; Tian, Shufang; Cai, Yue; Sheng, Yehua; Chen, Chen; Najafian, Maryam
2017-12-01
This work presents a new urban land cover classification framework using the firefly algorithm (FA) optimized extreme learning machine (ELM). FA is adopted to optimize the regularization coefficient C and Gaussian kernel σ for kernel ELM. Additionally, effectiveness of spectral features derived from an FA-based band selection algorithm is studied for the proposed classification task. Three sets of hyperspectral databases were recorded using different sensors, namely HYDICE, HyMap, and AVIRIS. Our study shows that the proposed method outperforms traditional classification algorithms such as SVM and reduces computational cost significantly.
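A hedged sketch of the kernel ELM being tuned in this framework, with a plain grid search standing in for the firefly algorithm; the data, hyperparameter grids, and one-hot target encoding are illustrative assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

def rbf(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
T = np.eye(2)[ytr]                  # one-hot targets

best = None
for C in [0.1, 1, 10, 100]:         # grid search stands in for firefly search
    for sigma in [0.5, 1, 2, 4]:
        K = rbf(Xtr, Xtr, sigma)
        # closed-form kernel ELM output weights: (I/C + K)^-1 T
        alpha = np.linalg.solve(np.eye(len(Xtr)) / C + K, T)
        pred = rbf(Xte, Xtr, sigma) @ alpha
        acc = (pred.argmax(1) == yte).mean()
        if best is None or acc > best[0]:
            best = (acc, C, sigma)
print("best accuracy %.3f at C=%s sigma=%s" % best)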
Computational Foundations of Natural Intelligence
van Gerven, Marcel
2017-01-01
New developments in AI and neuroscience are revitalizing the quest to understanding natural intelligence, offering insight about how to equip machines with human-like capabilities. This paper reviews some of the computational principles relevant for understanding natural intelligence and, ultimately, achieving strong AI. After reviewing basic principles, a variety of computational modeling approaches is discussed. Subsequently, I concentrate on the use of artificial neural networks as a framework for modeling cognitive processes. This paper ends by outlining some of the challenges that remain to fulfill the promise of machines that show human-like intelligence. PMID:29375355
Computable visually observed phenotype ontological framework for plants
2011-01-01
Background The ability to search for and precisely compare similar phenotypic appearances within and across species has vast potential in plant science and genetic research. The difficulty in doing so lies in the fact that many visual phenotypic data, especially visually observed phenotypes that often times cannot be directly measured quantitatively, are in the form of text annotations, and these descriptions are plagued by semantic ambiguity, heterogeneity, and low granularity. Though several bio-ontologies have been developed to standardize phenotypic (and genotypic) information and permit comparisons across species, these semantic issues persist and prevent precise analysis and retrieval of information. A framework suitable for the modeling and analysis of precise computable representations of such phenotypic appearances is needed. Results We have developed a new framework called the Computable Visually Observed Phenotype Ontological Framework for plants. This work provides a novel quantitative view of descriptions of plant phenotypes that leverages existing bio-ontologies and utilizes a computational approach to capture and represent domain knowledge in a machine-interpretable form. This is accomplished by means of a robust and accurate semantic mapping module that automatically maps high-level semantics to low-level measurements computed from phenotype imagery. The framework was applied to two different plant species with semantic rules mined and an ontology constructed. Rule quality was evaluated and showed high quality rules for most semantics. This framework also facilitates automatic annotation of phenotype images and can be adopted by different plant communities to aid in their research. Conclusions The Computable Visually Observed Phenotype Ontological Framework for plants has been developed for more efficient and accurate management of visually observed phenotypes, which play a significant role in plant genomics research. The uniqueness of this framework is its ability to bridge the knowledge of informaticians and plant science researchers by translating descriptions of visually observed phenotypes into standardized, machine-understandable representations, thus enabling the development of advanced information retrieval and phenotype annotation analysis tools for the plant science community. PMID:21702966
An Adaptive Genetic Association Test Using Double Kernel Machines.
Zhan, Xiang; Epstein, Michael P; Ghosh, Debashis
2015-10-01
Recently, gene set-based approaches have become very popular in gene expression profiling studies for assessing how genetic variants are related to disease outcomes. Since most genes are not differentially expressed, existing pathway tests considering all genes within a pathway suffer from considerable noise and power loss. Moreover, for a differentially expressed pathway, it is of interest to select important genes that drive the effect of the pathway. In this article, we propose an adaptive association test using double kernel machines (DKM), which can both select important genes within the pathway as well as test for the overall genetic pathway effect. This DKM procedure first uses the garrote kernel machines (GKM) test for the purposes of subset selection and then the least squares kernel machine (LSKM) test for testing the effect of the subset of genes. An appealing feature of the kernel machine framework is that it can provide a flexible and unified method for multi-dimensional modeling of the genetic pathway effect allowing for both parametric and nonparametric components. This DKM approach is illustrated with application to simulated data as well as to data from a neuroimaging genetics study.
Testing and Validating Machine Learning Classifiers by Metamorphic Testing
Xie, Xiaoyuan; Ho, Joshua W. K.; Murphy, Christian; Kaiser, Gail; Xu, Baowen; Chen, Tsong Yueh
2011-01-01
Machine Learning algorithms have provided core functionality to many application domains - such as bioinformatics, computational linguistics, etc. However, it is difficult to detect faults in such applications because often there is no “test oracle” to verify the correctness of the computed outputs. To help address the software quality, in this paper we present a technique for testing the implementations of machine learning classification algorithms which support such applications. Our approach is based on the technique “metamorphic testing”, which has been shown to be effective to alleviate the oracle problem. Also presented include a case study on a real-world machine learning application framework, and a discussion of how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. We also conduct mutation analysis and cross-validation, which reveal that our method has high effectiveness in killing mutants, and that observing expected cross-validation result alone is not sufficiently effective to detect faults in a supervised classification program. The effectiveness of metamorphic testing is further confirmed by the detection of real faults in a popular open-source classification program. PMID:21532969
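One such metamorphic relation is easy to demonstrate: consistently permuting feature order in both training and test data must leave k-NN predictions unchanged, since Euclidean distances are permutation-invariant. A small sketch of the idea (not the paper's framework):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=6, random_state=0)
Xtest = X[:20]

perm = np.random.default_rng(0).permutation(X.shape[1])
pred_orig = KNeighborsClassifier().fit(X, y).predict(Xtest)
pred_perm = KNeighborsClassifier().fit(X[:, perm], y).predict(Xtest[:, perm])

# A violation would signal a fault even without an exact "expected output".
assert (pred_orig == pred_perm).all(), "metamorphic relation violated"
print("feature-permutation relation holds on all follow-up test cases")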
Taber, Daniel R; Chriqui, Jamie F; Vuillaume, Renee; Chaloupka, Frank J
2014-01-01
Sodas are widely sold in vending machines and other school venues in the United States, particularly in high school. Research suggests that policy changes have reduced soda access, but the impact of reduced access on consumption is unclear. This study was designed to identify student, environmental, or policy characteristics that modify the associations between school vending machines and student dietary behaviors. Data on school vending machine access and student diet were obtained as part of the National Youth Physical Activity and Nutrition Study (NYPANS) and linked to state-level data on soda taxes, restaurant taxes, and state laws governing the sale of soda in schools. Regression models were used to: 1) estimate associations between vending machine access and soda consumption, fast food consumption, and lunch source, and 2) determine if associations were modified by state soda taxes, restaurant taxes, laws banning in-school soda sales, or student characteristics (race/ethnicity, sex, home food access, weight loss behaviors). Contrary to the hypothesis, students tended to consume 0.53 fewer servings of soda/week (95% CI: -1.17, 0.11) and consume fast food on 0.24 fewer days/week (95% CI: -0.44, -0.05) if they had in-school access to vending machines. They were also less likely to consume soda daily (23.9% vs. 27.9%, average difference = -4.02, 95% CI: -7.28, -0.76). However, these inverse associations were observed primarily among states with lower soda and restaurant tax rates (relative to general food tax rates) and states that did not ban in-school soda sales. Associations did not vary by any student characteristics except for weight loss behaviors. Isolated changes to the school food environment may have unintended consequences unless policymakers incorporate other initiatives designed to discourage overall soda consumption.
Taber, Daniel R.; Chriqui, Jamie F.; Vuillaume, Renee; Chaloupka, Frank J.
2014-01-01
Background Sodas are widely sold in vending machines and other school venues in the United States, particularly in high school. Research suggests that policy changes have reduced soda access, but the impact of reduced access on consumption is unclear. This study was designed to identify student, environmental, or policy characteristics that modify the associations between school vending machines and student dietary behaviors. Methods Data on school vending machine access and student diet were obtained as part of the National Youth Physical Activity and Nutrition Study (NYPANS) and linked to state-level data on soda taxes, restaurant taxes, and state laws governing the sale of soda in schools. Regression models were used to: 1) estimate associations between vending machine access and soda consumption, fast food consumption, and lunch source, and 2) determine if associations were modified by state soda taxes, restaurant taxes, laws banning in-school soda sales, or student characteristics (race/ethnicity, sex, home food access, weight loss behaviors). Results Contrary to the hypothesis, students tended to consume 0.53 fewer servings of soda/week (95% CI: -1.17, 0.11) and consume fast food on 0.24 fewer days/week (95% CI: -0.44, -0.05) if they had in-school access to vending machines. They were also less likely to consume soda daily (23.9% vs. 27.9%, average difference = -4.02, 95% CI: -7.28, -0.76). However, these inverse associations were observed primarily among states with lower soda and restaurant tax rates (relative to general food tax rates) and states that did not ban in-school soda sales. Associations did not vary by any student characteristics except for weight loss behaviors. Conclusion Isolated changes to the school food environment may have unintended consequences unless policymakers incorporate other initiatives designed to discourage overall soda consumption. PMID:25083906
Software platform for managing the classification of error- related potentials of observers
NASA Astrophysics Data System (ADS)
Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.
2015-09-01
Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers), contain a negative waveform in the Evoked Potentials (EPs) of the actors that commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, k-nearest neighbour. Next the classifier can be used for classifying any EP curve that has been inputted to the database.
Virtual screening of inorganic materials synthesis parameters with deep learning
NASA Astrophysics Data System (ADS)
Kim, Edward; Huang, Kevin; Jegelka, Stefanie; Olivetti, Elsa
2017-12-01
Virtual materials screening approaches have proliferated in the past decade, driven by rapid advances in first-principles computational techniques, and machine-learning algorithms. By comparison, computationally driven materials synthesis screening is still in its infancy, and is mired by the challenges of data sparsity and data scarcity: Synthesis routes exist in a sparse, high-dimensional parameter space that is difficult to optimize over directly, and, for some materials of interest, only scarce volumes of literature-reported syntheses are available. In this article, we present a framework for suggesting quantitative synthesis parameters and potential driving factors for synthesis outcomes. We use a variational autoencoder to compress sparse synthesis representations into a lower dimensional space, which is found to improve the performance of machine-learning tasks. To realize this screening framework even in cases where there are few literature data, we devise a novel data augmentation methodology that incorporates literature synthesis data from related materials systems. We apply this variational autoencoder framework to generate potential SrTiO3 synthesis parameter sets, propose driving factors for brookite TiO2 formation, and identify correlations between alkali-ion intercalation and MnO2 polymorph selection.
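A minimal variational autoencoder sketch in the spirit of the compression step, using PyTorch; the layer sizes, latent dimension, and sparse random stand-in data are assumptions, not the paper's architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, n_in=64, n_latent=4):
        super().__init__()
        self.enc = nn.Linear(n_in, 32)
        self.mu = nn.Linear(32, n_latent)
        self.logvar = nn.Linear(32, n_latent)
        self.dec = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                 nn.Linear(32, n_in))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

x = (torch.rand(256, 64) < 0.1).float()  # sparse stand-in "synthesis vectors"
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    recon, mu, logvar = model(x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = F.mse_loss(recon, x) + 1e-3 * kl  # reconstruction + KL regularizer
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final loss:", float(loss))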
Assawamakin, Anunchai; Prueksaaroon, Supakit; Kulawonganunchai, Supasak; Shaw, Philip James; Varavithya, Vara; Ruangrajitpakorn, Taneth; Tongsima, Sissades
2013-01-01
Identification of suitable biomarkers for accurate prediction of phenotypic outcomes is a goal for personalized medicine. However, current machine learning approaches are either too complex or perform poorly. Here, a novel two-step machine-learning framework is presented to address this need. First, a Naïve Bayes estimator is used to rank features, the top-ranked of which are most likely to contain the most informative features for prediction of the underlying biological classes. The top-ranked features are then used in a Hidden Naïve Bayes classifier to construct a classification prediction model from these filtered attributes. In order to obtain the minimum set of the most informative biomarkers, the bottom-ranked features are successively removed from the Naïve Bayes-filtered feature list one at a time, and the classification accuracy of the Hidden Naïve Bayes classifier is checked for each pruned feature set. The performance of the proposed two-step Bayes classification framework was tested on different types of -omics datasets including gene expression microarray, single nucleotide polymorphism microarray (SNParray), and surface-enhanced laser desorption/ionization time-of-flight (SELDI-TOF) proteomic data. The proposed two-step Bayes classification framework performed as well as and, in some cases, better than other classification methods in terms of prediction accuracy, minimum number of classification markers, and computational time.
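The two-step scheme can be sketched as follows. scikit-learn's mutual-information ranking and GaussianNB stand in for the Naïve Bayes ranking estimator and the Hidden Naïve Bayes classifier, which scikit-learn does not provide, so this is an analogous sketch rather than the authors' implementation.

```python
# Rank features, then successively drop the lowest-ranked feature and keep
# the smallest set that preserves cross-validated accuracy.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

def minimal_biomarker_set(X, y, top_k=50):
    # Step 1: rank features and keep the top_k most informative ones.
    ranking = np.argsort(mutual_info_classif(X, y))[::-1][:top_k]
    best_feats, best_acc = ranking, 0.0
    # Step 2: prune from the bottom of the ranked list, one feature at a time.
    for n in range(top_k, 1, -1):
        feats = ranking[:n]
        acc = cross_val_score(GaussianNB(), X[:, feats], y, cv=5).mean()
        if acc >= best_acc:          # prefer the smaller, equally accurate set
            best_feats, best_acc = feats, acc
    return best_feats, best_acc

X = np.random.randn(100, 200)        # e.g. 200 expression probes
y = np.random.randint(0, 2, 100)
markers, acc = minimal_biomarker_set(X, y)
print(len(markers), acc)
```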
Gambling with stimulus payments: feeding gaming machines with federal dollars.
Lye, Jenny; Hirschberg, Joe
2014-09-01
In late 2008 and early 2009 the Australian Federal Government introduced a series of economic stimulus packages designed to maintain consumer spending in the early days of the Great Recession. When these packages were initiated, the media suggested that, given the widespread availability of electronic gaming machines (EGMs, e.g. slot machines, poker machines, video lottery terminals) in Australia, much of the stimulus would be spent on the EGMs. Using state-level monthly data we estimate that the stimulus packages led to an increase of 26% in EGM revenues. This also resulted in over $60 million in additional tax revenue for State Governments. We also estimate a short-run aggregate income demand elasticity for EGMs to be approximately 2.
Energy: Machines, Science (Experimental): 5311.03.
ERIC Educational Resources Information Center
Castaldi, June P.
This unit of instruction was designed as an introductory course in energy involving six simple machines, electricity, magnetism, and motion. The booklet lists the relevant state-adopted texts and states the performance objectives for the unit. It provides an outline of the course content and suggests experiments, demonstrations, field trips, and…
22 CFR 121.10 - Forgings, castings and machined bodies.
Code of Federal Regulations, 2013 CFR
2013-04-01
... STATES MUNITIONS LIST Enumeration of Articles § 121.10 Forgings, castings and machined bodies. Articles on the U.S. Munitions List include articles in a partially completed state (such as forgings... identifiable as defense articles. If the end-item is an article on the U.S. Munitions List (including...
22 CFR 121.10 - Forgings, castings and machined bodies.
Code of Federal Regulations, 2011 CFR
2011-04-01
... STATES MUNITIONS LIST Enumeration of Articles § 121.10 Forgings, castings and machined bodies. Articles on the U.S. Munitions List include articles in a partially completed state (such as forgings... identifiable as defense articles. If the end-item is an article on the U.S. Munitions List (including...
22 CFR 121.10 - Forgings, castings and machined bodies.
Code of Federal Regulations, 2012 CFR
2012-04-01
... STATES MUNITIONS LIST Enumeration of Articles § 121.10 Forgings, castings and machined bodies. Articles on the U.S. Munitions List include articles in a partially completed state (such as forgings... identifiable as defense articles. If the end-item is an article on the U.S. Munitions List (including...
BlueSky Cloud Framework: An E-Learning Framework Embracing Cloud Computing
NASA Astrophysics Data System (ADS)
Dong, Bo; Zheng, Qinghua; Qiao, Mu; Shu, Jian; Yang, Jie
Currently, E-Learning has grown into a widely accepted way of learning. With the huge growth of users, services, education contents and resources, E-Learning systems are facing challenges of optimizing resource allocations, dealing with dynamic concurrency demands, handling rapid storage growth requirements and controlling costs. In this paper, an E-Learning framework based on cloud computing is presented, namely the BlueSky cloud framework. In particular, the architecture and core components of the BlueSky cloud framework are introduced. In the BlueSky cloud framework, physical machines are virtualized and allocated on demand for E-Learning systems. Moreover, the BlueSky cloud framework combines traditional middleware functions (such as load balancing and data caching) into a general architecture that serves E-Learning systems. It delivers reliable, scalable and cost-efficient services to E-Learning systems, and E-Learning organizations can establish systems through these services in a simple way. The BlueSky cloud framework addresses the challenges faced by E-Learning and improves the performance, availability and scalability of E-Learning systems.
Cast iron cutting with nano TiN and multilayer TiN-CrN coated inserts
NASA Astrophysics Data System (ADS)
Perucca, M.; Durante, S.; Semmler, U.; Rüger, C.; Fuentes, G. G.; Almandoz, E.
2012-09-01
During the past decade great success has been achieved in the development of duplex and multilayer multi-functional surface systems. Among these surface systems, nanoscale multilayer coatings exhibit outstanding properties. Within the framework of the M3-2S project, funded under the 7th European Framework Programme, several nanoscale multilayer coatings have been developed and investigated for experimental and industrial validation. This paper shows the performance of TiN and TiN/CrN nanoscale multilayer coatings on WC cutting inserts when machining GJL250 cast iron. The thin films have been deposited by cathodic arc evaporation in an industrial PVD system. The multilayer deposition characteristics and properties are presented. The inserts have been investigated in systematic cutting experiments on cast iron bars on a turning machine specifically equipped for force measurements, accompanied by wear determination. Furthermore, equivalent experiments have been carried out on an industrial turning unit. Industrial validation criteria have been applied to assess the comparative performance of the coatings. The choice of the material and the machined parts is driven by an interest in automotive applications. The industrial tests show the need to further optimise the multi-scale modelling approach in order to reduce the lead time of coating development as well as to improve simulation reliability.
Target-Oriented High-Resolution SAR Image Formation via Semantic Information Guided Regularizations
NASA Astrophysics Data System (ADS)
Hou, Biao; Wen, Zaidao; Jiao, Licheng; Wu, Qian
2018-04-01
The sparsity-regularized synthetic aperture radar (SAR) imaging framework has shown remarkable performance in generating feature-enhanced high-resolution images, in which a sparsity-inducing regularizer is involved by exploiting the sparsity priors of some visual features in the underlying image. However, since simple priors on low-level features are insufficient to describe the different semantic contents in the image, this type of regularizer is incapable of distinguishing between the target of interest and unconcerned background clutter. As a consequence, the features belonging to the target and the clutter are affected simultaneously in the generated image, without regard to their underlying semantic labels. To address this problem, we propose a novel semantic information guided framework for target-oriented SAR image formation, which aims at enhancing the scatterers of the target of interest while suppressing the background clutter. First, we develop a new semantics-specific regularizer for image formation by exploiting the statistical properties of different semantic categories in a target scene SAR image. In order to infer the semantic label for each pixel in an unsupervised way, we moreover introduce a novel high-level prior-driven regularizer and some semantic causal rules derived from prior knowledge. Finally, our regularized framework for image formation is derived as a simple iteratively reweighted $\ell_1$ minimization problem which can be conveniently solved by many off-the-shelf solvers. Experimental results demonstrate the effectiveness and superiority of our framework for SAR image formation in terms of target enhancement and clutter suppression, compared with the state of the art. Additionally, the proposed framework opens a new direction of devoting machine learning strategies to image formation, which can benefit subsequent decision-making tasks.
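The final optimization step reduces to a generic iteratively reweighted $\ell_1$ problem whose structure can be sketched independently of the SAR specifics. In the sketch below, a random measurement matrix stands in for the actual observation operator, and scikit-learn's Lasso serves as the off-the-shelf weighted-$\ell_1$ solver.

```python
# Generic iteratively reweighted l1: min_x ||y - A x||_2^2 + ||W x||_1,
# with the diagonal weights W refreshed from the previous iterate.
import numpy as np
from sklearn.linear_model import Lasso

def reweighted_l1(A, y, n_iter=5, eps=1e-3, alpha=0.01):
    n = A.shape[1]
    w = np.ones(n)                      # initial (uniform) weights
    x = np.zeros(n)
    for _ in range(n_iter):
        # Solve the weighted l1 problem by rescaling the columns of A:
        # with z = W x, the data term becomes ||y - (A W^-1) z||_2^2.
        solver = Lasso(alpha=alpha, max_iter=10000)
        solver.fit(A / w, y)
        x = solver.coef_ / w
        w = 1.0 / (np.abs(x) + eps)     # large weight -> push toward zero
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [2.0, -1.5, 3.0]
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = reweighted_l1(A, y)
print(np.flatnonzero(np.abs(x_hat) > 0.5))   # recovered support
```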
High-speed machining of Space Shuttle External Tank (ET) panels
NASA Technical Reports Server (NTRS)
Miller, J. A.
1983-01-01
Potential production rates and project cost savings achieved by converting the conventional machining process in manufacturing shuttle external tank panels to high speed machining (HSM) techniques were studied. Savings were projected from the comparison of current production rates with HSM rates and with rates attainable on new conventional machines. The HSM estimates were also based on rates attainable by retrofitting existing conventional equipment with high speed spindle motors and rates attainable using new state of the art machines designed and built for HSM.
Overview of the Machine-Tool Task Force
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sutton, G.P.
1981-06-08
The Machine Tool Task Force (MTTF) surveyed the state of the art of machine tool technology for material removal over two and one-half years. This overview gives a brief summary of the approach, the specific subjects covered, the principal conclusions and some of the key recommendations aimed at improving the technology and advancing the productivity of machine tools. The Task Force consisted of 123 experts from the US and other countries. Their findings are documented in a five-volume report, Technology of Machine Tools.
Imbalance aware lithography hotspot detection: a deep learning approach
NASA Astrophysics Data System (ADS)
Yang, Haoyu; Luo, Luyang; Su, Jing; Lin, Chenxi; Yu, Bei
2017-07-01
With the advancement of very large scale integrated circuits (VLSI) technology nodes, lithographic hotspots become a serious problem that affects manufacture yield. Lithography hotspot detection at the post-OPC stage is imperative to check potential circuit failures when transferring designed patterns onto silicon wafers. Although conventional lithography hotspot detection methods, such as machine learning, have gained satisfactory performance, with the extreme scaling of transistor feature size and layout patterns growing in complexity, conventional methodologies may suffer from performance degradation. For example, manual or ad hoc feature extraction in a machine learning framework may lose important information when predicting potential errors in ultra-large-scale integrated circuit masks. We present a deep convolutional neural network (CNN) that targets representative feature learning in lithography hotspot detection. We carefully analyze the impact and effectiveness of different CNN hyperparameters, through which a hotspot-detection-oriented neural network model is established. Because hotspot patterns are always in the minority in VLSI mask design, the training dataset is highly imbalanced. In this situation, a neural network is no longer reliable, because a trained model with high classification accuracy may still suffer from a high number of false negative results (missing hotspots), which is fatal in hotspot detection problems. To address the imbalance problem, we further apply hotspot upsampling and random-mirror flipping before training the network. Experimental results show that our proposed neural network model achieves comparable or better performance on the ICCAD 2012 contest benchmark compared to state-of-the-art hotspot detectors based on deep or representative machine learning.
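The imbalance-handling step (hotspot upsampling plus mirror flipping) is simple to sketch; the clip sizes and hotspot rate below are illustrative assumptions, not the benchmark's values.

```python
# Balance a hotspot-detection training set: augment the minority class with
# mirror flips, then upsample with replacement to match the majority class.
import numpy as np

def balance_with_flips(clips, labels):
    """clips: (n, h, w) grayscale layout clips; labels: 1 = hotspot."""
    hot = clips[labels == 1]
    non = clips[labels == 0]
    # Mirror flips (left-right and up-down) enlarge the hotspot pool first.
    hot_aug = np.concatenate([hot, hot[:, :, ::-1], hot[:, ::-1, :]])
    # Then upsample with replacement until the two classes match in size.
    idx = np.random.randint(0, len(hot_aug), size=len(non))
    X = np.concatenate([non, hot_aug[idx]])
    y = np.concatenate([np.zeros(len(non)), np.ones(len(non))])
    perm = np.random.permutation(len(X))   # shuffle before training
    return X[perm], y[perm]

clips = np.random.rand(1000, 64, 64)
labels = (np.random.rand(1000) < 0.05).astype(int)   # ~5% hotspots
X, y = balance_with_flips(clips, labels)
print(X.shape, y.mean())                             # balanced classes
```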
VML 3.0 Reactive Sequencing Objects and Matrix Math Operations for Attitude Profiling
NASA Technical Reports Server (NTRS)
Grasso, Christopher A.; Riedel, Joseph E.
2012-01-01
VML (Virtual Machine Language) has been used as the sequencing flight software on over a dozen JPL deep-space missions, most recently flying on GRAIL and JUNO. In conjunction with the NASA SBIR entitled "Reactive Rendezvous and Docking Sequencer", VML version 3.0 has been enhanced to include object-oriented element organization, built-in queuing operations, and sophisticated matrix / vector operations. These improvements allow VML scripts to easily perform much of the work that formerly would have required a great deal of expensive flight software development to realize. Autonomous turning and tracking makes considerable use of new VML features. Profiles generated by flight software are managed using object-oriented VML data constructs executed in discrete time by the VML flight software. VML vector and matrix operations provide the ability to calculate and supply quaternions to the attitude controller flight software which produces torque requests. Using VML-based attitude planning components eliminates flight software development effort, and reduces corresponding costs. In addition, the direct management of the quaternions allows turning and tracking to be tied in with sophisticated high-level VML state machines. These state machines provide autonomous management of spacecraft operations during critical tasks like a hypothetical Mars sample return rendezvous and docking. State machines created for autonomous science observations can also use this sort of attitude planning system, allowing heightened autonomy levels to reduce operations costs. VML state machines cannot be considered merely sequences; they are reactive logic constructs capable of autonomous decision making within a well-defined domain. The state machine approach enabled by VML 3.0 is progressing toward flight capability with a wide array of applicable mission activities.
exprso: an R-package for the rapid implementation of machine learning algorithms.
Quinn, Thomas; Tylee, Daniel; Glatt, Stephen
2016-01-01
Machine learning plays a major role in many scientific investigations. However, non-expert programmers may struggle to implement the elaborate pipelines necessary to build highly accurate and generalizable models. We introduce exprso, a new R package that is an intuitive machine learning suite designed specifically for non-expert programmers. Built initially for the classification of high-dimensional data, exprso uses an object-oriented framework to encapsulate a number of common analytical methods into a series of interchangeable modules. This includes modules for feature selection, classification, high-throughput parameter grid-searching, elaborate cross-validation schemes (e.g., Monte Carlo and nested cross-validation), ensemble classification, and prediction. In addition, exprso also supports multi-class classification (through the 1-vs-all generalization of binary classifiers) and the prediction of continuous outcomes.
An Oracle-based co-training framework for writer identification in offline handwriting
NASA Astrophysics Data System (ADS)
Porwal, Utkarsh; Rajan, Sreeranga; Govindaraju, Venu
2012-01-01
State-of-the-art techniques for writer identification have been centered primarily on enhancing the performance of the system for writer identification. Machine learning algorithms have been used extensively to improve the accuracy of such systems, assuming a sufficient amount of data is available for training. Little attention has been paid to the prospect of harnessing the information contained in large amounts of un-annotated data. This paper focuses on a co-training based framework that can be used for iterative labeling of the unlabeled data set, exploiting the independence between the multiple views (features) of the data. This paradigm relaxes the assumption of the sufficiency of the available data and tries to generate labeled data from the unlabeled data set along with improving the accuracy of the system. However, the performance of a co-training based framework depends on the effectiveness of the algorithm used for the selection of data points to be added to the labeled set. We propose an Oracle-based approach for data selection that learns the patterns in the score distribution of classes for labeled data points and then predicts the labels (writers) of the unlabeled data points. This method statistically learns the class distribution and predicts the most probable class, unlike traditional selection algorithms based on heuristic approaches. We conducted experiments on the publicly available IAM dataset and illustrate the efficacy of the proposed approach.
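A compact version of the co-training loop reads as follows. A fixed confidence threshold stands in for the paper's Oracle, which instead learns the score distributions of the labeled data, so this is a simplified analog rather than the proposed selection method; the two feature views and data are synthetic.

```python
# Co-training: two classifiers on independent feature views grow a shared
# labeled set by promoting confidently scored unlabeled points.
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(views, y, labeled, n_rounds=5, threshold=0.9):
    labeled = set(labeled)
    for _ in range(n_rounds):
        for X in views:                       # each view trains its own model
            idx = sorted(labeled)
            clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
            pool = [i for i in range(len(y)) if i not in labeled]
            if not pool:
                return y
            proba = clf.predict_proba(X[pool])
            # Promote points whose predicted label is confident enough; the
            # other view sees them as labeled data in the next pass.
            for i, p in zip(pool, proba):
                if p.max() >= threshold:
                    y[i] = p.argmax()
                    labeled.add(i)
    return y

rng = np.random.default_rng(0)
Xa = rng.standard_normal((200, 10))        # view 1 (e.g. contour features)
Xb = rng.standard_normal((200, 15))        # view 2 (e.g. texture features)
y = (Xa[:, 0] + Xb[:, 0] > 0).astype(int)  # ground truth, mostly hidden
labels = co_train([Xa, Xb], y.copy(), labeled=range(20))
print((labels == y).mean())                # agreement with the hidden truth
```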
NASA Astrophysics Data System (ADS)
Robert-Perron, Etienne; Blais, Carl; Pelletier, Sylvain; Thomas, Yannig
2007-06-01
The green machining process is an interesting approach to addressing the poor machining behavior of high-performance powder metallurgy (PM) steels. This process appears to be a promising method for extending tool life and reducing machining costs. Recent improvements in binder/lubricant technologies have led to high green strength systems that enable green machining. So far, tool wear has been considered negligible when characterizing the machinability of green PM specimens. This inaccurate assumption may lead to the selection of suboptimum cutting conditions. The first part of this study involves the optimization of the machining parameters to minimize the effects of tool wear on machinability in the turning of green PM components. The second part of our work compares the sintered mechanical properties of components machined in the green state with others machined after sintering.
Implementing finite state machines in a computer-based teaching system
NASA Astrophysics Data System (ADS)
Hacker, Charles H.; Sitte, Renate
1999-09-01
Finite State Machines (FSM) are models for functions commonly implemented in digital circuits such as timers, remote controls, and vending machines. Teaching FSM is core in the curriculum of many university digital electronic or discrete mathematics subjects. Students often have difficulties grasping the theoretical concepts in the design and analysis of FSM. This has prompted the author to develop MS-Windows™-compatible software, WinState, that provides a tutorial-style teaching aid for understanding the mechanisms of FSM. The animated computer screen is ideal for visually conveying the required design and analysis procedures. WinState complements other software for combinatorial logic previously developed by the author, and enhances the existing teaching package by adding sequential logic circuits. WinState enables the construction of a student's own FSM, which can be simulated to test the design for functionality and possible errors.
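The kind of FSM a student would build in such a tool can be captured in a few lines as a table-driven machine. The 30-cent vending machine below is an illustrative example, not one of WinState's bundled exercises.

```python
# Table-driven FSM: states are the coin amounts banked so far; each
# (state, input) pair maps to (next state, output).
TRANSITIONS = {
    ("0c", "coin10"): ("10c", None),
    ("0c", "coin20"): ("20c", None),
    ("10c", "coin10"): ("20c", None),
    ("10c", "coin20"): ("0c", "vend"),
    ("20c", "coin10"): ("0c", "vend"),
    ("20c", "coin20"): ("0c", "vend+change"),
}

def run(events, state="0c"):
    for ev in events:
        state, output = TRANSITIONS[(state, ev)]
        if output:
            print(output)
    return state

final = run(["coin10", "coin20", "coin20", "coin10", "coin10"])
print("final state:", final)   # two vends fire, 10c left banked
```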
Multi-category micro-milling tool wear monitoring with continuous hidden Markov models
NASA Astrophysics Data System (ADS)
Zhu, Kunpeng; Wong, Yoke San; Hong, Geok Soon
2009-02-01
In-process monitoring of tool conditions is important in micro-machining due to the high precision requirement and high tool wear rate. Tool condition monitoring in micro-machining poses new challenges compared to conventional machining. In this paper, a multi-category classification approach is proposed for tool flank wear state identification in micro-milling. Continuous Hidden Markov models (HMMs) are adapted for modeling of the tool wear process in micro-milling, and estimation of the tool wear state given the cutting force features. For noise robustness, the HMM outputs are passed through a median filter to suppress spurious tool-state transitions caused by the high noise level. A detailed study on the selection of HMM structures for tool condition monitoring (TCM) is presented. Case studies on tool state estimation in the micro-milling of pure copper and steel demonstrate the effectiveness and potential of these methods.
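The monitoring chain (HMM state decoding followed by median filtering of the decoded state sequence) can be sketched with the hmmlearn library; the three-state wear model, the four-dimensional force features, and the filter window below are assumptions for illustration, not the paper's settings.

```python
# Decode a wear-state sequence from cutting-force features with a Gaussian
# HMM, then median-filter the sequence to remove isolated noise-induced flips.
import numpy as np
from hmmlearn.hmm import GaussianHMM
from scipy.signal import medfilt

force_features = np.random.randn(500, 4)    # stand-in for measured features

hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
hmm.fit(force_features)
raw_states = hmm.predict(force_features)    # per-sample wear state estimate
smoothed = medfilt(raw_states.astype(float), kernel_size=9).astype(int)
print(np.bincount(smoothed, minlength=3))   # time spent in each wear state
```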
Chriqui, Jamie F; Eidson, Shelby S; Bates, Hannalori; Kowalczyk, Shelly; Chaloupka, Frank J
2008-07-01
Junk food consumption is associated with rising obesity rates in the United States. While a "junk food" specific tax is a potential public health intervention, a majority of states already impose sales taxes on certain junk food and soft drinks. This study reviews the state sales tax variance for soft drinks and selected snack products sold through grocery stores and vending machines as of January 2007. Sales taxes vary by state, intended retail location (grocery store vs. vending machine), and product. Vended snacks and soft drinks are generally taxed at a higher rate than grocery items and other food products, indicative of a "disfavored" tax status attributed to vended items. Soft drinks, candy, and gum are taxed at higher rates than are other items examined. Similar tax schemes in other countries and the potential implications of these findings relative to the relationship between price and consumption are discussed.
Research Results Of Stress-Strain State Of Cutting Tool When Aviation Materials Turning
NASA Astrophysics Data System (ADS)
Serebrennikova, A. G.; Nikolaeva, E. P.; Savilov, A. V.; Timofeev, S. A.; Pyatykh, A. S.
2018-01-01
Titanium alloys and stainless steels are among the most difficult materials to machine. The state of the cutting edge of turning tools after machining titanium alloys, high-strength aluminium alloys, and corrosion-resistant high-alloy steel has been studied. Cutting forces and the areas of chip contact with the rake surface of the cutter were measured. The relationship between cutting forces and residual stresses is shown. Relations between cutting forces, residual stresses, and the cutting tool rake angle were obtained. Measurements of residual stresses were performed by x-ray diffraction.
Tricker, Christopher; Rock, Adam J; Clark, Gavin I
2016-06-01
In order to enhance our understanding of the nature of poker-machine problem-gambling, a community sample of 37 poker-machine gamblers (mean age = 32 years, mean PGSI = 5; PGSI = Problem Gambling Severity Index) were assessed for urge to gamble (responses on a visual analogue scale) and altered state of consciousness (assessed by the Altered State of Awareness dimension of the Phenomenology of Consciousness Inventory) at baseline, after a neutral cue, and after a gambling cue. It was found that (a) problem-gambling severity (PGSI score) predicted increase in urge (from neutral cue to gambling cue, controlling for baseline; sr² = .19, p = .006) and increase in altered state of consciousness (from neutral cue to gambling cue, controlling for baseline; sr² = .57, p < .001), and (b) increase in altered state of consciousness (from neutral cue to gambling cue) mediated the relationship between problem-gambling severity and increase in urge (from neutral cue to gambling cue; κ² = .40, 99% CI [.08, .71]). These findings suggest that cue-reactive altered state of consciousness is an important component of cue-reactive urge in poker-machine problem-gamblers.
NASA Astrophysics Data System (ADS)
Bosse, Stefan
2013-05-01
Sensorial materials consisting of high-density, miniaturized, and embedded sensor networks require new robust and reliable data processing and communication approaches. Structural health monitoring is one major field of application for sensorial materials. Each sensor node provides some kind of sensor, electronics, data processing, and communication, with a strong focus on microchip-level implementation to meet the goals of miniaturization and low-power energy environments, a prerequisite for autonomous behaviour and operation. Reliability requires robustness of the entire system in the presence of node, link, data processing, and communication failures. Interaction between nodes is required to manage and distribute information. One common interaction model is the mobile agent. An agent approach provides stronger autonomy than a traditional object or remote-procedure-call based approach. Agents can decide for themselves which actions are performed, and they are capable of flexible behaviour, reacting to the environment and other agents, providing some degree of robustness. Traditionally, multi-agent systems are abstract programming models which are implemented in software and executed on program-controlled computer architectures. This approach does not scale well to the microchip level and requires fully equipped computers and communication structures, and the hardware architecture does not consider and reflect the requirements for agent processing and interaction. We propose and demonstrate a novel design paradigm for reliable distributed data processing systems and a synthesis methodology and framework for multi-agent systems implementable entirely on the microchip level with resource- and power-constrained digital logic, supporting Agent-On-Chip architectures (AoC). The agent behaviour and mobility are fully integrated on the microchip using pipelined communicating processes implemented with finite-state machines and register-transfer logic. The agent behaviour, interaction (communication), and mobility features are modelled and specified on a machine-independent abstract programming level using a state-based agent behaviour language (APL). With this APL, a high-level agent compiler is able to synthesize a hardware model (RTL, VHDL), a software model (C, ML), or a simulation model (XML) suitable for simulating a multi-agent system using the SeSAm simulator framework. Agent communication is provided by a simple tuple-space database implemented on the node level, providing fault-tolerant access to global data. A novel synthesis development kit (SynDK) based on a graph-structured database approach is introduced to support the rapid development of compilers and synthesis tools, used for example for the design and implementation of the APL compiler.
Acceleration of saddle-point searches with machine learning.
Peterson, Andrew A
2016-08-21
In atomistic simulations, the location of the saddle point on the potential-energy surface (PES) gives important information on transitions between local minima, for example, via transition-state theory. However, the search for saddle points often involves hundreds or thousands of ab initio force calls, which are typically all done at full accuracy. This results in the vast majority of the computational effort being spent calculating the electronic structure of states not important to the researcher, and very little time performing the calculation of the saddle point state itself. In this work, we describe how machine learning (ML) can reduce the number of intermediate ab initio calculations needed to locate saddle points. Since machine-learning models can learn from, and thus mimic, atomistic simulations, the saddle-point search can be conducted rapidly in the machine-learning representation. The saddle-point prediction can then be verified by an ab initio calculation; if it is incorrect, this strategically has identified regions of the PES where the machine-learning representation has insufficient training data. When these training data are used to improve the machine-learning model, the estimates greatly improve. This approach can be systematized, and in two simple example problems we demonstrate a dramatic reduction in the number of ab initio force calls. We expect that this approach and future refinements will greatly accelerate searches for saddle points, as well as other searches on the potential energy surface, as machine-learning methods see greater adoption by the atomistics community.
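The loop structure (fit a surrogate, search on the surrogate, verify with one expensive call, retrain where the surrogate fails) can be shown on a toy one-dimensional barrier with a Gaussian process as the surrogate. The actual method operates on full potential-energy surfaces with forces, which this sketch does not attempt; the energy function and tolerances are illustrative.

```python
# Surrogate-accelerated search for the barrier top on a 1-D "reaction
# coordinate": expensive calls are made only to verify or correct the model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_energy(x):                 # stand-in for an ab initio call
    return np.sin(3 * x) * np.exp(-0.1 * x**2)

grid = np.linspace(-2, 2, 400).reshape(-1, 1)
X = np.array([[-2.0], [0.0], [2.0]])     # a few true evaluations to start
y = expensive_energy(X.ravel())

for _ in range(20):
    surrogate = GaussianProcessRegressor().fit(X, y)
    guess = grid[np.argmax(surrogate.predict(grid))]  # barrier top on surrogate
    e_true = expensive_energy(guess[0])               # one verification call
    if abs(e_true - surrogate.predict(guess.reshape(1, -1))[0]) < 1e-4:
        break                       # surrogate agrees with truth at the barrier
    X = np.vstack([X, guess.reshape(1, -1)])          # retrain where it failed
    y = np.append(y, e_true)

print(f"barrier near x = {guess[0]:.3f} using {len(X)} true evaluations")
```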
Acceleration of saddle-point searches with machine learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, Andrew A., E-mail: andrew-peterson@brown.edu
In atomistic simulations, the location of the saddle point on the potential-energy surface (PES) gives important information on transitions between local minima, for example, via transition-state theory. However, the search for saddle points often involves hundreds or thousands of ab initio force calls, which are typically all done at full accuracy. This results in the vast majority of the computational effort being spent calculating the electronic structure of states not important to the researcher, and very little time performing the calculation of the saddle point state itself. In this work, we describe how machine learning (ML) can reduce the number of intermediate ab initio calculations needed to locate saddle points. Since machine-learning models can learn from, and thus mimic, atomistic simulations, the saddle-point search can be conducted rapidly in the machine-learning representation. The saddle-point prediction can then be verified by an ab initio calculation; if it is incorrect, this strategically has identified regions of the PES where the machine-learning representation has insufficient training data. When these training data are used to improve the machine-learning model, the estimates greatly improve. This approach can be systematized, and in two simple example problems we demonstrate a dramatic reduction in the number of ab initio force calls. We expect that this approach and future refinements will greatly accelerate searches for saddle points, as well as other searches on the potential energy surface, as machine-learning methods see greater adoption by the atomistics community.
NASA Technical Reports Server (NTRS)
Greer, Lawrence (Inventor)
2017-01-01
An apparatus and a computer-implemented method for generating pulses synchronized to a rising edge of a tachometer signal from rotating machinery are disclosed. For example, in one embodiment, a pulse state machine may be configured to generate a plurality of pulses, and a period state machine may be configured to determine a period for each of the plurality of pulses.
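A software analog of the two cooperating machines is easy to sketch: one state machine fires a pulse on each rising edge of the tachometer signal, the other measures the period between consecutive pulses. The threshold and sampled waveform below are illustrative, not taken from the patent.

```python
# Edge-triggered pulse generation with period measurement over a sampled
# tachometer signal.
import math

def pulses_and_periods(samples, threshold=0.5):
    prev, last_edge = 0.0, None
    events = []
    for t, v in enumerate(samples):            # t: sample index (clock ticks)
        if prev < threshold <= v:              # rising edge detected
            period = None if last_edge is None else t - last_edge
            events.append((t, period))         # pulse fired, period measured
            last_edge = t
        prev = v
    return events

# One "rotation" every 10 ticks:
sig = [math.sin(2 * math.pi * t / 10) for t in range(35)]
for t, period in pulses_and_periods(sig):
    print(f"pulse at tick {t}, period = {period}")
```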
TRS-80 at the Maine State Library.
ERIC Educational Resources Information Center
Wismer, Donald
This report describes the applications and work flow of a TRS-80 microcomputer at the Maine State Library, and provides sample computer-generated records and programs used with the TRS-80. The machine was chosen for its price, availability, and compatibility with machines already in Maine's schools. It is used for mailing list management (with…
NASA Technical Reports Server (NTRS)
Berg, Melanie D.; Label, Kenneth A.; Kim, Hak; Phan, Anthony; Seidleck, Christina
2014-01-01
Finite state machines (FSMs) are used to control operational flow in application specific integrated circuits (ASICs) and field programmable gate array (FPGA) devices. Because of their ease of interpretation, FSMs simplify the design and verification process and consequently are significant components in a synchronous design.
Analysis of impact energy to fracture un-notched charpy specimens made from railroad tank car steel
DOT National Transportation Integrated Search
2007-09-11
This paper describes a nonlinear finite element analysis (FEA) framework that examines the impact energy to fracture unnotched Charpy specimens by an oversized, nonstandard pendulum impactor called the Bulk Fracture Charpy Machine (BFCM). The...
NASA Technical Reports Server (NTRS)
Demerdash, N. A. O.
1976-01-01
The modes of operation of the brushless d.c. machine and its corresponding characteristics (current flow, torque-position, etc.) are presented. The foundations and basic principles on which the preliminary numerical model is based, are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weeratunga, S K
Ares and Kull are mature code frameworks that support ALE hydrodynamics for a variety of HEDP applications at LLNL, using two widely different meshing approaches. While Ares is based on a 2-D/3-D block-structured mesh data base, Kull is designed to support unstructured, arbitrary polygonal/polyhedral meshes. In addition, both frameworks are capable of running applications on large, distributed-memory parallel machines. Currently, both these frameworks separately support assorted collections of physics packages related to HEDP, including one for the energy deposition by laser/ion-beam ray tracing. This study analyzes the options available for developing a common laser/ion-beam ray tracing package that can be easily shared between these two code frameworks and concludes with a set of recommendations for its development.
14 CFR 382.3 - What do the terms in this rule mean?
Code of Federal Regulations, 2014 CFR
2014-01-01
... devices and medications. Automated airport kiosk means a self-service transaction machine that a carrier... machine means a continuous positive airway pressure machine. Department or DOT means the United States..., emotional or mental illness, and specific learning disabilities. The term physical or mental impairment...
Modelling and simulation of effect of ultrasonic vibrations on machining of Ti6Al4V.
Patil, Sandip; Joshi, Shashikant; Tewari, Asim; Joshi, Suhas S
2014-02-01
Titanium alloys cause high heat generation during machining and consequent rapid wear of cutting tool edges. Ultrasonic assisted turning (UAT) has been found to be very effective in the machining of various materials, especially "difficult-to-cut" materials like Ti6Al4V. The present work is a comprehensive study involving 2D FE transient simulation of UAT in the DEFORM framework and its experimental characterization. The simulation shows that UAT reduces the stress level on the cutting tool during machining compared to continuous turning (CT), barring the penetration stage, wherein both tools are subjected to identical stress levels. There is a 40-45% reduction in cutting forces and about a 48% reduction in cutting temperature in UAT relative to CT. However, the magnitude of the reduction decreases with an increase in cutting speed. The experimental analysis of the UAT process shows that the surface roughness in UAT is lower than in CT, and the UATed surfaces have a matte finish as against the glossy finish on the CTed surfaces. Microstructural observations of the chips and machined surfaces in both processes reveal that the intensity of thermal softening and shear band formation is reduced in UAT compared to CT. Copyright © 2013 Elsevier B.V. All rights reserved.
Multi-threaded Event Processing with DANA
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Lawrence; Elliott Wolin
2007-05-14
The C++ data analysis framework DANA has been written to support the next generation of Nuclear Physics experiments at Jefferson Lab commensurate with the anticipated 12 GeV upgrade. The DANA framework was designed to allow multi-threaded event processing with a minimal impact on developers of reconstruction software. This document describes how DANA implements multi-threaded event processing and compares it to simply running multiple instances of a program. Also presented are relative reconstruction rates for Pentium4, Xeon, and Opteron based machines.
Concerns of Hydrothermal Degradation in CAD/CAM Zirconia
Kim, J.-W.; Covel, N.S.; Guess, P.C.; Rekow, E.D.; Zhang, Y.
2010-01-01
Zirconia-based restorations are widely used in prosthetic dentistry; however, their susceptibility to hydrothermal degradation remains elusive. We hypothesized that CAD/CAM machining and subsequent surface treatments, i.e., grinding and/or grit-blasting, have marked effects on the hydrothermal degradation behavior of Y-TZP. CAD/CAM-machined Y-TZP plates (0.5 mm thick), both with and without subsequent grinding with various grit sizes or grit-blasting with airborne alumina particles, were subjected to accelerated aging tests in a steam autoclave. Results showed that the CAD/CAM-machined surfaces initially exhibited superior hydrothermal degradation resistance, but deteriorated at a faster rate upon prolonged autoclave treatment compared with ground and grit-blasted surfaces. The accelerated hydrothermal degradation of CAD/CAM surfaces is attributed to the CAD/CAM machining damage and the absence of surface compressive stresses in the fully sintered material. Clinical relevance for surface treatments of zirconia frameworks in terms of hydrothermal and structural stabilities is addressed. PMID:19966039
Hardware Acceleration of Adaptive Neural Algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, Conrad D.
As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.
A collaborative framework for Distributed Privacy-Preserving Support Vector Machine learning.
Que, Jialan; Jiang, Xiaoqian; Ohno-Machado, Lucila
2012-01-01
A Support Vector Machine (SVM) is a popular tool for decision support. The traditional way to build an SVM model is to estimate parameters based on a centralized repository of data. However, in the field of biomedicine, patient data are sometimes stored in local repositories or institutions where they were collected, and may not be easily shared due to privacy concerns. This creates a substantial barrier for researchers to effectively learn from the distributed data using machine learning tools like SVMs. To overcome this difficulty and promote efficient information exchange without sharing sensitive raw data, we developed a Distributed Privacy Preserving Support Vector Machine (DPP-SVM). The DPP-SVM enables privacy-preserving collaborative learning, in which a trusted server integrates "privacy-insensitive" intermediary results. The globally learned model is guaranteed to be exactly the same as learned from combined data. We also provide a free web-service (http://privacy.ucsd.edu:8080/ppsvm/) for multiple participants to collaborate and complete the SVM-learning task in an efficient and privacy-preserving manner.
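The DPP-SVM's actual protocol is not reproduced here; as an illustrative analog of the same pattern (each site sends only aggregated intermediary results to a trusted server, never raw records), the sketch below trains a shared linear SVM by sub-gradient descent on distributed data. All names, data, and hyperparameters are hypothetical.

```python
# Federated linear SVM sketch: sites exchange only gradient aggregates.
import numpy as np

def local_gradient(w, X, y, C=1.0):
    # Sub-gradient of the hinge loss on this site's private data.
    margins = y * (X @ w)
    active = margins < 1
    return -C * (y[active, None] * X[active]).sum(axis=0)

def federated_svm(sites, dim, rounds=200, lr=0.01):
    w = np.zeros(dim)
    for _ in range(rounds):
        grad = w.copy()                        # gradient of the ||w||^2/2 term
        for X, y in sites:                     # each site contributes privately
            grad += local_gradient(w, X, y)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
def make_site(n):                              # labels in {-1, +1}
    X = rng.standard_normal((n, 5))
    y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(n))
    return X, y

w = federated_svm([make_site(100), make_site(80), make_site(120)], dim=5)
print(w.round(2))                              # weight on feature 0 dominates
```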
Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?
Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W
2018-03-01
The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. Utilizing patients' health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of postmyocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator, and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally performs slightly better than both in terms of mean squared error, when a bias-based analysis is used.
Antibiotic Residues in Milk from Three Popular Kenyan Milk Vending Machines.
Kosgey, Amos; Shitandi, Anakalo; Marion, Jason W
2018-05-01
Milk vending machines (MVMs) are growing in popularity in Kenya and worldwide. Milk vending machines dispense varying quantities of locally sourced, pasteurized milk. The Kenya Dairy Board has a regulatory framework, but surveillance is weak because of several factors. Milk vending machines' milk is not routinely screened for antibiotics, thereby increasing potential for antibiotic misuse. To investigate, a total of 80 milk samples from four commercial providers (N = 25), street vendors (N = 21), and three MVMs (N = 34) were collected and screened in Eldoret, Kenya. Antibiotic residue surveillance occurred during December 2016 and January 2017 using Idexx SNAP® tests for tetracyclines, sulfamethazine, beta-lactams, and gentamicin. Overall, 24% of MVM samples and 24% of street vendor samples were presumably positive for at least one antibiotic. No commercial samples were positive. Research into cost-effective screening methods and increased monitoring by food safety agencies are needed to uphold hazard analysis and critical control point for improving antibiotic stewardship throughout the Kenyan private dairy industry.
Behavioral Profiling of Scada Network Traffic Using Machine Learning Algorithms
2014-03-27
Thesis, AFIT-ENG-14-M-81. Jessica R. Werling, B.S.C.S., Captain, USAF. Department of the Air Force.
Torii, Manabu; Yin, Lanlan; Nguyen, Thang; Mazumdar, Chand T.; Liu, Hongfang; Hartley, David M.; Nelson, Noele P.
2014-01-01
Purpose Early detection of infectious disease outbreaks is crucial to protecting the public health of a society. Online news articles provide timely information on disease outbreaks worldwide. In this study, we investigated automated detection of articles relevant to disease outbreaks using machine learning classifiers. In a real-life setting, it is expensive to prepare a training data set for classifiers, which usually consists of manually labeled relevant and irrelevant articles. To mitigate this challenge, we examined the use of randomly sampled unlabeled articles as well as labeled relevant articles. Methods Naïve Bayes and Support Vector Machine (SVM) classifiers were trained on 149 relevant and 149 or more randomly sampled unlabeled articles. Diverse classifiers were trained by varying the number of sampled unlabeled articles and also the number of word features. The trained classifiers were applied to 15 thousand articles published over 15 days. Top-ranked articles from each classifier were pooled and the resulting set of 1337 articles was reviewed by an expert analyst to evaluate the classifiers. Results Daily averages of areas under ROC curves (AUCs) over the 15-day evaluation period were 0.841 and 0.836, respectively, for the naïve Bayes and SVM classifiers. We referenced a database of disease outbreak reports to confirm that the evaluation data set resulting from the pooling method indeed covered incidents recorded in the database during the evaluation period. Conclusions The proposed text classification framework utilizing randomly sampled unlabeled articles can facilitate a cost-effective approach to training machine learning classifiers in a real-life Internet-based biosurveillance project. We plan to examine this framework further using larger data sets and using articles in non-English languages. PMID:21134784
Blum, Emily S; Porras, Antonio R; Biggs, Elijah; Tabrizi, Pooneh R; Sussman, Rachael D; Sprague, Bruce M; Shalaby-Rana, Eglal; Majd, Massoud; Pohl, Hans G; Linguraru, Marius George
2017-10-21
We sought to define features that describe the dynamic information in diuresis renograms for the early detection of clinically significant hydronephrosis caused by ureteropelvic junction obstruction. We studied the diuresis renograms of 55 patients with a mean ± SD age of 75 ± 66 days who had congenital hydronephrosis at initial presentation. Five patients had bilaterally affected kidneys, for a total of 60 diuresis renograms. Surgery was performed on 35 kidneys. We extracted 45 features based on curve shape and wavelet analysis from the drainage curves recorded after furosemide administration. The optimal features were selected as the combination that maximized the ROC AUC obtained from a linear support vector machine classifier trained to classify patients as with or without obstruction. Using these optimal features we performed leave-one-out cross-validation to estimate the accuracy, sensitivity and specificity of our framework. Results were compared to those obtained using post-diuresis drainage half-time and the percent of clearance after 30 minutes. Our framework had 93% accuracy (91% sensitivity, 96% specificity) in predicting surgical cases. This was a significant improvement over the 82% accuracy (71% sensitivity, 96% specificity) obtained from half-time and 30-minute clearance using the optimal thresholds of 24.57 minutes and 55.77%, respectively. Our machine learning framework significantly improved the diagnostic accuracy of clinically significant hydronephrosis compared to half-time and 30-minute clearance. This aids in the clinical decision making process by offering a tool for earlier detection of severe cases and it has the potential to reduce the number of diuresis renograms required for diagnosis. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Kwon, Oh-Hyun; Crnovrsanin, Tarik; Ma, Kwan-Liu
2018-01-01
Using different methods for laying out a graph can lead to very different visual appearances, with which the viewer perceives different information. Selecting a "good" layout method is thus important for visualizing a graph. The selection can be highly subjective and dependent on the given task. A common approach to selecting a good layout is to use aesthetic criteria and visual inspection. However, fully calculating various layouts and their associated aesthetic metrics is computationally expensive. In this paper, we present a machine learning approach to large graph visualization based on computing the topological similarity of graphs using graph kernels. For a given graph, our approach can show what the graph would look like in different layouts and estimate their corresponding aesthetic metrics. An important contribution of our work is the development of a new framework to design graph kernels. Our experimental study shows that our estimation calculation is considerably faster than computing the actual layouts and their aesthetic metrics. Also, our graph kernels outperform the state-of-the-art ones in both time and accuracy. In addition, we conducted a user study to demonstrate that the topological similarity computed with our graph kernel matches perceptual similarity assessed by human users.
Hybrid BCI approach to control an artificial tibio-femoral joint.
Mercado, Luis; Rodriguez-Linan, Angel; Torres-Trevino, Luis M; Quiroz, G
2016-08-01
Brain-Computer Interfaces (BCIs) for disabled people should allow them to use their remaining functionalities as control possibilities. BCIs connect the brain with external devices to carry out the volition or intent of movement, even if the individual is unable to perform the task due to bodily impairments. In this work we fuse electromyographic (EMG) and electroencephalographic (EEG) activity in a framework called the "Hybrid-BCI" (hBCI) approach to control the movement of a simulated tibio-femoral joint. Two mathematical models of a tibio-femoral joint are used to emulate the kinematic and dynamic behavior of the knee. The interest is to reproduce different velocities of the human gait cycle. The EEG signals are used to classify the user intent, namely the velocity changes, while the superficial EMG signals are used to estimate the amplitude of such intent. A multi-level controller is used to solve the trajectory tracking problem involved. The lower level consists of an individual controller for each model; it solves the tracking of the desired trajectory even considering different velocities of the human gait cycle. The mid-level uses a combination of a logical operator and a finite state machine for the switching between models. Finally, the highest level consists of a support vector machine to classify the desired activity.
Van Landeghem, Sofie; Abeel, Thomas; Saeys, Yvan; Van de Peer, Yves
2010-09-15
In the field of biomolecular text mining, black box behavior of machine learning systems currently limits understanding of the true nature of the predictions. However, feature selection (FS) is capable of identifying the most relevant features in any supervised learning setting, providing insight into the specific properties of the classification algorithm. This allows us to build more accurate classifiers while at the same time bridging the gap between the black box behavior and the end-user who has to interpret the results. We show that our FS methodology successfully discards a large fraction of machine-generated features, improving classification performance of state-of-the-art text mining algorithms. Furthermore, we illustrate how FS can be applied to gain understanding in the predictions of a framework for biomolecular event extraction from text. We include numerous examples of highly discriminative features that model either biological reality or common linguistic constructs. Finally, we discuss a number of insights from our FS analyses that will provide the opportunity to considerably improve upon current text mining tools. The FS algorithms and classifiers are available in Java-ML (http://java-ml.sf.net). The datasets are publicly available from the BioNLP'09 Shared Task web site (http://www-tsujii.is.s.u-tokyo.ac.jp/GENIA/SharedTask/).
Virtual screening by a new Clustering-based Weighted Similarity Extreme Learning Machine approach
Kudisthalert, Wasu
2018-01-01
Machine learning techniques are becoming popular in virtual screening tasks. One of the powerful machine learning algorithms is the Extreme Learning Machine (ELM), which has been applied to many applications and has recently been applied to virtual screening. We propose the Weighted Similarity ELM (WS-ELM), which is based on a single layer feed-forward neural network in conjunction with 16 different similarity coefficients as activation function in the hidden layer. It is known that the performance of conventional ELM is not robust due to random weight selection in the hidden layer. Thus, we propose a Clustering-based WS-ELM (CWS-ELM) that deterministically assigns weights by utilising clustering algorithms, i.e. k-means clustering and support vector clustering. The experiments were conducted on one of the most challenging datasets, the Maximum Unbiased Validation Dataset, which contains 17 activity classes carefully selected from PubChem. The proposed algorithms were then compared with other machine learning techniques such as support vector machine, random forest, and similarity searching. The results show that CWS-ELM in conjunction with support vector clustering yields the best performance when utilised together with the Sokal/Sneath(1) coefficient. Furthermore, the ECFP_6 fingerprint presents the best results in our framework compared to the other types of fingerprints, namely ECFP_4, FCFP_4, and FCFP_6. PMID:29652912
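The ELM backbone underlying these variants is compact: random hidden-layer weights and output weights solved in closed form by least squares. The sketch below keeps the conventional random hidden weights (the clustering-based variant above would replace them with cluster centres) and uses toy data in place of molecular fingerprints.

```python
# Minimal Extreme Learning Machine: random hidden layer, least-squares output.
import numpy as np

class ELM:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)          # random hidden activations
        self.beta = np.linalg.pinv(H) @ y         # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

X = np.random.randn(200, 16)                      # e.g. folded fingerprint bits
y = (X[:, 0] * X[:, 1] > 0).astype(float)         # toy activity labels
model = ELM().fit(X, y)
print(((model.predict(X) > 0.5) == y).mean())     # training accuracy
```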
Vending machine policies and practices in Delaware.
Gemmill, Erin; Cotugna, Nancy
2005-04-01
Overweight has reached alarming proportions among America's youth. Although the cause of the rise in overweight rates in children and adolescents is certainly the result of the interaction of a variety of factors, the presence of vending machines in schools is one issue that has recently come to the forefront. Many states have passed or proposed legislation that limits student access to vending machines in schools or require that vending machines in schools offer healthier choices. The purposes of this study were (a) to assess the food and beverage vending machine offerings in the public school districts in the state of Delaware and (b) to determine whether there are any district vending policies in place other than the current U.S. Department of Agriculture regulations. The results of this study indicate the most commonly sold food and drink items in school vending machines are of minimal nutritional value. School administrators are most frequently in charge of the vending contract, as well as setting and enforcing vending machine policies. Suggestions are offered to assist school nurses, often the only health professional in the school, in becoming advocates for changes in school vending practices and policies that promote the health and well-being of children and adolescents.
Machine learnt bond order potential to model metal-organic (Co-C) heterostructures.
Narayanan, Badri; Chan, Henry; Kinaci, Alper; Sen, Fatih G; Gray, Stephen K; Chan, Maria K Y; Sankaranarayanan, Subramanian K R S
2017-11-30
A fundamental understanding of the inter-relationships between structure, morphology, atomic scale dynamics, chemistry, and physical properties of mixed metallic-covalent systems is essential to design novel functional materials for applications in flexible nano-electronics, energy storage and catalysis. To achieve such knowledge, it is imperative to develop robust and computationally efficient atomistic models that describe atomic interactions accurately within a single framework. Here, we present a unified Tersoff-Brenner type bond order potential (BOP) for a Co-C system, trained against lattice parameters, cohesive energies, equation of state, and elastic constants of different crystalline phases of cobalt as well as orthorhombic Co2C derived from density functional theory (DFT) calculations. The independent BOP parameters are determined using a combination of supervised machine learning (genetic algorithms) and local minimization via the simplex method. Our newly developed BOP accurately describes the structural, thermodynamic, mechanical, and surface properties of both the elemental components as well as the carbide phases, in excellent accordance with DFT calculations and experiments. Using our machine-learnt BOP potential, we performed large-scale molecular dynamics simulations to investigate the effect of metal/carbon concentration on the structure and mechanical properties of porous architectures obtained via self-assembly of cobalt nanoparticles and fullerene molecules. Such porous structures have implications in flexible electronics, where materials with high electrical conductivity and low elastic stiffness are desired. Using unsupervised machine learning (clustering), we identify the pore structure, pore-distribution, and metallic conduction pathways in self-assembled structures at different C/Co ratios. We find that as the C/Co ratio increases, the connectivity between the Co nanoparticles becomes limited, likely resulting in low electrical conductivity; on the other hand, such C-rich hybrid structures are highly flexible (i.e., low stiffness). The BOP model developed in this work is a valuable tool to investigate atomic scale processes, structure-property relationships, and temperature/pressure response of Co-C systems, as well as design organic-inorganic hybrid structures with a desired set of properties.
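The parameter-fitting strategy (a genetic algorithm followed by simplex refinement) can be illustrated on a stand-in objective. The quadratic misfit below replaces the real misfit to DFT-derived properties, and the population size, mutation scale, and generation count are arbitrary choices.

```python
# Genetic algorithm + Nelder-Mead simplex polish for parameter fitting.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
target = np.array([1.2, -0.7, 3.1, 0.4])       # hidden "true" parameters

def misfit(params):                             # lower is better
    return float(np.sum((params - target) ** 2))

pop = rng.standard_normal((40, 4))              # random initial population
for gen in range(200):
    scores = np.array([misfit(p) for p in pop])
    elite = pop[np.argsort(scores)[:10]]        # keep the 10 fittest
    parents = elite[rng.integers(0, 10, size=(40, 2))]
    mask = rng.random((40, 4)) < 0.5            # uniform crossover
    pop = np.where(mask, parents[:, 0], parents[:, 1])
    pop = pop + 0.05 * rng.standard_normal((40, 4))   # Gaussian mutation

best = pop[np.argmin([misfit(p) for p in pop])]
refined = minimize(misfit, best, method="Nelder-Mead").x  # simplex polish
print(refined.round(2))                         # ~ [1.2, -0.7, 3.1, 0.4]
```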
Installé, Arnaud Jf; Van den Bosch, Thierry; De Moor, Bart; Timmerman, Dirk
2014-10-20
Using machine-learning techniques, clinical diagnostic model research extracts diagnostic models from patient data. Traditionally, patient data are often collected using electronic Case Report Form (eCRF) systems, while mathematical software is used for analyzing these data using machine-learning techniques. Due to the lack of integration between eCRF systems and mathematical software, extracting diagnostic models is a complex, error-prone process. Moreover, due to the complexity of this process, it is usually only performed once, after a predetermined number of data points have been collected, without insight into the predictive performance of the resulting models. The objective of this study of the Clinical Data Miner (CDM) software framework is to offer an eCRF system with integrated data preprocessing and machine-learning libraries, improving the efficiency of the clinical diagnostic model research workflow, and to enable optimization of patient inclusion numbers through study performance monitoring. The CDM software framework was developed using a test-driven development (TDD) approach, to ensure high software quality. Architecturally, CDM's design is split over a number of modules, to ensure future extendability. The TDD approach has enabled us to deliver high software quality. CDM's eCRF Web interface is in active use by the studies of the International Endometrial Tumor Analysis consortium, with over 4000 enrolled patients, and more studies planned. Additionally, a derived user interface has been used in six separate interrater agreement studies. CDM's integrated data preprocessing and machine-learning libraries simplify some otherwise manual and error-prone steps in the clinical diagnostic model research workflow. Furthermore, CDM's libraries provide study coordinators with a method to monitor a study's predictive performance as patient inclusions increase. To our knowledge, CDM is the only eCRF system integrating data preprocessing and machine-learning libraries. This integration improves the efficiency of the clinical diagnostic model research workflow. Moreover, by simplifying the generation of learning curves, CDM enables study coordinators to assess more accurately when data collection can be terminated, resulting in better models or lower patient recruitment costs.
A state-based approach to trend recognition and failure prediction for the Space Station Freedom
NASA Technical Reports Server (NTRS)
Nelson, Kyle S.; Hadden, George D.
1992-01-01
A state-based reasoning approach to trend recognition and failure prediction for the Attitude Determination and Control System (ADCS) of the Space Station Freedom (SSF) is described. The problem domain is characterized by features (e.g., trends and impending failures) that develop over a variety of time spans, anywhere from several minutes to several years. Our state-based reasoning approach, coupled with intelligent data screening, allows features to be tracked as they develop in a time-dependent manner. That is, each state machine can encode a time frame for the feature it detects. As features are detected, they are recorded and can be used as input to other state machines, creating a hierarchical feature recognition scheme. Furthermore, each machine can operate independently of the others, allowing simultaneous tracking of features. State-based reasoning was implemented in the trend recognition and prognostic modules of a prototype Space Station Freedom Maintenance and Diagnostic System (SSFMDS) developed at Honeywell's Systems and Research Center.
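A minimal sketch of the state-based idea: each feature detector is a small state machine stepping over screened telemetry, and detected features can in turn drive higher-level machines. The states, dwell count, and threshold logic below are invented for illustration and are not the SSFMDS design:

    from dataclasses import dataclass

    @dataclass
    class RisingTrendDetector:
        """Tiny state machine: IDLE -> RISING -> TREND once the signal
        has increased for `dwell` consecutive samples."""
        dwell: int = 3
        state: str = "IDLE"
        run: int = 0
        last: float = float("nan")

        def step(self, x: float) -> bool:
            rising = x > self.last          # False on the first sample (nan)
            self.last = x
            if self.state == "IDLE" and rising:
                self.state, self.run = "RISING", 1
            elif self.state == "RISING":
                if rising:
                    self.run += 1
                    if self.run >= self.dwell:
                        self.state = "TREND"   # feature detected
                        return True
                else:
                    self.state, self.run = "IDLE", 0
            return False

    # Detected features could in turn feed a higher-level state machine,
    # giving the hierarchical recognition scheme the abstract describes.
    detector = RisingTrendDetector()
    telemetry = [0.1, 0.2, 0.15, 0.2, 0.3, 0.4, 0.5]
    events = [t for t, x in enumerate(telemetry) if detector.step(x)]
    print("rising-trend feature detected at samples:", events)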
Identification of Cell Cycle-Regulated Genes by Convolutional Neural Network.
Liu, Chenglin; Cui, Peng; Huang, Tao
2017-01-01
The cell cycle-regulated genes are expressed periodically with the cell cycle stages, and the identification and study of these genes can provide a deep understanding of the cell cycle process. High false-positive rates and low overlap between methods are major problems in cell cycle-regulated gene detection. Here, a computational framework called DLGene was proposed for cell cycle-regulated gene detection. It is based on a convolutional neural network, a deep learning algorithm that learns patterns from the raw form of the data without assumptions about their distribution. First, the expression data were transformed into categorical state data to denote the changing state of gene expression, and four distinct expression patterns were revealed for the reported cell cycle-regulated genes. Then, DLGene was applied to discriminate non-cell cycle genes from the four subtypes of cell cycle genes. Its performance was compared with that of six traditional machine learning methods. Finally, the biological functions of representative cell cycle genes for each subtype were analyzed. Our method showed better and more balanced sensitivity and specificity than the other machine learning algorithms. The cell cycle genes exhibited expression patterns very different from those of non-cell cycle genes, and the cell cycle genes themselves fell into four subtypes. Our method not only detects cell cycle genes but also describes their expression patterns, such as when the highest expression level is reached and how expression changes over time. For each subtype, we analyzed the biological functions of the representative genes, and these results provide novel insight into cell cycle mechanisms.
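A sketch of the two steps the abstract describes, discretizing time-course expression into categorical up/down/steady states and classifying the resulting sequences with a small one-dimensional convolutional network. This PyTorch version is an editorial illustration; the architecture, thresholds, and five-way class split (non-cycling plus four subtypes) are assumptions, not the published DLGene model:

    import numpy as np
    import torch
    import torch.nn as nn

    def to_states(expr, eps=0.1):
        """Map expression time courses to categorical change states:
        +1 (up), -1 (down), 0 (steady) between consecutive time points."""
        d = np.diff(expr, axis=-1)
        return np.sign(d) * (np.abs(d) > eps)

    class CycleNet(nn.Module):
        def __init__(self, n_classes=5):       # non-cycling + 4 subtypes
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(16, n_classes))

        def forward(self, x):                  # x: (batch, 1, time)
            return self.net(x)

    # Toy example: 8 genes with 24-point expression time courses.
    expr = np.random.default_rng(0).normal(size=(8, 24))
    states = torch.tensor(to_states(expr), dtype=torch.float32).unsqueeze(1)
    logits = CycleNet()(states)                # (8, 5) class scores
    print(logits.shape)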
Degree of coupling and efficiency of energy converters far-from-equilibrium
NASA Astrophysics Data System (ADS)
Vroylandt, Hadrien; Lacoste, David; Verley, Gatien
2018-02-01
In this paper, we introduce a real, symmetric, and positive semi-definite matrix, which we call the non-equilibrium conductance matrix, and which generalizes the Onsager response matrix for a system in a non-equilibrium stationary state. We then express the thermodynamic efficiency in terms of the coefficients of this matrix using a parametrization similar to the one used near equilibrium. This framework, valid arbitrarily far from equilibrium, allows one to bound the thermodynamic efficiency by a universal function depending only on the degree of coupling between input and output currents. It also leads to new general power-efficiency trade-offs, valid for macroscopic machines, which are compared to trade-offs previously obtained from uncertainty relations. We illustrate our results on a unicycle heat-to-heat converter and on a discrete model of a molecular motor.
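For orientation, the near-equilibrium baseline that this work generalizes can be written compactly. With Onsager coefficients L_ij, the degree of coupling q and the classical Kedem-Caplan efficiency bound read (in the paper, L is replaced by the non-equilibrium conductance matrix; the notation here is editorial, not the authors'):

    q = \frac{L_{12}}{\sqrt{L_{11}\,L_{22}}}, \qquad |q| \le 1,
    \qquad
    \eta \;\le\; \eta_{\max}(q) = \frac{q^{2}}{\bigl(1 + \sqrt{1 - q^{2}}\bigr)^{2}} .

Note that eta_max(q) reaches 1 only in the perfectly coupled limit |q| = 1, which is why the degree of coupling alone controls the bound.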
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav
Practitioners of lattice QCD/QFT have been among the pioneering users of state-of-the-art high-performance computing systems, and contribute to stress tests of such new machines as soon as they become available. As with all aspects of high-performance computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.
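A minimal h5py sketch of the two ingredients the abstract highlights, parallel (MPI-IO) file access and explicit dataset chunking. The file name, dataset layout, and chunk shape are invented for illustration; this is not the USQCD/SciDAC implementation:

    import numpy as np
    from mpi4py import MPI
    import h5py

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Hypothetical lattice-field layout: each rank owns a slab of time slices.
    nt, nx = 64, 32
    local_nt = nt // size

    # driver="mpio" enables parallel access; it requires an HDF5/h5py
    # build compiled with MPI support.
    with h5py.File("field.h5", "w", driver="mpio", comm=comm) as f:
        # Chunk shape matched to one rank's slab; choosing the chunking to
        # fit the access pattern is the tuning knob the abstract points to.
        dset = f.create_dataset("phi", shape=(nt, nx, nx, nx), dtype="f8",
                                chunks=(local_nt, nx, nx, nx))
        lo = rank * local_nt
        dset[lo:lo + local_nt] = np.zeros((local_nt, nx, nx, nx))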
Conditional Density Estimation with HMM Based Support Vector Machines
NASA Astrophysics Data System (ADS)
Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang
Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models carry an implicit assumption that the probability density is Gaussian, which is not necessarily true in many real-life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining an Input-Output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM at each node, this model can be applied not only to regression but also to classification. We applied this model to denoising ECG data. The proposed method has the potential to be applied to other time series, such as stock market return prediction.
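The mixture construction can be sketched with off-the-shelf pieces: an HMM supplies state posteriors, one SVR per state supplies a state-conditional mean, and the predictive density is the posterior-weighted Gaussian mixture over those means. Here hmmlearn and scikit-learn stand in for the paper's Input-Output HMM and SVM training; the state count, kernel, and fixed component variance are illustrative assumptions:

    import numpy as np
    from hmmlearn.hmm import GaussianHMM
    from scipy.stats import norm
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    y = np.cumsum(rng.normal(size=300))        # toy 1-D series
    X = y[:-1].reshape(-1, 1)                  # lagged value as input
    t = y[1:]                                  # next value as target

    # 1. Fit an HMM and obtain per-sample state posteriors.
    hmm = GaussianHMM(n_components=2, random_state=0).fit(X)
    post = hmm.predict_proba(X)                # shape (n_samples, 2)

    # 2. One SVR per state, samples weighted by state membership
    #    (a small epsilon keeps every weight strictly positive).
    svrs = [SVR(kernel="rbf").fit(X, t, sample_weight=post[:, k] + 1e-3)
            for k in range(2)]

    # 3. Conditional density at a new input: posterior-weighted Gaussian
    #    mixture centred on each state's SVR prediction (unit variance
    #    assumed here purely for simplicity).
    x_new = X[-1:]
    w = hmm.predict_proba(x_new)[0]
    means = [m.predict(x_new)[0] for m in svrs]
    grid = np.linspace(min(means) - 3.0, max(means) + 3.0, 5)
    density = sum(w[k] * norm.pdf(grid, means[k], 1.0) for k in range(2))
    print(dict(zip(np.round(grid, 2), np.round(density, 3))))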
Survey of Analysis of Crime Detection Techniques Using Data Mining and Machine Learning
NASA Astrophysics Data System (ADS)
Prabakaran, S.; Mitra, Shilpa
2018-04-01
Data mining is the field concerned with procedures for discovering patterns in large datasets; it draws on techniques at the intersection of machine learning and database systems. It can be applied to various fields such as healthcare, market basket analysis, education, manufacturing engineering, and crime investigation. Among these, crime investigation is an interesting application, in which crime characteristics are processed to help society live more safely. This paper surveys various data mining techniques used in this domain. This study may be helpful in designing new strategies for crime prediction and analysis.