Study on intelligent processing system of man-machine interactive garment frame model
NASA Astrophysics Data System (ADS)
Chen, Shuwang; Yin, Xiaowei; Chang, Ruijiang; Pan, Peiyun; Wang, Xuedi; Shi, Shuze; Wei, Zhongqian
2018-05-01
An intelligent processing system for a man-machine interactive garment frame model is studied in this paper. The system consists of several sensor devices, a voice processing module, mechanical moving parts and a centralized data acquisition device. The sensor devices collect information on environmental changes caused by a body approaching the garment frame model; the data acquisition device gathers the information sensed by the sensor devices; the voice processing module performs speaker-independent speech recognition to enable human-machine interaction; and the mechanical moving parts produce the corresponding mechanical responses to the information processed by the data acquisition device. The connection between the sensor devices and the data acquisition device is one-way, the connection between the data acquisition device and the voice processing module is two-way, and the connection between the data acquisition device and the mechanical moving parts is one-way. The intelligent processing system can judge whether it needs to interact with a customer, realizing man-machine interaction in place of the current rigid frame model.
NASA Technical Reports Server (NTRS)
Nehl, T. W.; Demerdash, N. A.
1983-01-01
Mathematical models capable of simulating the transient, steady state, and faulted performance characteristics of various brushless dc machine-PSA (power switching assembly) configurations were developed. These systems are intended for possible future use as prime movers in EMAs (electromechanical actuators) for flight control applications. These machine-PSA configurations include wye, delta, and open-delta connected systems. The research performed under this contract was initially broken down into the following six tasks: development of mathematical models for various machine-PSA configurations; experimental validation of the model for failure modes; experimental validation of the mathematical model for shorted turn-failure modes; tradeoff study; and documentation of results and methodology.
30 CFR 18.49 - Connection boxes on machines.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Connection boxes on machines. 18.49 Section 18... Design Requirements § 18.49 Connection boxes on machines. Connection boxes used to facilitate replacement of cables or machine components shall be explosion-proof. Portable-cable terminals on cable reels...
Recent R&D status for 70 MW class superconducting generators in the Super-GM project
NASA Astrophysics Data System (ADS)
Ageta, Takasuke
2000-05-01
Three types of 70 MW class superconducting generators, called model machines, have been developed to establish the basic technologies for a pilot machine. The series of on-site verification tests was completed in June 1999. The world's highest generator output (79 MW), the world's longest continuous operation (1500 hours) and other excellent results were obtained. The model machine was connected to a commercial power grid and fundamental data were collected for future utilization. It is expected that the fundamental design and manufacturing technologies required for a 200 MW class pilot machine have been established.
Single bus star connected reluctance drive and method
Fahimi, Babak; Shamsi, Pourya
2016-05-10
A system and methods for operating a switched reluctance machine include a controller, an inverter connected to the controller and to the switched reluctance machine, a hysteresis control connected to the controller and to the inverter, and a set of sensors connected to the switched reluctance machine and to the controller. The switched reluctance machine further includes a set of phases, and the controller further comprises a processor and a memory connected to the processor, wherein the processor is programmed to execute a control process and a generation process.
Reverse time migration: A seismic processing application on the connection machine
NASA Technical Reports Server (NTRS)
Fiebrich, Rolf-Dieter
1987-01-01
The implementation of a reverse time migration algorithm on the Connection Machine, a massively parallel computer, is described. Essential architectural features of this machine as well as programming concepts are presented. The data structures and parallel operations for the implementation of the reverse time migration algorithm are described. The algorithm matches the Connection Machine architecture closely and executes almost at the peak performance of this machine.
ERIC Educational Resources Information Center
Chen, Chau-Kuang
2010-01-01
Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…
Predictive modeling for corrective maintenance of imaging devices from machine logs.
Patil, Ravindra B; Patil, Meru A; Ravi, Vidya; Naik, Sarif
2017-07-01
In the cost-sensitive healthcare industry, an unplanned downtime of diagnostic and therapy imaging devices can be a burden on the financials of both the hospitals and the original equipment manufacturers (OEMs). In the current era of connectivity, it is easier to get these devices connected to a standard monitoring station. Once the system is connected, OEMs can monitor the health of these devices remotely and take corrective actions by providing preventive maintenance, thereby avoiding major unplanned downtime. In this article, we present an overall methodology for predicting failure of these devices well before the customer experiences it. We use a data-driven approach based on machine learning to predict failures, which in turn results in reduced machine downtime, improved customer satisfaction and cost savings for the OEMs. One use case, predicting component failure of the Philips iXR system, is explained in this article.
Investigation of Combined Motor/Magnetic Bearings for Flywheel Energy Storage Systems
NASA Technical Reports Server (NTRS)
Hofmann, Heath
2003-01-01
Dr. Hofmann's work in the summer of 2003 consisted of two separate projects. In the first part of the summer, Dr. Hofmann prepared and collected information regarding rotor losses in synchronous machines; in particular, machines with low rotor losses operating in vacuum and supported by magnetic bearings, such as the motor/generator for flywheel energy storage systems. This work culminated in a presentation at NASA Glenn Research Center on this topic. In the second part, Dr. Hofmann investigated an approach to flywheel energy storage where the phases of the flywheel motor/generator are connected in parallel with the phases of an induction machine driving a mechanical actuator. With this approach, additional power electronics for driving the flywheel unit are not required. Simulations of the connection of a flywheel energy storage system to a model of an electromechanical actuator testbed at NASA Glenn were performed that validated the proposed approach. A proof-of-concept experiment using the D1 flywheel unit at NASA Glenn and a Sundstrand induction machine connected to a dynamometer was successfully conducted.
Efficiently modeling neural networks on massively parallel computers
NASA Technical Reports Server (NTRS)
Farber, Robert M.
1993-01-01
Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine-grained SIMD computers such as the CM-2 Connection Machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 Connection Machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 Connection Machine. Our mapping has virtually no communications overhead, with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor interprocessor communications. This paper considers the simulation of only feed-forward neural networks, although the method is extendable to recurrent networks.
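A minimal sketch of the data-parallel style such a mapping exploits: the forward and backward passes of a feed-forward backpropagation network expressed as whole-array operations over a batch, with the only reduction being a sum over the batch axis (analogous to the global summation mentioned above). Layer sizes, learning rate and random data are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: vectorized feed-forward/backpropagation expressed as
# whole-array operations, the style of data parallelism a SIMD mapping exploits.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, batch = 8, 16, 4, 32

W1 = rng.normal(scale=0.1, size=(n_in, n_hid))
W2 = rng.normal(scale=0.1, size=(n_hid, n_out))

X = rng.normal(size=(batch, n_in))    # one batch of training patterns
T = rng.normal(size=(batch, n_out))   # target outputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass: every pattern is processed by the same instructions.
H = sigmoid(X @ W1)
Y = sigmoid(H @ W2)

# Backward pass: gradients are also whole-array operations; the only
# reduction is the sum over the batch axis in the weight updates.
dY = (Y - T) * Y * (1.0 - Y)
dH = (dY @ W2.T) * H * (1.0 - H)

lr = 0.1
W2 -= lr * H.T @ dY
W1 -= lr * X.T @ dH
```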
Quantum-assisted learning of graphical models with arbitrary pairwise connectivity
NASA Astrophysics Data System (ADS)
Realpe-Gómez, John; Benedetti, Marcello; Biswas, Rupak; Perdomo-Ortiz, Alejandro
Mainstream machine learning techniques rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful machine learning models. Here we show how to surpass this `curse of limited connectivity' bottleneck and illustrate our findings by training probabilistic generative models with arbitrary pairwise connectivity on a real dataset of handwritten digits and two synthetic datasets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding Boltzmann-like distribution. Therefore, the need to infer the effective temperature at each iteration is avoided, speeding up learning, and the effect of noise in the control parameters is mitigated, improving accuracy. This work was supported in part by NASA, AFRL, ODNI, and IARPA.
A distributed algorithm for machine learning
NASA Astrophysics Data System (ADS)
Chen, Shihong
2018-04-01
This paper considers a distributed learning problem in which a group of machines in a connected network, each learning its own local dataset, aim to reach a consensus at an optimal model, by exchanging information only with their neighbors but without transmitting data. A distributed algorithm is proposed to solve this problem under appropriate assumptions.
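A hedged sketch of the consensus idea described above: each machine takes a gradient step on its private dataset and then averages its model parameters with its neighbors, so only parameters are exchanged, never data. The ring topology, local least-squares objectives, and fixed step size are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch of decentralized learning by neighbor averaging (consensus).
import numpy as np

rng = np.random.default_rng(1)
n_machines, dim, n_local = 6, 3, 50
w_true = rng.normal(size=dim)

# Each machine holds its own private dataset.
data = []
for _ in range(n_machines):
    A = rng.normal(size=(n_local, dim))
    b = A @ w_true + 0.01 * rng.normal(size=n_local)
    data.append((A, b))

w = np.zeros((n_machines, dim))   # one model copy per machine
step = 0.01

for _ in range(500):
    # 1) local gradient step on the private dataset
    for i, (A, b) in enumerate(data):
        grad = A.T @ (A @ w[i] - b) / n_local
        w[i] = w[i] - step * grad
    # 2) mix with neighbors on a ring (exchange parameters, not data)
    w = (w + np.roll(w, 1, axis=0) + np.roll(w, -1, axis=0)) / 3.0

print("consensus spread:", np.max(np.linalg.norm(w - w.mean(axis=0), axis=1)))
```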
Tomography and generative training with quantum Boltzmann machines
NASA Astrophysics Data System (ADS)
Kieferová, Mária; Wiebe, Nathan
2017-12-01
The promise of quantum neural nets, which utilize quantum effects to model complex data sets, has made their development an aspirational goal for quantum machine learning and quantum computing in general. Here we provide methods of training quantum Boltzmann machines. Our work generalizes existing methods and provides additional approaches for training quantum neural networks that compare favorably to existing methods. We further demonstrate that quantum Boltzmann machines enable a form of partial quantum state tomography that further provides a generative model for the input quantum state. Classical Boltzmann machines are incapable of this. This verifies the long-conjectured connection between tomography and quantum machine learning. Finally, we prove that classical computers cannot simulate our training process in general unless BQP = BPP, provide lower bounds on the complexity of the training procedures and numerically investigate training for small nonstoquastic Hamiltonians.
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines.
Neftci, Emre O; Pedroni, Bruno U; Joshi, Siddharth; Al-Shedivat, Maruan; Cauwenberghs, Gert
2016-01-01
Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware.
An assessment of the connection machine
NASA Technical Reports Server (NTRS)
Schreiber, Robert
1990-01-01
The CM-2 is an example of a connection machine. The strengths and problems of this implementation are considered as well as important issues in the architecture and programming environment of connection machines in general. These are contrasted to the same issues in Multiple Instruction/Multiple Data (MIMD) microprocessors and multicomputers.
Equivalent model of a dually-fed machine for electric drive control systems
NASA Astrophysics Data System (ADS)
Ostrovlyanchik, I. Yu; Popolzin, I. Yu
2018-05-01
The article shows that the mathematical model of a dually-fed machine is complicated by the presence of a controlled voltage source in the rotor circuit. To obtain the mathematical model, the method of a generalized two-phase electric machine is applied, and a rotating orthogonal coordinate system is chosen that is aligned with the representing vector of the stator current. In the chosen coordinate system, the differential equations of electrical equilibrium for the windings of the generalized machine (the Kirchhoff equations) are written in operator form together with the expression for the torque, which determines the electromechanical energy conversion in the machine. The equations are then transformed so that they relate the winding currents, which determine the torque of the machine, to the voltages on these windings, and a structural diagram of the machine is constructed from them. Based on these equations and the accepted assumptions, expressions are obtained for the balancing EMFs of the windings, and from these expressions an equivalent mathematical model of a dually-fed machine is proposed that is convenient for use in electric drive control systems.
Highly parallel sparse Cholesky factorization
NASA Technical Reports Server (NTRS)
Gilbert, John R.; Schreiber, Robert
1990-01-01
Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.
High Speed Turbo-Generator: Test Stand Simulator Including Turbine Engine Emulator
2010-07-30
Our model of the six-phase synchronous machine was based on work by Schiferl and Ong [1]. The six-phase synchronous machine is... develop and submit to ONR a follow-on proposal to address these open issues. [1] R. F. Schiferl and C. M. Ong, "Six phase synchronous machine with ac and dc stator connections, Part I: Equivalent Circuit..."
Tool wear modeling using abductive networks
NASA Astrophysics Data System (ADS)
Masory, Oren
1992-09-01
A tool wear model based on Abductive Networks, which consist of a network of `polynomial' nodes, is described. The model relates the cutting parameters, components of the cutting force, and machining time to flank wear. Thus real time measurements of the cutting force can be used to monitor the machining process. The model is obtained by a training process in which the connectivity between the network's nodes and the polynomial coefficients of each node are determined by optimizing a performance criterion. Actual wear measurements of coated and uncoated carbide inserts were used for training and evaluating the established model.
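For illustration only, a single `polynomial' node of the kind such a network is built from can be approximated by a quadratic regression relating cutting parameters, force and machining time to flank wear. The synthetic data, feature choices and library calls below are assumptions; the paper's abductive network additionally selects and stacks such nodes automatically.

```python
# Illustrative sketch of one quadratic 'polynomial node' fitted by least squares.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 200
X = np.column_stack([
    rng.uniform(100, 300, n),    # cutting speed
    rng.uniform(0.05, 0.3, n),   # feed
    rng.uniform(50, 400, n),     # resultant cutting force
    rng.uniform(0, 30, n),       # machining time (min)
])
# synthetic stand-in for measured flank wear
wear = 1e-4 * X[:, 3] * np.sqrt(X[:, 0]) + 5e-4 * X[:, 2] * X[:, 1] \
       + 0.01 * rng.normal(size=n)

node = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
node.fit(X, wear)
print("predicted flank wear:", node.predict(X[:1]))
```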
Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo
2017-01-01
To predict the output power of photovoltaic systems, which is nonstationary and random, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day, with 15-minute intervals, are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on the EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization.
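A hedged sketch of the decompose-predict-recombine idea: EMD splits the power series into IMFs, one SVR is fitted per component on lagged values, and the component forecasts are summed. The ABC hyperparameter search from the paper is omitted here (default SVR settings are used instead), and the PyEMD package and the synthetic series are assumptions for illustration.

```python
import numpy as np
from PyEMD import EMD          # assumption: PyEMD (EMD-signal) package
from sklearn.svm import SVR

rng = np.random.default_rng(3)
t = np.arange(0, 96)                               # 15-min samples over 24 h
power = np.clip(np.sin((t - 24) / 96 * np.pi), 0, None) \
        + 0.05 * rng.normal(size=t.size)           # synthetic PV output

components = EMD().emd(power)                      # IMF components
lags, preds = 4, []

for comp in components:
    # lagged inputs: row j = [comp[j], ..., comp[j+lags-1]], target comp[j+lags]
    X = np.column_stack([comp[i:i - lags] for i in range(lags)])
    y = comp[lags:]
    svr = SVR().fit(X, y)
    preds.append(svr.predict(comp[-lags:].reshape(1, -1))[0])  # one step ahead

forecast = sum(preds)                              # recombine component forecasts
print("next-interval power forecast:", forecast)
```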
Object as a model of intelligent robot in the virtual workspace
NASA Astrophysics Data System (ADS)
Foit, K.; Gwiazda, A.; Banas, W.; Sekala, A.; Hryniewicz, P.
2015-11-01
The contemporary industry requires that every element of a production line fit into the global schema, which is connected with the global structure of the business. There is a need to find practical and effective ways of designing and managing the production process. The term "effective" should be understood in the sense that there exists a method which allows building a system of nodes and relations in order to describe the role of a particular machine in the production process. Among all the machines involved in the manufacturing process, industrial robots are the most complex ones. This complexity is reflected in the realization of elaborate tasks involving handling, transporting or orienting objects in a workspace, and even performing simple machining processes such as deburring, grinding, painting, applying adhesives and sealants, etc. The robot also performs some activities connected with automatic tool changing and operating the equipment mounted on its wrist. Because it has a programmable control system, the robot also performs additional activities connected with sensors, vision systems, operating the storages of manipulated objects, tools or grippers, measuring stands, etc. For this reason the description of the robot as a part of a production system should take into account the specific nature of this machine: the robot is a substitute for a worker who performs his tasks in a particular environment. In this case, the model should be able to characterize the essence of "employment" in a sufficient way. One possible approach to this problem is to treat the robot as an object, in the sense often used in computer science. This allows both describing operations performed on the object and describing operations performed by the object. This paper focuses mainly on the definition of the object as the model of the robot. This model is compared with other possible descriptions. The results can be further used when designing a complete manufacturing system that takes into account all the machines involved and has the form of an object-oriented model.
Application of Elements of TPM Strategy for Operation Analysis of Mining Machine
NASA Astrophysics Data System (ADS)
Brodny, Jaroslaw; Tutak, Magdalena
2017-12-01
The Total Productive Maintenance (TPM) strategy comprises a group of activities and actions aimed at keeping machines in a failure-free state, without breakdowns, by limiting failures, unplanned shutdowns, shortfalls and unplanned servicing of machines. These actions are intended to increase the effectiveness of utilization of the devices and machines a company possesses. A very significant element of this strategy is the combination of technical actions with changes in how they are perceived by employees, while the fundamental aim of introducing the strategy is to improve the economic efficiency of the enterprise. Increasing competition and the necessity of reducing production costs mean that mining enterprises are also forced to introduce this strategy. The paper presents examples of the use of the OEE model for quantitative evaluation of selected mining devices. The OEE model is a quantitative tool of the TPM strategy and can be the basis for further work connected with its introduction. The OEE indicator is the product of three components: the availability and performance of the studied machine and the quality of the obtained product. The paper presents the results of an effectiveness analysis of a set of mining machines included in the longwall system, which is the first and most important link in the technological line of coal production. The set of analyzed machines included the longwall shearer, armored face conveyor and crusher. From a reliability point of view, the analyzed set of machines is a system characterized by a serial structure. The analysis was based on data recorded by the industrial automation system used in the mines; this method of data acquisition ensured high credibility and full time synchronization of the data. Conclusions from the research and analyses should be used to reduce breakdowns, failures and unplanned downtime, increase performance and improve production quality.
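A worked example of the OEE product described above, OEE = Availability x Performance x Quality, with illustrative shift data for a longwall shearer; the numbers are assumptions, not values from the paper.

```python
# Worked OEE example with assumed shift data.
planned_time_min = 360          # planned production time per shift
downtime_min = 90               # breakdowns + unplanned stops
ideal_rate_t_per_min = 10.0     # nominal cutting capacity
actual_output_t = 2100          # coal actually produced
good_output_t = 2050            # output meeting quality requirements

availability = (planned_time_min - downtime_min) / planned_time_min
performance = actual_output_t / (ideal_rate_t_per_min * (planned_time_min - downtime_min))
quality = good_output_t / actual_output_t

oee = availability * performance * quality
print(f"A={availability:.2f} P={performance:.2f} Q={quality:.2f} OEE={oee:.2f}")
```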
Dynamic brain connectivity is a better predictor of PTSD than static connectivity.
Jin, Changfeng; Jia, Hao; Lanka, Pradyumna; Rangaprakash, D; Li, Lingjiang; Liu, Tianming; Hu, Xiaoping; Deshpande, Gopikrishna
2017-09-01
Using resting-state functional magnetic resonance imaging, we test the hypothesis that subjects with post-traumatic stress disorder (PTSD) are characterized by reduced temporal variability of brain connectivity compared to matched healthy controls. Specifically, we test whether PTSD is characterized by elevated static connectivity, coupled with decreased temporal variability of those connections, with the latter providing greater sensitivity toward the pathology than the former. Static functional connectivity (FC; nondirectional zero-lag correlation) and static effective connectivity (EC; directional time-lagged relationships) were obtained over the entire brain using conventional models. Dynamic FC and dynamic EC were estimated by letting the conventional models to vary as a function of time. Statistical separation and discriminability of these metrics between the groups and their ability to accurately predict the diagnostic label of a novel subject were ascertained using separate support vector machine classifiers. Our findings support our hypothesis that PTSD subjects have stronger static connectivity, but reduced temporal variability of connectivity. Further, machine learning classification accuracy obtained with dynamic FC and dynamic EC was significantly higher than that obtained with static FC and static EC, respectively. Furthermore, results also indicate that the ease with which brain regions engage or disengage with other regions may be more sensitive to underlying pathology than the strength with which they are engaged. Future studies must examine whether this is true only in the case of PTSD or is a general organizing principle in the human brain. Hum Brain Mapp 38:4479-4496, 2017. © 2017 Wiley Periodicals, Inc.
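A hedged sketch of the static-versus-dynamic contrast: static FC is the full-run correlation between two regions, and the dynamic measure is the variability of sliding-window correlations. The window length, the toy time series, and the use of only two regions are assumptions; the study also uses directional (effective) connectivity models not shown here.

```python
import numpy as np

rng = np.random.default_rng(4)
n_tp, win = 300, 40
roi_a = rng.normal(size=n_tp)
roi_b = 0.5 * roi_a + 0.5 * rng.normal(size=n_tp)   # two coupled ROI time series

# Static FC: correlation over the whole run.
static_fc = np.corrcoef(roi_a, roi_b)[0, 1]

# Dynamic FC: correlations in sliding windows; its spread is the variability.
windowed = [
    np.corrcoef(roi_a[s:s + win], roi_b[s:s + win])[0, 1]
    for s in range(0, n_tp - win)
]
dynamic_variability = np.std(windowed)

print(f"static FC = {static_fc:.2f}, temporal variability = {dynamic_variability:.2f}")
# Feature vectors built from such values (one per connection) would then feed a
# support vector machine classifier to separate PTSD subjects from controls.
```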
29 CFR 1910.254 - Arc welding and cutting.
Code of Federal Regulations, 2011 CFR
2011-07-01
... adequate current collecting devices. (v) All ground connections shall be checked to determine that they are mechanically strong and electrically adequate for the required current. (3) Supply connections and conductors... for connection to a portable welding machine. (ii) For individual welding machines, the rated current...
Parallel Algorithms for Computer Vision
1990-04-01
NA86-1, Thinking Machines Corporation, Cambridge, MA, December 1986. [43] J. Little, G. Blelloch, and T. Cass. How to program the Connection Machine for computer vision. In Proc. Workshop on Comp. Architecture for Pattern Analysis and Machine Intell., 1987. [92] J... In Proceedings of SPIE Conf. on Advances in Intelligent Robotics Systems, Bellingham, VA, 1987. SPIE. [91] J. Little, G. Blelloch, and T. Cass. How to program the Connection Machine for computer vision.
Programming and machining of complex parts based on CATIA solid modeling
NASA Astrophysics Data System (ADS)
Zhu, Xiurong
2017-09-01
Complex parts are designed using CATIA solid modeling for programming and simulated machining, illustrating the importance of programming and process planning in CNC machining. In the part design process, the working principle is first analyzed in depth, and then the dimensions are designed, with each dimension chain connected to the others; back-calculation and several other methods are then used to determine the final part dimensions. In selecting the part material, careful study and repeated testing led to the final choice of 6061 aluminum alloy. According to the actual conditions of the machining site, various factors in the machining process must be considered comprehensively. The simulation should be based on the actual machining process rather than on shape alone, so that it can serve as a reference for machining.
Li, Siqi; Jiang, Huiyan; Pang, Wenbo
2017-05-01
Accurate cell grading of cancerous tissue pathological images is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in the preprocessing stage, each grayscale image patch with a fixed size is obtained using the center-proliferation segmentation (CPS) method and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract the multi-form feature vectors of each input image automatically, which considers multi-scale contextual information of deep layer maps sufficiently. After that, a convolutional neural network extreme learning machine (CNN-ELM) model is proposed to grade HCC nuclei. Finally, a back propagation (BP) algorithm, which contains a new up-sample method, is utilized to train the MFC-CNN-ELM architecture. The experimental comparison results demonstrate that our proposed MFC-CNN-ELM has superior performance compared with related works for HCC nuclei grading. Meanwhile, external validation using the ICPR 2014 HEp-2 cell dataset shows the good generalization of our MFC-CNN-ELM architecture. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Data Parallel Multizone Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)
1995-01-01
We have developed a data parallel multizone compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the "chimera" approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. The design choices can be summarized as: 1. finite differences on structured grids; 2. implicit time-stepping with either distributed solves or data motion and local solves; 3. sequential stepping through multiple zones with interzone data transfer via a distributed data structure. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran (HPF). One interesting feature is the issue of turbulence modeling, where the architecture of a parallel machine makes the use of an algebraic turbulence model awkward, whereas models based on transport equations are more natural. We will present some performance figures for the code on the CM-5, and consider the issues involved in transitioning the code to HPF for portability to other parallel platforms.
Development of 70 MW class superconducting generator with quick-response excitation
NASA Astrophysics Data System (ADS)
Miyaike, Kiyoshi; Kitajima, Toshio; Ito, Tetsuo
2002-03-01
The development of a superconducting generator was carried out for 12 years under the first stage of the Super-GM project. The 70 MW class model machine with quick-response excitation was manufactured and evaluated in the project. This type of superconducting generator improves power system stability against rapid load fluctuations during power system faults. This model machine achieved all development targets, including high stability during rapid excitation control. It was also connected to the actual 77 kV electrical power grid as a synchronous condenser and demonstrated the advantages and high operational reliability of the superconducting generator.
A microcomputer network for control of a continuous mining machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiffbauer, W.H.
1993-12-31
This report details a microcomputer-based control and monitoring network that was developed in-house by the U.S. Bureau of Mines and installed on a continuous mining machine. The network consists of microcomputers that are connected together via a single twisted-pair cable. Each microcomputer was developed to provide a particular function in the control process. Machine-mounted microcomputers, in conjunction with the appropriate sensors, provide closed-loop control of the machine, navigation, and environmental monitoring. Off-the-machine microcomputers provide remote control of the machine, sensor status, and a connection to the network so that external computers can access network data and control the continuous mining machine. Because of the network's generic structure, it can be installed on most mining machines.
Quick-Turn Finite Element Analysis for Plug-and-Play Satellite Structures
2007-03-01
produced from 0.375 inch round stock and turned on a machine lathe to achieve the shoulder feature and drilled to make it hollow. Figure 3.1... component, a linear taper was machined from the connection shoulder to the solar panel connecting fork. The part was then turned using the machine lathe... utilizing a modern five-axis Computer Numerical Code (CNC) machine mill, the process time could be reduced by as much as seventy-five percent and the
Multilevel-DC-Bus Inverter for Providing Sinusoidal and PWM Electrical Machine Voltages
Su, Gui-Jia [Knoxville, TN
2005-11-29
A circuit for controlling an ac machine comprises a full bridge network of commutation switches which are connected to supply current for a corresponding voltage phase to the stator windings, a plurality of diodes, each in parallel connection to a respective one of the commutation switches, a plurality of dc source connections providing a multi-level dc bus for the full bridge network of commutation switches to produce sinusoidal voltages or PWM signals, and a controller connected for control of said dc source connections and said full bridge network of commutation switches to output substantially sinusoidal voltages to the stator windings. With the invention, the number of semiconductor switches is reduced to m+3 for a multi-level dc bus having m levels. A method of machine control is also disclosed.
Machine Learning for Knowledge Extraction from PHR Big Data.
Poulymenopoulou, Michaela; Malamateniou, Flora; Vassilacopoulos, George
2014-01-01
Cloud computing, Internet of Things (IoT) and NoSQL database technologies can support a new generation of cloud-based PHR services that contain heterogeneous (unstructured, semi-structured and structured) patient data (health, social and lifestyle) from various sources, including automatically transmitted data from Internet-connected devices of the patient's living space (e.g. medical devices connected to patients at home care). The patient data stored in such PHR systems constitute big data whose analysis with the use of appropriate machine learning algorithms is expected to improve diagnosis and treatment accuracy, to cut healthcare costs and, hence, to improve the overall quality and efficiency of healthcare provided. This paper describes a health data analytics engine which uses machine learning algorithms for analyzing cloud-based PHR big health data towards knowledge extraction to support better healthcare delivery as regards disease diagnosis and prognosis. This engine comprises the data preparation, model generation and data analysis modules and runs on the cloud, taking advantage of the map/reduce paradigm provided by Apache Hadoop.
Machine Learning Technique to Find Quantum Many-Body Ground States of Bosons on a Lattice
NASA Astrophysics Data System (ADS)
Saito, Hiroki; Kato, Masaya
2018-01-01
We have developed a variational method to obtain many-body ground states of the Bose-Hubbard model using feedforward artificial neural networks. A fully connected network with a single hidden layer works better than a fully connected network with multiple hidden layers, and a multilayer convolutional network is more efficient than a fully connected network. AdaGrad and Adam are optimization methods that work well. Moreover, we show that many-body ground states with different numbers of particles can be generated by a single network.
NASA Technical Reports Server (NTRS)
Roberts, J. Brent; Robertson, F. R.; Funk, C.
2014-01-01
Hidden Markov models can be used to investigate structure of subseasonal variability. East African short rain variability has connections to large-scale tropical variability. MJO - Intraseasonal variations connected with appearance of "wet" and "dry" states. ENSO/IOZM SST and circulation anomalies are apparent during years of anomalous residence time in the subseasonal "wet" state. Similar results found in previous studies, but we can interpret this with respect to variations of subseasonal wet and dry modes. Reveal underlying connections between MJO/IOZM/ENSO with respect to East African rainfall.
Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang
2014-01-01
Decoding algorithms in motor Brain-Machine Interfaces translate neural signals into movement parameters. They usually assume that the connection between neural firing and movement is stationary, which is not true according to recent studies that observe time-varying neuron tuning properties. This property results from neural plasticity, motor learning, etc., and leads to degradation of decoding performance when the model is fixed. To track the non-stationary neuron tuning during decoding, we propose a dual-model approach based on a Monte Carlo point process filtering method that also enables estimation of the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method performs better than the one with static tuning parameters, which suggests a promising way to design a long-term-performing decoder model for Brain-Machine Interfaces.
Multi-category micro-milling tool wear monitoring with continuous hidden Markov models
NASA Astrophysics Data System (ADS)
Zhu, Kunpeng; Wong, Yoke San; Hong, Geok Soon
2009-02-01
In-process monitoring of tool conditions is important in micro-machining due to the high precision requirement and high tool wear rate. Tool condition monitoring in micro-machining poses new challenges compared to conventional machining. In this paper, a multi-category classification approach is proposed for tool flank wear state identification in micro-milling. Continuous Hidden Markov models (HMMs) are adapted for modeling the tool wear process in micro-milling and estimating the tool wear state given the cutting force features. For a noise-robust approach, the HMM outputs are passed through a median filter to suppress spurious state transitions, caused by the high noise level, before the tool actually enters the next state. A detailed study on the selection of HMM structures for tool condition monitoring (TCM) is presented. Case studies on tool state estimation in the micro-milling of pure copper and steel demonstrate the effectiveness and potential of these methods.
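An illustrative sketch of HMM-based wear-state identification: one Gaussian HMM is trained per wear state on cutting-force feature sequences, and a new sequence is assigned to the state whose model gives the highest log-likelihood. The hmmlearn package, the feature dimensionality, and the synthetic data are assumptions; the paper's continuous HMMs and the median filtering of outputs are only approximated here.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumption: hmmlearn package

rng = np.random.default_rng(5)
states = ["initial", "normal", "severe"]
offsets = {"initial": 0.0, "normal": 1.0, "severe": 2.5}

models = {}
for name in states:
    # training sequence of 2-D cutting-force features for this wear state
    train = offsets[name] + rng.normal(size=(200, 2))
    models[name] = GaussianHMM(n_components=3, n_iter=50).fit(train)

test = 1.1 + rng.normal(size=(60, 2))                # sequence with unknown state
scores = {name: m.score(test) for name, m in models.items()}
print("estimated wear state:", max(scores, key=scores.get))
```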
Microcomputer network for control of a continuous mining machine. Information circular/1993
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiffbauer, W.H.
1993-01-01
The paper details a microcomputer-based control and monitoring network that was developed in-house by the U.S. Bureau of Mines and installed on a Joy 14 continuous mining machine. The network consists of microcomputers that are connected together via a single twisted-pair cable. Each microcomputer was developed to provide a particular function in the control process. Machine-mounted microcomputers in conjunction with the appropriate sensors provide closed-loop control of the machine, navigation, and environmental monitoring. Off-the-machine microcomputers provide remote control of the machine, sensor status, and a connection to the network so that external computers can access network data and control the continuous mining machine. Although the network was installed on a Joy 14 continuous mining machine, its use extends beyond it. Its generic structure lends itself to installation onto most mining machine types.
Munteanu, Cristian R; Gonzalez-Diaz, Humberto; Garcia, Rafael; Loza, Mabel; Pazos, Alejandro
2015-01-01
The encoding of molecular information into molecular descriptors is the first step in in silico Chemoinformatics methods for Drug Design. Machine Learning methods are a complex solution for finding prediction models for specific biological properties of molecules. These models connect the molecular structure information, such as atom connectivity (molecular graphs) or physical-chemical properties of an atom/group of atoms, to the molecular activity (Quantitative Structure - Activity Relationship, QSAR). Due to the complexity of proteins, the prediction of their activity is a complicated task and the interpretation of the models is more difficult. The current review presents a series of 11 prediction models for proteins, implemented as free Web tools on an Artificial Intelligence Model Server in Biosciences, Bio-AIMS (http://bio-aims.udc.es/TargetPred.php). Six tools predict protein activity, two models evaluate drug - protein target interactions and the other three calculate protein - protein interactions. The input information is based on the protein 3D structure for nine models, the 1D peptide amino acid sequence for three tools and drug SMILES formulas for two servers. The molecular graph descriptor-based Machine Learning models could be useful tools for in silico screening of new peptides/proteins as future drug targets for specific treatments.
Exploring the potential of machine learning to break deadlock in convection parameterization
NASA Astrophysics Data System (ADS)
Pritchard, M. S.; Gentine, P.
2017-12-01
We explore the potential of modern machine learning tools (via TensorFlow) to replace the parameterization of deep convection in climate models. Our strategy begins by generating a large (~1 TB) training dataset from time-step level (30-min) output harvested from a one-year, zonally symmetric, uniform-SST aquaplanet integration of the SuperParameterized Community Atmosphere Model (SPCAM). We harvest the inputs and outputs connecting each of SPCAM's 8,192 embedded cloud-resolving model (CRM) arrays to its host climate model's arterial thermodynamic state variables to afford 143M independent training instances. We demonstrate that this dataset is sufficiently large to induce preliminary convergence for neural network prediction of the desired outputs of SP, i.e. CRM-mean convective heating and moistening profiles. Sensitivity of the machine learning convergence to the nuances of the TensorFlow implementation is discussed, as well as results from pilot tests of the neural network operating inline within SPCAM as a replacement for the (super)parameterization of convection.
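A minimal sketch of the emulation idea, assuming illustrative shapes: a feed-forward network maps a column's thermodynamic state to CRM-mean heating and moistening profiles. Input and output sizes, layer widths, and the random stand-in data are assumptions; the actual training set comes from SPCAM output as described above.

```python
import numpy as np
import tensorflow as tf

n_lev = 30
n_in = 2 * n_lev            # e.g. temperature and humidity profiles
n_out = 2 * n_lev           # heating and moistening tendency profiles

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_in,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(n_out),
])
model.compile(optimizer="adam", loss="mse")

X = np.random.randn(10_000, n_in).astype("float32")    # stand-in training columns
Y = np.random.randn(10_000, n_out).astype("float32")   # stand-in CRM tendencies
model.fit(X, Y, batch_size=512, epochs=2, verbose=0)
```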
Parallel processors and nonlinear structural dynamics algorithms and software
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.
1989-01-01
The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the Connection Machine, is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the element with an exchange of nodal forces at each time step. The architectural and C* programming language features of the Connection Machine are also summarized. Various alternate data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the Connection Machine is capable of outperforming the CRAY XMP/14.
NASA Astrophysics Data System (ADS)
Koptev, V. Yu
2017-02-01
The work presents the results of studying the basic interconnected criteria of individual equipment units in a transport-network machine fleet, as a function of production and mining factors, with the aim of improving transport systems management. Justifying the selection of a control system requires new methodologies and models, augmented with stability and transport-flow criteria and accounting for the dynamics of mining work development at mining sites. A necessary condition is accounting for the technical and operating parameters related to vehicle operation; modern open-pit mining dispatching systems must include this kind of information database. An algorithm for forming a machine fleet is presented, based on solving a multi-variant task connected with defining reasonable operating features of a machine working as part of a complex. The proposals in the work may apply to mining machines (drilling equipment, excavators), construction equipment (bulldozers, cranes, pile-drivers), city transport and other types of production activities that use machine fleets.
30 CFR 18.49 - Connection boxes on machines.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Connection boxes on machines. 18.49 Section 18.49 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING, EVALUATION, AND APPROVAL OF MINING PRODUCTS ELECTRIC MOTOR-DRIVEN MINE EQUIPMENT AND ACCESSORIES Construction and...
High-Throughput Gene Expression Profiles to Define Drug Similarity and Predict Compound Activity.
De Wolf, Hans; Cougnaud, Laure; Van Hoorde, Kirsten; De Bondt, An; Wegner, Joerg K; Ceulemans, Hugo; Göhlmann, Hinrich
2018-04-01
By adding biological information, beyond the chemical properties and desired effect of a compound, uncharted compound areas and connections can be explored. In this study, we add transcriptional information for 31K compounds of Janssen's primary screening deck, using the HT L1000 platform, and assess (a) the transcriptional connection score for generating compound similarities, (b) machine learning algorithms for generating target activity predictions, and (c) the scaffold hopping potential of the resulting hits. We demonstrate that the transcriptional connection score is best computed from the significant genes only and should be interpreted within its confidence interval, for which we provide the stats. These guidelines help to reduce noise, increase reproducibility, and enable the separation of specific and promiscuous compounds. The added value of machine learning is demonstrated for the NR3C1 and HSP90 targets. Support Vector Machine models yielded balanced accuracy values ≥80% when the expression values from DDIT4 & SERPINE1 and TMEM97 & SPR were used to predict the NR3C1 and HSP90 activity, respectively. Combining both models resulted in 22 new and confirmed HSP90-independent NR3C1 inhibitors, providing two scaffolds (i.e., pyrimidine and pyrazolo-pyrimidine), which could potentially be of interest in the treatment of depression (i.e., inhibiting the glucocorticoid receptor (i.e., NR3C1), while leaving its chaperone, HSP90, unaffected). As such, the initial hit rate increased by a factor of 300, as less, but more specific, chemistry could be screened, based on the upfront computed activity predictions.
Deep Restricted Kernel Machines Using Conjugate Feature Duality.
Suykens, Johan A K
2017-08-01
The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections and deep learning extensions as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKM are obtained by coupling the RKMs. The method is illustrated for deep RKM, consisting of three levels with a least squares support vector machine regression level and two kernel PCA levels. In its primal form also deep feedforward neural networks can be trained within this framework.
Whole brain white matter connectivity analysis using machine learning: An application to autism.
Zhang, Fan; Savadjiev, Peter; Cai, Weidong; Song, Yang; Rathi, Yogesh; Tunç, Birkan; Parker, Drew; Kapur, Tina; Schultz, Robert T; Makris, Nikos; Verma, Ragini; O'Donnell, Lauren J
2018-05-15
In this paper, we propose an automated white matter connectivity analysis method for machine learning classification and characterization of white matter abnormality via identification of discriminative fiber tracts. The proposed method uses diffusion MRI tractography and a data-driven approach to find fiber clusters corresponding to subdivisions of the white matter anatomy. Features extracted from each fiber cluster describe its diffusion properties and are used for machine learning. The method is demonstrated by application to a pediatric neuroimaging dataset from 149 individuals, including 70 children with autism spectrum disorder (ASD) and 79 typically developing controls (TDC). A classification accuracy of 78.33% is achieved in this cross-validation study. We investigate the discriminative diffusion features based on a two-tensor fiber tracking model. We observe that the mean fractional anisotropy from the second tensor (associated with crossing fibers) is most affected in ASD. We also find that local along-tract (central cores and endpoint regions) differences between ASD and TDC are helpful in differentiating the two groups. These altered diffusion properties in ASD are associated with multiple robustly discriminative fiber clusters, which belong to several major white matter tracts including the corpus callosum, arcuate fasciculus, uncinate fasciculus and aslant tract; and the white matter structures related to the cerebellum, brain stem, and ventral diencephalon. These discriminative fiber clusters, a small part of the whole brain tractography, represent the white matter connections that could be most affected in ASD. Our results indicate the potential of a machine learning pipeline based on white matter fiber clustering. Copyright © 2017 Elsevier Inc. All rights reserved.
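For illustration, the classification stage described above can be sketched as follows: per-subject feature vectors (one diffusion measure per fiber cluster) are fed to a support vector machine and evaluated with cross-validation. The feature matrix here is random stand-in data; the cluster count, simulated group effect, and kernel choice are assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_asd, n_tdc, n_clusters = 70, 79, 800

X = rng.normal(size=(n_asd + n_tdc, n_clusters))   # e.g. mean FA per fiber cluster
X[:n_asd] += 0.1                                    # small simulated group effect
y = np.array([1] * n_asd + [0] * n_tdc)            # 1 = ASD, 0 = TDC

acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```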
NASA Technical Reports Server (NTRS)
Rogers, David
1988-01-01
The advent of the Connection Machine profoundly changes the world of supercomputers. The highly nontraditional architecture makes possible the exploration of algorithms that were impractical for standard Von Neumann architectures. Sparse distributed memory (SDM) is an example of such an algorithm. Sparse distributed memory is a particularly simple and elegant formulation for an associative memory. The foundations for sparse distributed memory are described, and some simple examples of using the memory are presented. The relationship of sparse distributed memory to three important computational systems is shown: random-access memory, neural networks, and the cerebellum of the brain. Finally, the implementation of the algorithm for sparse distributed memory on the Connection Machine is discussed.
Hybrid-secondary uncluttered induction machine
Hsu, John S.
2001-01-01
An uncluttered secondary induction machine (100) includes an uncluttered rotating transformer (66) which is mounted on the same shaft as the rotor (73) of the induction machine. Current in the rotor (73) is electrically connected to current in the rotor winding (67) of the transformer, which is not electrically connected to, but is magnetically coupled to, a stator secondary winding (40). The stator secondary winding (40) is alternately connected to an effective resistance (41), an AC source inverter (42) or a magnetic switch (43) to provide a cost effective slip-energy-controlled, adjustable speed, induction motor that operates over a wide speed range from below synchronous speed to above synchronous speed based on the AC line frequency fed to the stator.
POLYSHIFT Communications Software for the Connection Machine System CM-200
George, William; Brickner, Ralph G.; Johnsson, S. Lennart
1994-01-01
We describe the use and implementation of a polyshift function PSHIFT for circular shifts and end-off shifts. Polyshift is useful in many scientific codes using regular grids, such as finite difference codes in several dimensions, multigrid codes, molecular dynamics computations, and lattice gauge physics computations, such as quantum chromodynamics (QCD) calculations. Our implementation of the PSHIFT function on the Connection Machine systems CM-2 and CM-200 offers a speedup of up to a factor of 3-4 compared with CSHIFT when the local data motion within a node is small. The PSHIFT routine is included in the Connection Machine Scientific Software Library (CMSSL).
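An illustrative analogue only: np.roll plays the role of a circular shift (the CSHIFT/PSHIFT pattern) on a regular grid, showing where such shifts appear in finite difference codes. The grid size and the 5-point Laplacian stencil are assumptions, not taken from the paper.

```python
import numpy as np

u = np.random.rand(64, 64)                 # periodic 2-D field

# Neighbor values gathered by circular shifts along each axis and direction.
north = np.roll(u, -1, axis=0)
south = np.roll(u, +1, axis=0)
east = np.roll(u, -1, axis=1)
west = np.roll(u, +1, axis=1)

laplacian = north + south + east + west - 4.0 * u   # 5-point stencil
print(laplacian.shape)
```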
2013-11-01
...machine learning techniques used in BBAC to make predictions about the intent of actors establishing TCP connections and issuing HTTP requests. We discuss pragmatic challenges and solutions we encountered in implementing and evaluating BBAC, discussing (a) the general concepts underlying BBAC, (b) challenges we have encountered in identifying suitable datasets, (c) mitigation strategies to cope... and describe current plans for transitioning BBAC capabilities into the Department of Defense together with lessons learned for the machine learning...
Design of electric control system for automatic vegetable bundling machine
NASA Astrophysics Data System (ADS)
Bao, Yan
2017-06-01
A design is presented for the electric control system of an automatic vegetable bundling machine that meets the structural requirements of automatic bundling and has the advantages of a simple circuit, low cost, and easy expansion. The bundling machine uses sensors for detection and control in order to satisfy the control requirements; the binding force can be adjusted with a button; the strapping speed can likewise be set with keys; the sensors are wired to the mechanical line for convenient operation; and the system can be plugged directly into a 220 V power supply. During operation, the sensors transmit signals to a microcontroller (MCU), which controls and drives the small motors through the drive and control procedures. The working principles of the LED control circuit and the temperature control circuit are described, completing the design of the electric control system of the automatic vegetable bundling machine.
Close to real life. [solving for transonic flow about lifting airfoils using supercomputers
NASA Technical Reports Server (NTRS)
Peterson, Victor L.; Bailey, F. Ron
1988-01-01
NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device 'connection machine' processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has done important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high-temperature reactive gasdynamic computations.
Dynamic Optical Networks for Future Internet Environments
NASA Astrophysics Data System (ADS)
Matera, Francesco
2014-05-01
This article reports an overview of the evolution of the optical network scenario, taking into account the exponential growth of connected devices, big data, and cloud computing that is driving a concrete transformation impacting the information and communication technology world. This hyper-connected scenario is deeply affecting relationships between individuals, enterprises, citizens, and public administrations, fostering innovative use cases in practically any environment and market, and introducing new opportunities and new challenges. The successful realization of this hyper-connected scenario depends on different elements of the ecosystem. In particular, it builds on connectivity and functionalities allowed by converged next-generation networks and their capacity to support and integrate with the Internet of Things, machine-to-machine, and cloud computing. This article aims to provide some hints of this scenario and to contribute to the analysis of its impact on optical system and network issues and requirements. In particular, the role of the software-defined network is investigated by taking into account all scenarios regarding data centers, cloud computing, and machine-to-machine, and by trying to illustrate all the advantages that could be introduced by advanced optical communications.
Meszlényi, Regina J.; Buza, Krisztian; Vidnyánszky, Zoltán
2017-01-01
Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable to combine information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility follows that our proposed CCNN model can be easily adapted to a wide range of connectome based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network. PMID:29089883
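A minimal sketch of the connectome-convolutional idea follows: the N x N connectivity matrix is treated as a one-channel image and filtered with line-shaped convolutions over rows and then columns before a dense classifier. Layer counts and sizes here are assumptions for illustration and may differ from the published CCNN.

# Minimal sketch of a connectome-convolutional classifier: line-shaped
# convolutions over the rows and columns of an N x N connectivity matrix,
# followed by a dense layer. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ConnectomeCNN(nn.Module):
    def __init__(self, n_regions=90, n_classes=2):
        super().__init__()
        self.row_conv = nn.Conv2d(1, 16, kernel_size=(1, n_regions))   # mixes each row
        self.col_conv = nn.Conv2d(16, 32, kernel_size=(n_regions, 1))  # mixes columns
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: (batch, 1, n_regions, n_regions)
        h = torch.relu(self.row_conv(x))  # -> (batch, 16, n_regions, 1)
        h = torch.relu(self.col_conv(h))  # -> (batch, 32, 1, 1)
        return self.fc(h.flatten(1))

model = ConnectomeCNN()
fc_matrix = torch.randn(4, 1, 90, 90)     # e.g., correlation matrices from fMRI
print(model(fc_matrix).shape)             # torch.Size([4, 2])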
A Compatible Hardware/Software Reliability Prediction Model.
1981-07-22
machines. In particular, he was interested in the following problem: assume that one has a collection of connected elements computing and transmitting...software reliability prediction model is desirable, the findings about the Weibull distribution are intriguing. After collecting failure data from several...capacitor, some of the added charge carriers are collected by the capacitor. If the added charge is sufficiently large, the information stored is changed
Introduction to a system for implementing neural net connections on SIMD architectures
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl
1988-01-01
Neural networks have attracted much interest recently, and using parallel architectures to simulate neural networks is a natural and necessary application. The SIMD model of parallel computation is chosen, because systems of this type can be built with large numbers of processing elements. However, such systems are not naturally suited to generalized communication. A method is proposed that allows an implementation of neural network connections on massively parallel SIMD architectures. The key to this system is an algorithm permitting the formation of arbitrary connections between the neurons. A feature is the ability to add new connections quickly. It also has error recovery ability and is robust over a variety of network topologies. Simulations of the general connection system, and its implementation on the Connection Machine, indicate that the time and space requirements are proportional to the product of the average number of connections per neuron and the diameter of the interconnection network.
Graphical Modeling of Shipboard Electric Power Distribution Systems
1993-12-01
examined. A means of modeling a load for a synchronous generator is then shown which accurately interrelates the loading of the generator and the...frequency and voltage output of the machine. This load is then connected to the synchronous generator and two different scenarios are examined, including a...
A system for routing arbitrary directed graphs on SIMD architectures
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl
1987-01-01
There are many problems which can be described in terms of directed graphs that contain a large number of vertices where simple computations occur using data from connecting vertices. A method is given for parallelizing such problems on an SIMD machine model that is bit-serial and uses only nearest neighbor connections for communication. Each vertex of the graph will be assigned to a processor in the machine. Algorithms are given that will be used to implement movement of data along the arcs of the graph. This architecture and algorithms define a system that is relatively simple to build and can do graph processing. All arcs can be traversed in parallel in time O(T), where T is empirically proportional to the diameter of the interconnection network times the average degree of the graph. Modifying or adding a new arc takes the same time as parallel traversal.
On some methods of discrete systems behaviour simulation
NASA Astrophysics Data System (ADS)
Sytnik, Alexander A.; Posohina, Natalia I.
1998-07-01
The project addresses one of the fundamental problems of mathematical cybernetics and discrete mathematics, namely the synthesis and analysis of control systems based on the study of their functional capabilities and reliable behaviour. This work deals with the restoration of finite-state machine behaviour when structural redundancy is not available and direct updating of the current behaviour is impossible. The method described below uses number theory to build a special model of the finite-state machine: it simulates the transitions between the states of the finite-state machine using specially defined functions of exponential type. With the help of several methods of number theory and algebra, it is easy to determine whether the behaviour can be restored (with this method) in a given case, and also to derive the class of finite-state machines that admit such restoration.
Position feedback control system
Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.
2003-01-01
Disclosed is a system and method for independently evaluating the spatial positional performance of a machine having a movable member, comprising an articulated coordinate measuring machine comprising: a first revolute joint; a probe arm, having a proximal end rigidly attached to the first joint, and having a distal end with a probe tip attached thereto, wherein the probe tip is pivotally mounted to the movable machine member; a second revolute joint; a first support arm serially connecting the first joint to the second joint; and coordinate processing means, operatively connected to the first and second revolute joints, for calculating the spatial coordinates of the probe tip; means for kinematically constraining the articulated coordinate measuring machine to a working surface; and comparator means, in operative association with the coordinate processing means and with the movable machine, for comparing the true position of the movable machine member, as measured by the true position of the probe tip, with the desired position of the movable machine member.
Identification of neural connectivity signatures of autism using machine learning
Deshpande, Gopikrishna; Libero, Lauren E.; Sreenivasan, Karthik R.; Deshpande, Hrishikesh D.; Kana, Rajesh K.
2013-01-01
Alterations in interregional neural connectivity have been suggested as a signature of the pathobiology of autism. There have been many reports of functional and anatomical connectivity being altered while individuals with autism are engaged in complex cognitive and social tasks. Although disrupted instantaneous correlation between cortical regions observed from functional MRI is considered to be an explanatory model for autism, the causal influence of a brain area on another (effective connectivity) is a vital link missing in these studies. The current study focuses on addressing this in an fMRI study of Theory-of-Mind (ToM) in 15 high-functioning adolescents and adults with autism and 15 typically developing control participants. Participants viewed a series of comic strip vignettes in the MRI scanner and were asked to choose the most logical end to the story from three alternatives, separately for trials involving physical and intentional causality. The mean time series, extracted from 18 activated regions of interest, were processed using a multivariate autoregressive model (MVAR) to obtain the causality matrices for each of the 30 participants. These causal connectivity weights, along with assessment scores, functional connectivity values, and fractional anisotropy obtained from DTI data for each participant, were submitted to a recursive cluster elimination based support vector machine classifier to determine the accuracy with which the classifier can predict a novel participant's group membership (autism or control). We found a maximum classification accuracy of 95.9% with 19 features which had the highest discriminative ability between the groups. All of the 19 features were effective connectivity paths, indicating that causal information may be critical in discriminating between autism and control groups. These effective connectivity paths were also found to be significantly greater in controls as compared to ASD participants and consisted predominantly of outputs from the fusiform face area and middle temporal gyrus indicating impaired connectivity in ASD participants, particularly in the social brain areas. These findings collectively point toward the fact that alterations in causal connectivity in the brain in ASD could serve as a potential non-invasive neuroimaging signature for autism. PMID:24151458
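A minimal sketch of the two computational stages described, a lag-1 multivariate autoregressive (MVAR) fit that yields a directed influence matrix per participant followed by recursive feature elimination with a linear SVM, is given below. The ROI count, lag order, and synthetic data are assumptions; the study's exact MVAR formulation and recursive cluster elimination are not reproduced.

# Minimal sketch: (1) lag-1 MVAR fit giving a directed influence matrix A,
# (2) recursive feature elimination with a linear SVM on such features.
# ROI count, lag order, and classifier settings are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

def mvar_lag1(ts):
    """ts: (T, n_rois). Returns A with A[i, j] = influence of ROI j on ROI i."""
    past, present = ts[:-1], ts[1:]
    W, *_ = np.linalg.lstsq(past, present, rcond=None)
    return W.T

rng = np.random.default_rng(2)
n_subjects, n_rois, T = 30, 18, 200
X = np.array([mvar_lag1(rng.normal(size=(T, n_rois))).ravel()
              for _ in range(n_subjects)])          # one feature vector per subject
y = np.r_[np.ones(15), np.zeros(15)]                # autism vs control labels

selector = RFE(SVC(kernel="linear"), n_features_to_select=19)
selector.fit(X, y)
print("selected connectivity paths:", np.where(selector.support_)[0][:5], "...")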
Reliability enumeration model for the gear in a multi-functional machine
NASA Astrophysics Data System (ADS)
Nasution, M. K. M.; Ambarita, H.
2018-02-01
The angle and direction of motion play an important role in the ability of a multi-functional machine to perform the task with which it is charged. The movement can be a rotational action: the rotation can be produced by connecting the generator to the arm through a hinge formed from two rounded surfaces. The rotation of the entire arm is carried out through the interconnection between two surfaces carrying a toothed ring. This connection changes according to the angle of motion, and every tooth of the serration contributes to the success of this process; therefore, a robust measurement model for the arm is established based on canonical provisions.
Learning Extended Finite State Machines
NASA Technical Reports Server (NTRS)
Cassel, Sofia; Howar, Falk; Jonsson, Bengt; Steffen, Bernhard
2014-01-01
We present an active learning algorithm for inferring extended finite state machines (EFSM)s, combining data flow and control behavior. Key to our learning technique is a novel learning model based on so-called tree queries. The learning algorithm uses the tree queries to infer symbolic data constraints on parameters, e.g., sequence numbers, time stamps, identifiers, or even simple arithmetic. We describe sufficient conditions for the properties that the symbolic constraints provided by a tree query in general must have to be usable in our learning model. We have evaluated our algorithm in a black-box scenario, where tree queries are realized through (black-box) testing. Our case studies include connection establishment in TCP and a priority queue from the Java Class Library.
Feder, Stephan; Sundermann, Benedikt; Wersching, Heike; Teuber, Anja; Kugel, Harald; Teismann, Henning; Heindel, Walter; Berger, Klaus; Pfleiderer, Bettina
2017-11-01
Combinations of resting-state fMRI and machine-learning techniques are increasingly employed to develop diagnostic models for mental disorders. However, little is known about the neurobiological heterogeneity of depression and diagnostic machine learning has mainly been tested in homogeneous samples. Our main objective was to explore the inherent structure of a diverse unipolar depression sample. The secondary objective was to assess whether such information can improve diagnostic classification. We analyzed data from 360 patients with unipolar depression and 360 non-depressed population controls, who were subdivided into two independent subsets. Cluster analyses (unsupervised learning) of functional connectivity were used to generate hypotheses about potential patient subgroups from the first subset. The relationship of clusters with demographical and clinical measures was assessed. Subsequently, diagnostic classifiers (supervised learning), which incorporated information about these putative depression subgroups, were trained. Exploratory cluster analyses revealed two weakly separable subgroups of depressed patients. These subgroups differed in the average duration of depression and in the proportion of patients with concurrently severe depression and anxiety symptoms. The diagnostic classification models performed at chance level. It remains unresolved whether subgroups represent distinct biological subtypes, variability of continuous clinical variables, or in part an overfitting of sparsely structured data. Functional connectivity in unipolar depression is associated with general disease effects. Cluster analyses provide hypotheses about potential depression subtypes. Diagnostic models did not benefit from this additional information regarding heterogeneity. Copyright © 2017 Elsevier B.V. All rights reserved.
Jin, Seung-Hyun; Chung, Chun Kee
2017-01-01
The main aim of the present study was to evaluate whether resting-state functional connectivity of magnetoencephalography (MEG) signals can differentiate patients with mesial temporal lobe epilepsy (MTLE) from healthy controls (HC) and can differentiate between right and left MTLE as a diagnostic biomarker. To this end, a support vector machine (SVM) method among various machine learning algorithms was employed. We compared resting-state functional networks between 46 MTLE (right MTLE=23; left MTLE=23) patients with histologically proven hippocampal sclerosis (HS) who were free of seizure after surgery, and 46 HC. The optimal SVM group classifier distinguished MTLE patients with a mean accuracy of 95.1% (sensitivity=95.8%; specificity=94.3%). Increased connectivity including the right posterior cingulate gyrus and decreased connectivity including at least one sensory-related resting-state network were key features reflecting the differences between MTLE patients and HC. The optimal SVM model distinguished between right and left MTLE patients with a mean accuracy of 76.2% (sensitivity=76.0%; specificity=76.5%). We showed the potential of electrophysiological resting-state functional connectivity, which reflects brain network reorganization in MTLE patients, as a possible diagnostic biomarker to differentiate MTLE patients from HC and differentiate between right and left MTLE patients. Copyright © 2016 Elsevier B.V. All rights reserved.
The paradigm compiler: Mapping a functional language for the connection machine
NASA Technical Reports Server (NTRS)
Dennis, Jack B.
1989-01-01
The Paradigm Compiler implements a new approach to compiling programs written in high level languages for execution on highly parallel computers. The general approach is to identify the principal data structures constructed by the program and to map these structures onto the processing elements of the target machine. The mapping is chosen to maximize performance as determined through compile time global analysis of the source program. The source language is Sisal, a functional language designed for scientific computations, and the target language is Paris, the published low level interface to the Connection Machine. The data structures considered are multidimensional arrays whose dimensions are known at compile time. Computations that build such arrays usually offer opportunities for highly parallel execution; they are data parallel. The Connection Machine is an attractive target for these computations, and the parallel for construct of the Sisal language is a convenient high level notation for data parallel algorithms. The principles and organization of the Paradigm Compiler are discussed.
Mathematical model of an air-filled alpha stirling refrigerator
NASA Astrophysics Data System (ADS)
McFarlane, Patrick; Semperlotti, Fabio; Sen, Mihir
2013-10-01
This work develops a mathematical model for an alpha Stirling refrigerator with air as the working fluid and will be useful in optimizing the mechanical design of these machines. Two pistons cyclically compress and expand air while moving sinusoidally in separate chambers connected by a regenerator, thus creating a temperature difference across the system. A complete non-linear mathematical model of the machine, including air thermodynamics, and heat transfer from the walls, as well as heat transfer and fluid resistance in the regenerator, is developed. Non-dimensional groups are derived, and the mathematical model is numerically solved. The heat transfer and work are found for both chambers, and the coefficient of performance of each chamber is calculated. Important design parameters are varied and their effect on refrigerator performance determined. This sensitivity analysis, which shows what the significant parameters are, is a useful tool for the design of practical Stirling refrigeration systems.
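As a rough sketch of the kinematic core of such a model, the following computes the chamber volumes for two sinusoidally moving pistons a quarter cycle apart and an isothermal, ideal-gas (Schmidt-type) pressure. All parameter values are assumptions, and the paper's full non-linear treatment of wall heat transfer and regenerator losses is not reproduced.

# Minimal sketch of the kinematics of an alpha Stirling machine: two pistons
# move sinusoidally 90 degrees apart; pressure follows from an isothermal,
# ideal-gas (Schmidt-type) simplification. All parameter values are assumptions.
import numpy as np

R = 287.0                  # J/(kg K), gas constant for air
m = 1e-4                   # kg of air in the machine (assumed)
Tc, Th = 250.0, 320.0      # cold / hot chamber temperatures, K (assumed)
Vswept, Vdead = 5e-5, 1e-5 # m^3 (assumed)
phase = np.pi / 2

theta = np.linspace(0, 2 * np.pi, 361)                      # one crank revolution
Vc = Vdead + 0.5 * Vswept * (1 + np.cos(theta))             # compression space
Ve = Vdead + 0.5 * Vswept * (1 + np.cos(theta - phase))     # expansion space
p = m * R / (Vc / Tc + Ve / Th)                             # isothermal pressure, Pa

V_total = Vc + Ve
W_cycle = np.sum(0.5 * (p[:-1] + p[1:]) * np.diff(V_total)) # trapezoidal cyclic p dV
print(f"peak pressure {p.max():.0f} Pa, cycle work {W_cycle:.4f} J")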
RISC-type microprocessors may revolutionize aerospace simulation
NASA Astrophysics Data System (ADS)
Jackson, Albert S.
The author explores the application of RISC (reduced instruction set computer) processors in massively parallel computer (MPC) designs for aerospace simulation. The MPC approach is shown to be well adapted to the needs of aerospace simulation. It is shown that any of the three common types of interconnection schemes used with MPCs are effective for general-purpose simulation, although the bus- or switch-oriented machines are somewhat easier to use. For partial differential equation models, the hypercube approach at first glance appears more efficient because the nearest-neighbor connections required for three-dimensional models are hardwired in a hypercube machine. However, the data broadcast ability of a bus system, combined with the fact that data can be transmitted over a bus as soon as it has been updated, makes the bus approach very competitive with the hypercube approach even for these types of models.
A fully programmable 100-spin coherent Ising machine with all-to-all connections
NASA Astrophysics Data System (ADS)
McMahon, Peter; Marandi, Alireza; Haribara, Yoshitaka; Hamerly, Ryan; Langrock, Carsten; Tamate, Shuhei; Inagaki, Takahiro; Takesue, Hiroki; Utsunomiya, Shoko; Aihara, Kazuyuki; Byer, Robert; Fejer, Martin; Mabuchi, Hideo; Yamamoto, Yoshihisa
We present a scalable optical processor with electronic feedback, based on networks of optical parametric oscillators. The design of our machine is inspired by adiabatic quantum computers, although it is not an AQC itself. Our prototype machine is able to find exact solutions of, or sample good approximate solutions to, a variety of hard instances of Ising problems with up to 100 spins and 10,000 spin-spin connections. This research was funded by the Impulsing Paradigm Change through Disruptive Technologies (ImPACT) Program of the Council of Science, Technology and Innovation (Cabinet Office, Government of Japan).
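For orientation, the optimization target of such a machine is the Ising energy E(s) = -sum_{i<j} J_ij s_i s_j over spins s_i in {-1, +1}. The sketch below is only a classical software analogue: it evaluates that energy on a random fully connected 100-spin instance and improves it by greedy single-spin flips; it does not model the optical parametric oscillator network.

# Conceptual analogue only: the Ising problem the optical machine samples.
# Greedy single-spin flips on a random fully connected 100-spin instance.
import numpy as np

rng = np.random.default_rng(3)
n = 100
J = rng.normal(size=(n, n))
J = np.triu(J, 1)
J = J + J.T                                # symmetric couplings, zero diagonal

def energy(s):
    return -0.5 * s @ J @ s

s = rng.choice([-1, 1], size=n)
for sweep in range(50):                    # greedy descent
    for i in range(n):
        if s[i] * (J[i] @ s) < 0:          # flipping spin i lowers the energy
            s[i] = -s[i]
print("final energy:", energy(s))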
ERIC Educational Resources Information Center
Smith, David Arthur
2010-01-01
Much recent work in natural language processing treats linguistic analysis as an inference problem over graphs. This development opens up useful connections between machine learning, graph theory, and linguistics. The first part of this dissertation formulates syntactic dependency parsing as a dynamic Markov random field with the novel…
A Structural Perspective on the Dynamics of Kinesin Motors
Hyeon, Changbong; Onuchic, José N.
2011-01-01
Despite significant fluctuation under thermal noise, biological machines in cells perform their tasks with exquisite precision. Using molecular simulation of a coarse-grained model and theoretical arguments, we envisaged how kinesin, a prototype of biological machines, generates force and regulates its dynamics to sustain persistent motor action. A structure-based model, which can be versatile in adapting its structure to external stresses while maintaining its native fold, was employed to account for several features of kinesin dynamics along the biochemical cycle. This analysis complements our current understandings of kinesin dynamics and connections to experiments. We propose a thermodynamic cycle for kinesin that emphasizes the mechanical and regulatory role of the neck linker and clarify issues related to the motor directionality, and the difference between the external stalling force and the internal tension responsible for the head-head coordination. The comparison between the thermodynamic cycle of kinesin and macroscopic heat engines highlights the importance of structural change as the source of work production in biomolecular machines. PMID:22261064
Erraguntla, Madhav; Zapletal, Josef; Lawley, Mark
2017-12-01
The impact of infectious disease on human populations is a function of many factors including environmental conditions, vector dynamics, transmission mechanics, social and cultural behaviors, and public policy. A comprehensive framework for disease management must fully connect the complete disease lifecycle, including emergence from reservoir populations, zoonotic vector transmission, and impact on human societies. The Framework for Infectious Disease Analysis is a software environment and conceptual architecture for data integration, situational awareness, visualization, prediction, and intervention assessment. Framework for Infectious Disease Analysis automatically collects biosurveillance data using natural language processing, integrates structured and unstructured data from multiple sources, applies advanced machine learning, and uses multi-modeling for analyzing disease dynamics and testing interventions in complex, heterogeneous populations. In the illustrative case studies, natural language processing from social media, news feeds, and websites was used for information extraction, biosurveillance, and situation awareness. Classification machine learning algorithms (support vector machines, random forests, and boosting) were used for disease predictions.
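A minimal sketch of the kind of supervised text classification mentioned above (labelling scraped text snippets as disease-relevant or not) with TF-IDF features and a random forest, one of the classifier families named, follows. The example documents and labels are invented placeholders.

# Minimal sketch of supervised classification of biosurveillance text snippets
# using TF-IDF features and a random forest. Training snippets and labels are
# invented placeholders.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

docs = ["hospital reports spike in fever and rash cases",
        "city council debates new parking rules"]
labels = [1, 0]                      # 1 = disease-relevant, 0 = not relevant

pipeline = make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=100))
pipeline.fit(docs, labels)
print(pipeline.predict(["officials confirm outbreak of dengue fever"]))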
24 CFR 3280.607 - Plumbing fixtures.
Code of Federal Regulations, 2014 CFR
2014-04-01
... two or more compartments, dishwashers, clothes washing machines, laundry tubs, bath tubs, and not less... for Safety Performance Specifications and Methods of Test for Safety Glazing Materials Used in...) Dishwashing machines. (i) A dishwashing machine shall not be directly connected to any waste piping, but shall...
Data Base Management: Proceedings of a Conference, November 1-2, 1984 Held at Monterey, California.
1985-07-31
Dolby (San Jose State University, San Jose, California): Put the Information in the Database, Not the Program. 4:15 Douglas Lenat: Relevance of Machine... The network model permits multiple owners for one subsidiary entity. The DAPLEX network model includes the subset connection as well. The SOCRATE system...
Cheng, J C; Rogachov, A; Hemington, K S; Kucyi, A; Bosma, R L; Lindquist, M A; Inman, R D; Davis, K D
2018-04-26
Communication within the brain is dynamic. Chronic pain can also be dynamic, with varying intensities experienced over time. Little is known of how brain dynamics are disrupted in chronic pain, or how they relate to patients' pain assessed at various time-scales (e.g., short-term state versus long-term trait). Patients experience pain "traits" indicative of their general condition, but also pain "states" that vary day to day. Here, we used network-based multivariate machine learning to determine how patterns in dynamic and static brain communication are related to different characteristics and timescales of chronic pain. Our models were based on resting state dynamic and static functional connectivity (dFC, sFC) in patients with chronic neuropathic pain (NP) or non-NP. The most prominent networks in the models were the default mode, salience, and executive control networks. We also found that cross-network measures of dFC rather than sFC were better associated with patients' pain, but only in those with NP features. These associations were also more highly and widely associated with measures of trait rather than state pain. Furthermore, greater dynamic connectivity with executive control networks was associated with milder neuropathic pain, but greater dynamic connectivity with limbic networks was associated with greater neuropathic pain. Compared with healthy individuals, the dFC features most highly related to trait neuropathic pain were also more abnormal in patients with greater pain. Our findings indicate that dFC reflects patients' overall pain condition (i.e., trait pain), not just their current state, and is impacted by complexities in pain features beyond intensity.
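For readers unfamiliar with the distinction, the sketch below computes static functional connectivity (one correlation matrix per scan) and sliding-window dynamic functional connectivity from ROI time series, summarising the latter by its variability over windows. The window length and summary statistic are common choices, not necessarily those used in this study.

# Minimal sketch: static FC = one correlation matrix over the whole scan;
# dynamic FC = correlation matrices in sliding windows, summarised here by
# their standard deviation over time. Window length is an assumed choice.
import numpy as np

rng = np.random.default_rng(4)
T, n_rois, win, step = 300, 10, 60, 5
ts = rng.normal(size=(T, n_rois))              # ROI time series (TRs x ROIs)

sfc = np.corrcoef(ts.T)                        # static functional connectivity

windows = [np.corrcoef(ts[t:t + win].T)        # dynamic FC per window
           for t in range(0, T - win + 1, step)]
dfc_variability = np.std(np.stack(windows), axis=0)

print(sfc.shape, dfc_variability.shape)        # (10, 10) (10, 10)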
Zhou, Yongxia; Yu, Fang; Duong, Timothy
2014-01-01
This study employed graph theory and machine learning analysis of multiparametric MRI data to improve characterization and prediction in autism spectrum disorders (ASD). Data from 127 children with ASD (13.5±6.0 years) and 153 age- and gender-matched typically developing children (14.5±5.7 years) were selected from the multi-center Functional Connectome Project. Regional gray matter volume and cortical thickness increased, whereas white matter volume decreased in ASD compared to controls. Small-world network analysis of quantitative MRI data demonstrated decreased global efficiency based on gray matter cortical thickness but not with functional connectivity MRI (fcMRI) or volumetry. An integrative model of 22 quantitative imaging features was used for classification and prediction of phenotypic features that included the autism diagnostic observation schedule, the revised autism diagnostic interview, and intelligence quotient scores. Among the 22 imaging features, four (caudate volume, caudate-cortical functional connectivity and inferior frontal gyrus functional connectivity) were found to be highly informative, markedly improving classification and prediction accuracy when compared with the single imaging features. This approach could potentially serve as a biomarker in prognosis, diagnosis, and monitoring disease progression.
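A minimal sketch of the graph-theory step follows: a region-by-region correlation matrix is thresholded into a binary adjacency graph and its global efficiency computed with NetworkX. The threshold value and toy data are assumptions.

# Minimal sketch of the graph-theory step: threshold a region-by-region
# correlation matrix into a binary graph and compute global efficiency.
# The 0.3 threshold and the toy data are illustrative assumptions.
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
corr = np.corrcoef(rng.normal(size=(40, 68)).T)   # 68 regions, toy data
adjacency = (np.abs(corr) > 0.3).astype(int)
np.fill_diagonal(adjacency, 0)

G = nx.from_numpy_array(adjacency)
print("global efficiency:", nx.global_efficiency(G))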
Kajimura, Shogo; Kochiyama, Takanori; Abe, Nobuhito; Nomura, Michio
2018-04-21
The default mode network (DMN) is considered a unified core brain function for generating subjective mental experiences, such as mind wandering. We propose a novel cognitive framework for understanding the unity of the DMN from the perspective of hemispheric asymmetry. Using transcranial direct current stimulation (tDCS), effective connectivity estimation, and machine learning, we show that the bilateral angular gyri (AG), which are core regions of the DMN, exhibit heterogeneity in both inherent network organization and mind wandering regulation. Inherent heterogeneities are present between the right and left AG regarding not only effective connectivity, but also mind wandering regulation; the right AG is related to mind-wandering reduction, whereas the left AG is related to mind-wandering generation. Further supporting this observation, we found that only anodal tDCS of the right AG induced machine learning-detectable changes in effective connectivity and regional amplitude, which could possibly be linked to reduced mind wandering. Our findings highlight the importance of hemispheric asymmetry to further understand the function of the DMN and contribute to the emerging neural model of mind wandering, which is necessary to understand the nature of the human mind.
NASA Astrophysics Data System (ADS)
Ono, Satoru; Watanabe, Takashi
In recent years, rapid progress in the development of hardware and software technologies has enabled tiny, low-cost information devices (hereinafter referred to as Machines) to become widely available. M2M (Machine to Machine) has attracted much attention: many tiny machines are connected to each other through networks with minimal human intervention to provide smooth and intelligent management. M2M is a promising core technology providing timely, flexible, efficient and comprehensive service at low cost. M2M has a wide variety of applications, including energy management systems, environmental monitoring systems, intelligent transport systems, industrial automation systems and others. M2M consists of terminals and the networks that connect them. In this paper, we mainly focus on M2M networking and discuss the future direction of the technology.
General Theory of the Double Fed Synchronous Machine. Ph.D. Thesis - Swiss Technological Univ., 1950
NASA Technical Reports Server (NTRS)
El-Magrabi, M. G.
1982-01-01
Motor and generator operation of a double-fed synchronous machine were studied and physically and mathematically treated. Experiments with different connections, voltages, etc. were carried out. It was concluded that a certain degree of asymmetry is necessary for the best utilization of the machine.
Flexible drive allows blind machining and welding in hard-to-reach areas
NASA Technical Reports Server (NTRS)
Harvey, D. E.; Rohrberg, R. G.
1966-01-01
Flexible power and control unit performs welding and machining operations in confined areas. A machine/weld head is connected to the unit by a flexible transmission shaft, and a locking-indexing collar is incorporated onto the head to allow it to be placed and held in position.
NASA Astrophysics Data System (ADS)
Peckham, S. D.
2017-12-01
Standardized, deep descriptions of digital resources (e.g. data sets, computational models, software tools and publications) make it possible to develop user-friendly software systems that assist scientists with the discovery and appropriate use of these resources. Semantic metadata makes it possible for machines to take actions on behalf of humans, such as automatically identifying the resources needed to solve a given problem, retrieving them and then automatically connecting them (despite their heterogeneity) into a functioning workflow. Standardized model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, governing equations and the numerical methods used to solve them, discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. A carefully constructed, unambiguous and rules-based schema to address this problem, called the Geoscience Standard Names ontology will be presented that utilizes Semantic Web best practices and technologies. It has also been designed to work across science domains and to be readable by both humans and machines.
17. Baltimore through truss steel bridge (1905), built by the ...
17. Baltimore through truss steel bridge (1905), built by the American Bridge Company. The bridge is 15 to 20 feet wide, with a wooden deck, and connects the Sullivan Machine Co. with the Foundry. The enclosed bridge in the background was constructed ca. 1920, and connects the Chain Machine Building with its power plant, foundry, and pattern shop. - Sullivan Machinery Company, Main Street between Pearl & Water Streets, Claremont, Sullivan County, NH
NASA Astrophysics Data System (ADS)
Shcherba, V. E.; Grigoriev, A. V.; Averyanov, G. S.; Surikov, V. I.; Vedruchenko, V. P.; Galdin, N. S.; Trukhanova, D. A.
2017-08-01
The article analyzes the impact of the connecting liquid pipe length and diameter on consumables and power characteristics of the piston hybrid power machine with gas suction capacity. The following operating characteristics of the machine were constructed and analyzed: the average height of the liquid column in the jacket space; instantaneous velocity and height of the liquid column in the jacket space; the relative height of the liquid column in the jacket space; volumetric efficiency; indicator isothermal efficiency; flowrate in the pump section; relative pressure losses during suction; relative flowrate. The dependence of the instantaneous pressure in the work space and the suction space of the compressor section on the rotation angle of the crankshaft is determined for different values of the length and diameter of the connecting pipeline.
30 CFR 56.13021 - High-pressure hose connections.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false High-pressure hose connections. 56.13021... and Boilers § 56.13021 High-pressure hose connections. Except where automatic shutoff valves are used, safety chains or other suitable locking devices shall be used at connections to machines of high-pressure...
Process-based tolerance assessment of connecting rod machining process
NASA Astrophysics Data System (ADS)
Sharma, G. V. S. S.; Rao, P. Srinivasa; Surendra Babu, B.
2016-06-01
Process tolerancing based on the process capability studies is the optimistic and pragmatic approach of determining the manufacturing process tolerances. On adopting the define-measure-analyze-improve-control approach, the process potential capability index (Cp) and the process performance capability index (Cpk) values of identified process characteristics of connecting rod machining process are achieved to be greater than the industry benchmark of 1.33, i.e., four sigma level. The tolerance chain diagram methodology is applied to the connecting rod in order to verify the manufacturing process tolerances at various operations of the connecting rod manufacturing process. This paper bridges the gap between the existing dimensional tolerances obtained via tolerance charting and process capability studies of the connecting rod component. Finally, the process tolerancing comparison has been done by adopting a tolerance capability expert software.
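The capability indices cited above have standard definitions, Cp = (USL - LSL) / (6*sigma) and Cpk = min(USL - mu, mu - LSL) / (3*sigma); the short sketch below computes them from sample measurements and checks the 1.33 benchmark. The specification limits and data are placeholders.

# Standard definitions of the capability indices cited above, computed from
# sample measurements. Specification limits and data are placeholders.
import numpy as np

def capability(samples, lsl, usl):
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

bore_dia = np.random.default_rng(6).normal(loc=50.0, scale=0.004, size=100)
cp, cpk = capability(bore_dia, lsl=49.98, usl=50.02)
print(f"Cp={cp:.2f}, Cpk={cpk:.2f}, meets 1.33 benchmark: {min(cp, cpk) >= 1.33}")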
Gray, John
2017-01-01
Machine-to-machine (M2M) communication is a key enabling technology for industrial internet of things (IIoT)-empowered industrial networks, where machines communicate with one another for collaborative automation and intelligent optimisation. This new industrial computing paradigm features high-quality connectivity, ubiquitous messaging, and interoperable interactions between machines. However, manufacturing IIoT applications have specificities that distinguish them from many other internet of things (IoT) scenarios in machine communications. By highlighting the key requirements and the major technical gaps of M2M in industrial applications, this article describes a collaboration-oriented M2M (CoM2M) messaging mechanism focusing on flexible connectivity and discovery, ubiquitous messaging, and semantic interoperability that are well suited for the production line-scale interoperability of manufacturing applications. The designs toward machine collaboration and data interoperability at both the communication and semantic level are presented. Then, the application scenarios of the presented methods are illustrated with a proof-of-concept implementation in the PicknPack food packaging line. Eventually, the advantages and some potential issues are discussed based on the PicknPack practice. PMID:29165347
MACHINE SHOP, WEST BAY, DETAIL OF COLUMN, BEAM, CRANE RAIL, ...
MACHINE SHOP, WEST BAY, DETAIL OF COLUMN, BEAM, CRANE RAIL, AND TRUSS CONNECTION TO ERECTING SHOP, LOOKING NORTHWEST. - Southern Pacific, Sacramento Shops, Erecting Shop, 111 I Street, Sacramento, Sacramento County, CA
Cox process representation and inference for stochastic reaction-diffusion processes
NASA Astrophysics Data System (ADS)
Schnoerr, David; Grima, Ramon; Sanguinetti, Guido
2016-05-01
Complex behaviour in many systems arises from the stochastic interactions of spatially distributed particles or agents. Stochastic reaction-diffusion processes are widely used to model such behaviour in disciplines ranging from biology to the social sciences, yet they are notoriously difficult to simulate and calibrate to observational data. Here we use ideas from statistical physics and machine learning to provide a solution to the inverse problem of learning a stochastic reaction-diffusion process from data. Our solution relies on a non-trivial connection between stochastic reaction-diffusion processes and spatio-temporal Cox processes, a well-studied class of models from computational statistics. This connection leads to an efficient and flexible algorithm for parameter inference and model selection. Our approach shows excellent accuracy on numeric and real data examples from systems biology and epidemiology. Our work provides both insights into spatio-temporal stochastic systems, and a practical solution to a long-standing problem in computational modelling.
Medlin, John B.
1976-05-25
A charging machine for loading fuel slugs into the process tubes of a nuclear reactor includes a tubular housing connected to the process tube, a charging trough connected to the other end of the tubular housing, a device for loading the charging trough with a group of fuel slugs, means for equalizing the coolant pressure in the charging trough with the pressure in the process tubes, means for pushing the group of fuel slugs into the process tube and a latch and a seal engaging the last object in the group of fuel slugs to prevent the fuel slugs from being ejected from the process tube when the pusher is removed and to prevent pressure liquid from entering the charging machine.
Graph Representations of Flow and Transport in Fracture Networks using Machine Learning
NASA Astrophysics Data System (ADS)
Srinivasan, G.; Viswanathan, H. S.; Karra, S.; O'Malley, D.; Godinez, H. C.; Hagberg, A.; Osthus, D.; Mohd-Yusof, J.
2017-12-01
Flow and transport of fluids through fractured systems is governed by the properties and interactions at the micro-scale. Retaining information about the micro-structure such as fracture length, orientation, aperture and connectivity in mesh-based computational models results in solving for millions to billions of degrees of freedom and quickly renders the problem computationally intractable. Our approach depicts fracture networks graphically, by mapping fractures to nodes and intersections to edges, thereby greatly reducing computational burden. Additionally, we use machine learning techniques to build simulators on the graph representation, trained on data from the mesh-based high fidelity simulations to speed up computation by orders of magnitude. We demonstrate our methodology on ensembles of discrete fracture networks, dividing up the data into training and validation sets. Our machine learned graph-based solvers result in over 3 orders of magnitude speedup without any significant sacrifice in accuracy.
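A minimal sketch of the graph reduction described, fractures mapped to nodes and intersections to edges, is shown below with NetworkX; the intersection list is a placeholder that would normally come from the discrete fracture network geometry.

# Minimal sketch of the graph reduction described above: fractures -> nodes,
# intersections -> edges. The intersection list is a placeholder.
import networkx as nx

intersections = [("f1", "f2"), ("f2", "f3"), ("f3", "f5"), ("f4", "f5")]
G = nx.Graph()
G.add_edges_from(intersections)

# Query connectivity between fractures touching the inflow/outflow boundaries.
print(nx.has_path(G, "f1", "f5"))                 # True
print(nx.shortest_path(G, "f1", "f5"))            # ['f1', 'f2', 'f3', 'f5']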
Using connectome-based predictive modeling to predict individual behavior from brain connectivity
Shen, Xilin; Finn, Emily S.; Scheinost, Dustin; Rosenberg, Monica D.; Chun, Marvin M.; Papademetris, Xenophon; Constable, R Todd
2017-01-01
Neuroimaging is a fast developing research area where anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale datasets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: 1) feature selection, 2) feature summarization, 3) model building, and 4) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a significant amount of the variance in these measures. It has been demonstrated that the CPM protocol performs equivalently to or better than most of the existing approaches in brain-behavior prediction. However, because CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization would find it easy to implement the protocols. Depending on the volume of data to be processed, the protocol can take 10–100 minutes for model building, 1–48 hours for permutation testing, and 10–20 minutes for visualization of results. PMID:28182017
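A compact sketch of the four enumerated CPM steps, with a simple leave-one-out loop, follows. The positive-edge selection at p < 0.01 and the synthetic data are common illustrative choices rather than prescriptions from the protocol.

# Minimal sketch of the CPM steps listed above (feature selection, feature
# summarization, linear model building, evaluation), with leave-one-out CV.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n_sub, n_edges = 50, 1000
edges = rng.normal(size=(n_sub, n_edges))           # vectorized connectivity per subject
behavior = edges[:, :20].mean(axis=1) + 0.5 * rng.normal(size=n_sub)  # synthetic target

predictions = np.zeros(n_sub)
for i in range(n_sub):                               # leave-one-out cross-validation
    train = np.delete(np.arange(n_sub), i)
    # 1) feature selection: edges positively correlated with behaviour (p < 0.01)
    rs, ps = zip(*(pearsonr(edges[train, e], behavior[train]) for e in range(n_edges)))
    selected = (np.array(rs) > 0) & (np.array(ps) < 0.01)
    # 2) feature summarization: sum of selected edges per subject
    x_train = edges[train][:, selected].sum(axis=1)
    # 3) model building: one-variable linear fit
    slope, intercept = np.polyfit(x_train, behavior[train], 1)
    # 4) prediction in the left-out subject
    predictions[i] = slope * edges[i, selected].sum() + intercept

print("prediction r:", np.corrcoef(predictions, behavior)[0, 1])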
Zhang, Ying; Wang, Jun; Hao, Guan
2018-01-08
With the development of autonomous unmanned intelligent systems, such as the unmanned boats, unmanned planes and autonomous underwater vehicles, studies on Wireless Sensor-Actor Networks (WSANs) have attracted more attention. Network connectivity algorithms play an important role in data exchange, collaborative detection and information fusion. Due to the harsh application environment, abnormal nodes often appear, and the network connectivity will be prone to be lost. Network self-healing mechanisms have become critical for these systems. In order to decrease the movement overhead of the sensor-actor nodes, an autonomous connectivity restoration algorithm based on finite state machine is proposed. The idea is to identify whether a node is a critical node by using a finite state machine, and update the connected dominating set in a timely way. If an abnormal node is a critical node, the nearest non-critical node will be relocated to replace the abnormal node. In the case of multiple node abnormality, a regional network restoration algorithm is introduced. It is designed to reduce the overhead of node movements while restoration happens. Simulation results indicate the proposed algorithm has better performance on the total moving distance and the number of total relocated nodes compared with some other representative restoration algorithms.
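One ingredient of such schemes, identifying the critical (cut) nodes whose loss disconnects the network and picking a non-critical neighbour as a relocation candidate, can be sketched as below; the finite-state-machine bookkeeping and movement control of the full algorithm are not reproduced.

# Minimal sketch: find critical (cut) nodes whose failure disconnects the
# network, then choose a non-critical neighbour as the relocation candidate.
import networkx as nx

G = nx.Graph([(1, 2), (2, 3), (3, 4), (2, 5), (5, 6)])
critical = set(nx.articulation_points(G))
print("critical nodes:", critical)                 # {2, 3, 5} for this toy topology

failed = 3                                         # suppose a critical node fails
if failed in critical:
    candidates = [n for n in G.neighbors(failed) if n not in critical]
    print("relocate non-critical neighbour:", candidates)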
Zhang, Ying; Wang, Jun; Hao, Guan
2018-01-01
With the development of autonomous unmanned intelligent systems, such as the unmanned boats, unmanned planes and autonomous underwater vehicles, studies on Wireless Sensor-Actor Networks (WSANs) have attracted more attention. Network connectivity algorithms play an important role in data exchange, collaborative detection and information fusion. Due to the harsh application environment, abnormal nodes often appear, and the network connectivity will be prone to be lost. Network self-healing mechanisms have become critical for these systems. In order to decrease the movement overhead of the sensor-actor nodes, an autonomous connectivity restoration algorithm based on finite state machine is proposed. The idea is to identify whether a node is a critical node by using a finite state machine, and update the connected dominating set in a timely way. If an abnormal node is a critical node, the nearest non-critical node will be relocated to replace the abnormal node. In the case of multiple node abnormality, a regional network restoration algorithm is introduced. It is designed to reduce the overhead of node movements while restoration happens. Simulation results indicate the proposed algorithm has better performance on the total moving distance and the number of total relocated nodes compared with some other representative restoration algorithms. PMID:29316702
Advice Taking from Humans and Machines: An fMRI and Effective Connectivity Study.
Goodyear, Kimberly; Parasuraman, Raja; Chernyak, Sergey; Madhavan, Poornima; Deshpande, Gopikrishna; Krueger, Frank
2016-01-01
With new technological advances, advice can come from different sources such as machines or humans, but how individuals respond to such advice and the neural correlates involved need to be better understood. We combined functional MRI and multivariate Granger causality analysis with an X-ray luggage-screening task to investigate the neural basis and corresponding effective connectivity involved with advice utilization from agents framed as experts. Participants were asked to accept or reject good or bad advice from a human or machine agent with low reliability (high false alarm rate). We showed that unreliable advice decreased performance overall and participants interacting with the human agent had a greater depreciation of advice utilization during bad advice compared to the machine agent. These differences in advice utilization can be perceivably due to reevaluation of expectations arising from association of dispositional credibility for each agent. We demonstrated that differences in advice utilization engaged brain regions that may be associated with evaluation of personal characteristics and traits (precuneus, posterior cingulate cortex, temporoparietal junction) and interoception (posterior insula). We found that the right posterior insula and left precuneus were the drivers of the advice utilization network that were reciprocally connected to each other and also projected to all other regions. Our behavioral and neuroimaging results have significant implications for society because of progressions in technology and increased interactions with machines.
Detection of inter-turn short-circuit at start-up of induction machine based on torque analysis
NASA Astrophysics Data System (ADS)
Pietrowski, Wojciech; Górny, Konrad
2017-12-01
Recently, increased interest in new diagnostic methods in the field of induction machines has been observed. The research presented in the paper addresses the diagnostics of an induction machine based on torque pulsation under an inter-turn short-circuit during machine start-up. In the paper three numerical techniques were used: finite element analysis, signal analysis and artificial neural networks (ANN). The elaborated numerical model of the faulty machine consists of field, circuit and motion equations. The voltage-excited supply allowed the torque waveform during start-up to be determined. The inter-turn short-circuit was treated as a galvanic connection between two points of the stator winding. The waveforms were calculated for different numbers of shorted turns, from 0 to 55. Due to the non-stationary nature of the waveforms, a wavelet packet decomposition was used to analyse the torque. The obtained results of the analysis were used as the input vector for the ANN. The response of the neural network was the number of shorted turns in the stator winding. Special attention was paid to comparing the responses of a general regression neural network (GRNN) and a multi-layer perceptron neural network (MLP). Based on the results of the research, the efficiency of the developed algorithm can be inferred.
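A minimal sketch of the signal-analysis stage follows: a wavelet packet decomposition of a non-stationary torque waveform (PyWavelets), node energies as features, and a small neural network regressor (scikit-learn's MLP as a stand-in for the MLP/GRNN comparison). The wavelet family, decomposition depth, network size, and synthetic training data are assumptions.

# Minimal sketch: wavelet packet energies of a non-stationary torque waveform
# feed a small MLP regressor estimating the number of shorted turns.
# Wavelet family, depth, network size and data are illustrative assumptions.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def wp_energies(signal, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    return np.array([np.sum(node.data ** 2) for node in wp.get_level(level, "natural")])

rng = np.random.default_rng(8)
t = np.linspace(0, 1, 1024)
X, y = [], []
for shorted_turns in range(0, 56, 5):             # 0..55 shorted turns (synthetic)
    torque = np.sin(2 * np.pi * 50 * t) + 0.02 * shorted_turns * rng.normal(size=t.size)
    X.append(wp_energies(torque))
    y.append(shorted_turns)

mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
mlp.fit(np.array(X), np.array(y))
print(mlp.predict(np.array(X[:3])))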
Finite element computation on nearest neighbor connected machines
NASA Technical Reports Server (NTRS)
Mcaulay, A. D.
1984-01-01
Research aimed at faster, more cost effective parallel machines and algorithms for improving designer productivity with finite element computations is discussed. A set of 8 boards, containing 4 nearest neighbor connected arrays of commercially available floating point chips and substantial memory, are inserted into a commercially available machine. One-tenth Mflop (64 bit operation) processors provide an 89% efficiency when solving the equations arising in a finite element problem for a single variable regular grid of size 40 by 40 by 40. This is approximately 15 to 20 times faster than a much more expensive machine such as a VAX 11/780 used in double precision. The efficiency falls off as faster or more processors are envisaged because communication times become dominant. A novel successive overrelaxation algorithm which uses cyclic reduction in order to permit data transfer and computation to overlap in time is proposed.
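A serial NumPy illustration of the kind of relaxation solver discussed, a red-black successive over-relaxation sweep on a 2-D Poisson-type grid, is given below. Points of one colour depend only on the other colour, which is the property that allows computation and nearest-neighbour communication to overlap; the overlap itself is not modelled here.

# Serial illustration of red-black SOR on a 2-D Poisson-type grid. Points of
# one colour are updated together from the other colour's values; the
# computation/communication overlap of the parallel hardware is not modelled.
import numpy as np

n, omega, sweeps = 64, 1.8, 200
u = np.zeros((n, n))
f = np.ones((n, n))                              # right-hand side of the Poisson problem
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")

for _ in range(sweeps):
    for color in (0, 1):                          # red points, then black points
        mask = ((i + j) % 2 == color)
        mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False  # boundaries fixed
        neighbors = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                     + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        gauss_seidel = 0.25 * (neighbors - f)     # unit grid spacing assumed
        u[mask] = (1 - omega) * u[mask] + omega * gauss_seidel[mask]

print("solution norm:", np.linalg.norm(u))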
Direct generation of event-timing equations for generalized flow shop systems
NASA Astrophysics Data System (ADS)
Doustmohammadi, Ali; Kamen, Edward W.
1995-11-01
Flow shop production lines are very common in manufacturing systems such as car assemblies, manufacturing of electronic circuits, etc. In this paper, a systematic procedure is given for generating event-timing equations directly from the machine interconnections for a generalized flow shop system. The events considered here correspond to completion times of machine operations. It is assumed that the scheduling policy is cyclic (periodic). For a given flow shop system, the open connection dynamics of the machines are derived first. Then interconnection matrices characterizing the routing of parts in the system are obtained from the given system configuration. The open connection dynamics of the machines and the interconnection matrices are then combined together to obtain the overall system dynamics given by an equation of the form X(k+1) = A(k)X(k) ⊕ B(k)V(k+1) defined over the max-plus algebra. Here the state X(k) is the vector of completion times and V(k+1) is an external input vector consisting of the arrival times of parts. It is shown that if the machines are numbered in an appropriate way and the states are selected according to certain rules, the matrix A(k) will be in a special (canonical) form. The model obtained here is useful for the analysis of system behavior and for carrying out simulations. In particular, the canonical form of A(k) enables one to study system bottlenecks and the minimal cycle time during steady-state operation. The approach presented in this paper is believed to be more straightforward compared to existing max-plus algebra formulations of flow shop systems. In particular, three advantages of the proposed approach are: (1) it yields timing equations directly from the system configuration and hence there is no need to first derive a Petri net or a digraph equivalent of the system; (2) a change in the system configuration only affects the interconnection matrices and hence does not require rederiving the entire set of equations; (3) the system model is easily put into code using existing software packages such as MATLAB.
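In the max-plus algebra, "addition" is the maximum and "multiplication" is ordinary addition, so the timing recursion above can be evaluated directly as shown in the sketch below. The matrices are toy placeholders, not a real production line.

# Small sketch of evaluating X(k+1) = A X(k) (+) B V(k+1) in max-plus algebra:
# "addition" is max, "multiplication" is ordinary +, eps = -inf marks the
# absence of a connection. Matrices are toy placeholders.
import numpy as np

eps = -np.inf

def maxplus_matvec(M, v):
    # (M (x) v)_i = max_j (M_ij + v_j)
    return np.max(M + v[None, :], axis=1)

A = np.array([[5.0, eps],      # processing/transfer times between machine events
              [3.0, 4.0]])
B = np.array([[2.0], [eps]])   # coupling of part-arrival times into the line

x = np.array([0.0, 0.0])       # completion times at cycle k
v_next = np.array([1.0])       # arrival time of the next part
x_next = np.maximum(maxplus_matvec(A, x), maxplus_matvec(B, v_next))
print(x_next)                  # completion times at cycle k+1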
Minati, Ludovico; Cercignani, Mara; Chan, Dennis
2013-10-01
Graph theory-based analyses of brain network topology can be used to model the spatiotemporal correlations in neural activity detected through fMRI, and such approaches have wide-ranging potential, from detection of alterations in preclinical Alzheimer's disease through to command identification in brain-machine interfaces. However, due to prohibitive computational costs, graph-based analyses to date have principally focused on measuring connection density rather than mapping the topological architecture in full by exhaustive shortest-path determination. This paper outlines a solution to this problem through parallel implementation of Dijkstra's algorithm in programmable logic. The processor design is optimized for large, sparse graphs and provided in full as synthesizable VHDL code. An acceleration factor between 15 and 18 is obtained on a representative resting-state fMRI dataset, and maps of Euclidean path length reveal the anticipated heterogeneous cortical involvement in long-range integrative processing. These results enable high-resolution geodesic connectivity mapping for resting-state fMRI in patient populations and real-time geodesic mapping to support identification of imagined actions for fMRI-based brain-machine interfaces. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
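For reference, a plain software version of the shortest-path computation being accelerated might look as follows. This is the textbook Dijkstra algorithm on a sparse adjacency list; the paper's contribution is its parallel implementation in programmable logic (VHDL), which this Python sketch does not attempt to reproduce.

```python
# Reference (software) Dijkstra on a sparse graph, included only to make the
# computation explicit; edge weights could be, e.g., inverse connectivity strengths.
import heapq

def dijkstra(adj, source):
    """adj: {node: [(neighbor, weight), ...]}; returns shortest path lengths."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip it
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Tiny example graph.
adj = {0: [(1, 1.0), (2, 4.0)], 1: [(2, 1.5)], 2: []}
print(dijkstra(adj, 0))   # {0: 0.0, 1: 1.0, 2: 2.5}
```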
Fronto-Temporal Connectivity Predicts ECT Outcome in Major Depression.
Leaver, Amber M; Wade, Benjamin; Vasavada, Megha; Hellemann, Gerhard; Joshi, Shantanu H; Espinoza, Randall; Narr, Katherine L
2018-01-01
Electroconvulsive therapy (ECT) is arguably the most effective available treatment for severe depression. Recent studies have used MRI data to predict clinical outcome to ECT and other antidepressant therapies. One challenge facing such studies is selecting from among the many available metrics, which characterize complementary and sometimes non-overlapping aspects of brain function and connectomics. Here, we assessed the ability of aggregated, functional MRI metrics of basal brain activity and connectivity to predict antidepressant response to ECT using machine learning. A radial support vector machine was trained using arterial spin labeling (ASL) and blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) metrics from n = 46 (26 female, mean age 42) depressed patients prior to ECT (majority right-unilateral stimulation). Image preprocessing was applied using standard procedures, and metrics included cerebral blood flow in ASL, and regional homogeneity, fractional amplitude of low-frequency modulations, and graph theory metrics (strength, local efficiency, and clustering) in BOLD data. A 5-repeated 5-fold cross-validation procedure with nested feature-selection validated model performance. Linear regressions were applied post hoc to aid interpretation of discriminative features. The range of balanced accuracy in models performing statistically above chance was 58-68%. Prediction accuracy was slightly higher for non-responders than for responders (maximum performance 74% and 64%, respectively). Several features were consistently selected across cross-validation folds, mostly within frontal and temporal regions. Among these were connectivity strength among: a fronto-parietal network [including left dorsolateral prefrontal cortex (DLPFC)], motor and temporal networks (near ECT electrodes), and/or subgenual anterior cingulate cortex (sgACC). Our data indicate that pattern classification of multimodal fMRI metrics can successfully predict ECT outcome, particularly for individuals who will not respond to treatment. Notably, connectivity with networks highly relevant to ECT and depression was consistently selected as an important predictive feature. These included the left DLPFC and the sgACC, which are both targets of other neurostimulation therapies for depression, as well as connectivity between motor and right temporal cortices near electrode sites. Future studies that probe additional functional and structural MRI metrics and other patient characteristics may further improve the predictive power of these and similar models.
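A minimal sketch of the evaluation scheme described above, using scikit-learn: a radial (RBF) support vector machine scored with repeated 5-fold cross-validation and balanced accuracy. The feature matrix and labels are random placeholders rather than the ASL/BOLD metrics from the study, and the nested feature selection step is omitted.

```python
# Sketch of the classification/validation pipeline, assuming placeholder data.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((46, 200))        # 46 patients x 200 imaging features
y = rng.integers(0, 2, size=46)           # 1 = ECT responder, 0 = non-responder

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
print(scores.mean(), scores.std())        # chance-level with random features
```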
A Boltzmann machine for the organization of intelligent machines
NASA Technical Reports Server (NTRS)
Moed, Michael C.; Saridis, George N.
1989-01-01
In the present technological society, there is a major need to build machines that would execute intelligent tasks operating in uncertain environments with minimum interaction with a human operator. Although some designers have built smart robots, utilizing heuristic ideas, there is no systematic approach to design such machines in an engineering manner. Recently, cross-disciplinary research from the fields of computers, systems AI and information theory has served to set the foundations of the emerging area of the design of intelligent machines. Since 1977 Saridis has been developing an approach, defined as Hierarchical Intelligent Control, designed to organize, coordinate and execute anthropomorphic tasks by a machine with minimum interaction with a human operator. This approach utilizes analytical (probabilistic) models to describe and control the various functions of the intelligent machine structured by the intuitively defined principle of Increasing Precision with Decreasing Intelligence (IPDI) (Saridis 1979). This principle, even though it resembles the managerial structure of organizational systems (Levis 1988), has been derived on an analytic basis by Saridis (1988). The purpose is to derive analytically a Boltzmann machine suitable for optimal connection of nodes in a neural net (Fahlman, Hinton, Sejnowski, 1985). Then this machine will serve to search for the optimal design of the organization level of an intelligent machine. In order to accomplish this, some mathematical theory of intelligent machines will first be outlined. Then some definitions of the variables associated with the principle, like machine intelligence, machine knowledge, and precision, will be given (Saridis, Valavanis 1988). Then a procedure to establish the Boltzmann machine on an analytic basis will be presented and illustrated by an example in designing the organization level of an Intelligent Machine. A new search technique, the Modified Genetic Algorithm, is presented and proved to converge to the minimum of a cost function. Finally, simulations will show the effectiveness of a variety of search techniques for the intelligent machine.
NASA Astrophysics Data System (ADS)
Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.
2017-12-01
Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.
Torque-balanced vibrationless rotary coupling
Miller, Donald M.
1980-01-01
This disclosure describes a torque-balanced vibrationless rotary coupling for transmitting rotary motion without unwanted vibration into the spindle of a machine tool. A drive member drives a driven member using flexible connecting loops which are connected tangentially and at diametrically opposite connecting points through a free floating ring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fukami, Tadashi; Imamura, Michinori; Kaburaki, Yuichi
1995-12-31
A new single-phase capacitor self-excited induction generator with self-regulating feature is presented. The new generator consists of a squirrel cage three-phase induction machine and three capacitors connected in series and parallel with a single phase load. The voltage regulation of this generator is very small due to the effect of the three capacitors. Moreover, since a Y-connected stator winding is employed, the waveform of the output voltage becomes sinusoidal. In this paper the system configuration and the operating principle of the new generator are explained, and the basic characteristics are also investigated by means of a simple analysis and experiments with a laboratory machine.
Force reflecting hand controller
NASA Technical Reports Server (NTRS)
Mcaffee, Douglas A. (Inventor); Snow, Edward R. (Inventor); Townsend, William T. (Inventor)
1993-01-01
A universal input device for interfacing a human operator with a slave machine such as a robot or the like includes a plurality of serially connected mechanical links extending from a base. A handgrip is connected to the mechanical links distal from the base such that a human operator may grasp the handgrip and control the position thereof relative to the base through the mechanical links. A plurality of rotary joints is arranged to connect the mechanical links together to provide at least three translational degrees of freedom and at least three rotational degrees of freedom of motion of the handgrip relative to the base. A cable and pulley assembly for each joint is connected to a corresponding motor for transmitting forces from the slave machine to the handgrip to provide kinesthetic feedback to the operator and for producing control signals that may be transmitted from the handgrip to the slave machine. The device gives excellent kinesthetic feedback, high-fidelity force/torque feedback, a kinematically simple structure, mechanically decoupled motion in all six degrees of freedom, and zero backlash. The device also has a much larger work envelope, greater stiffness and responsiveness, smaller stowage volume, and better overlap of the human operator's range of motion than previous designs.
Multiscale modeling of brain dynamics: from single neurons and networks to mathematical tools.
Siettos, Constantinos; Starke, Jens
2016-09-01
The extreme complexity of the brain naturally requires mathematical modeling approaches on a large variety of scales; the spectrum ranges from single neuron dynamics over the behavior of groups of neurons to neuronal network activity. Thus, the connection between the microscopic scale (single neuron activity) and macroscopic behavior (emergent behavior of the collective dynamics), and vice versa, is a key to understanding the brain in its complexity. In this work, we attempt a review of a wide range of approaches, ranging from the modeling of single neuron dynamics to machine learning. The models include biophysical as well as data-driven phenomenological models. The discussed models include Hodgkin-Huxley, FitzHugh-Nagumo, coupled oscillators (Kuramoto oscillators, Rössler oscillators, and the Hindmarsh-Rose neuron), Integrate and Fire, networks of neurons, and neural field equations. In addition to the mathematical models, important mathematical methods in multiscale modeling and reconstruction of the causal connectivity are sketched. The methods include linear and nonlinear tools from statistics, data analysis, and time series analysis up to differential equations, dynamical systems, and bifurcation theory, including Granger causal connectivity analysis, phase synchronization connectivity analysis, principal component analysis (PCA), independent component analysis (ICA), manifold learning algorithms such as ISOMAP and diffusion maps, and equation-free techniques. WIREs Syst Biol Med 2016, 8:438-458. doi: 10.1002/wsbm.1348 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.
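As a concrete example of one of the single-neuron models listed above, the following sketch integrates the FitzHugh-Nagumo equations with SciPy. The parameter values are common textbook choices and are assumptions, not values taken from the review.

```python
# A minimal sketch of the FitzHugh-Nagumo model: a fast voltage-like variable v
# and a slow recovery variable w, driven by a constant input current I.
import numpy as np
from scipy.integrate import solve_ivp

def fitzhugh_nagumo(t, state, I=0.5, a=0.7, b=0.8, eps=0.08):
    v, w = state
    dv = v - v**3 / 3.0 - w + I          # fast membrane-potential variable
    dw = eps * (v + a - b * w)           # slow recovery variable
    return [dv, dw]

sol = solve_ivp(fitzhugh_nagumo, (0.0, 200.0), [-1.0, 1.0],
                t_eval=np.linspace(0.0, 200.0, 2000))
print(sol.y[0, -5:])   # last few samples of the oscillating membrane variable
```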
Solving Navier-Stokes equations on a massively parallel processor; The 1 GFLOP performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saati, A.; Biringen, S.; Farhat, C.
This paper reports on experience in solving large-scale fluid dynamics problems on the Connection Machine model CM-2. The authors have implemented a parallel version of the MacCormack scheme for the solution of the Navier-Stokes equations. By using triad floating point operations and reducing the number of interprocessor communications, they have achieved a sustained performance rate of 1.42 GFLOPS.
Tolerance measurements on internal- and external-hexagon implants.
Braian, Michael; De Bruyn, Hugo; Fransson, Håkan; Christersson, Cecilia; Wennerberg, Ann
2014-01-01
To measure the horizontal machining tolerances of the interface between internal- and external-hexagon implants and analogs with corresponding components after delivery from the manufacturer. These values may be a valuable tool for evaluating increasing misfit caused by fabrication, processing, and wear. Seven implants and seven analogs with external- and internal-hexagon connections (Biomet 3i) with corresponding prefabricated gold cylinders and gold screws, prefabricated cylindric plastic cylinders, and laboratory screws were studied. One set of components from the external and internal groups was measured manually and digitally. Measurements from the test subjects were compared with identical measurements from the virtual model to obtain threshold values. The virtual model was then used to obtain optimally oriented cuts. The horizontal machining tolerances for castable plastic abutments on external implants were 12 ± 89 μm, and for internal implants they were 86 ± 47 μm. Tolerance measurements on prefabricated gold abutments for external implants were 44 ± 9 μm, and for internal implants they were 58 ± 28 μm. The groups with metallic components showed the smallest tolerance at < 50 μm for the external group and < 90 μm for the internal group. The prefabricated plastic cylinder groups ranged from < 100 μm for external and < 130 μm for internal connection.
A formal protocol test procedure for the Survivable Adaptable Fiber Optic Embedded Network (SAFENET)
NASA Astrophysics Data System (ADS)
High, Wayne
1993-03-01
This thesis focuses upon a new method for verifying the correct operation of a complex, high speed fiber optic communication network. These networks are of growing importance to the military because of their increased connectivity, survivability, and reconfigurability. With the introduction of and increased dependence on sophisticated software and protocols, it is essential that their operation be correct. Because of the speed and complexity of fiber optic networks being designed today, they are becoming increasingly difficult to test. Previously, testing was accomplished by application of conformance test methods which had little connection with an implementation's specification. The major goal of conformance testing is to ensure that the implementation of a profile is consistent with its specification. Formal specification is needed to ensure that the implementation performs its intended operations while exhibiting desirable behaviors. The new conformance test method presented is based upon the System of Communicating Machine model which uses a formal protocol specification to generate a test sequence. The major contribution of this thesis is the application of the System of Communicating Machine model to formal profile specifications of the Survivable Adaptable Fiber Optic Embedded Network (SAFENET) standard, which results in the derivation of test sequences for a SAFENET profile. The results of applying this new method to SAFENET's OSI and Lightweight profiles are presented.
Learning pattern recognition and decision making in the insect brain
NASA Astrophysics Data System (ADS)
Huerta, R.
2013-01-01
We revise the current model of learning pattern recognition in the Mushroom Bodies of insects using current experimental knowledge about the location of learning, olfactory coding and connectivity. We show that it is possible to have an efficient pattern recognition device based on the architecture of the Mushroom Bodies, sparse code, mutual inhibition and Hebbian learning only in the connections from the Kenyon cells to the output neurons. We also show that, despite the conventional wisdom that artificial neural networks are the bioinspired model of the brain, the Mushroom Bodies actually closely resemble Support Vector Machines (SVMs). The derived SVM learning rules are situated in the Mushroom Bodies, are nearly identical to standard Hebbian rules, and require inhibition in the output. A very particular prediction of the model is that random elimination of the Kenyon cells in the Mushroom Bodies does not impair the ability to recognize odorants previously learned.
Machine learning and social network analysis applied to Alzheimer's disease biomarkers.
Di Deco, Javier; González, Ana M; Díaz, Julia; Mato, Virginia; García-Frank, Daniel; Álvarez-Linera, Juan; Frank, Ana; Hernández-Tamames, Juan A
2013-01-01
Because the number of deaths due to Alzheimer's disease is increasing, scientists have a strong interest in early-stage diagnosis of this disease. Alzheimer's patients show different kinds of brain alterations, such as morphological, biochemical, functional, etc. Currently, magnetic resonance imaging techniques make it possible to obtain a huge number of biomarkers, and it is difficult to appraise which of them can explain more properly how the pathology evolves as distinct from normal ageing. Machine Learning methods facilitate an efficient analysis of complex data and can be used to discover which biomarkers are more informative. Moreover, automatic models can learn from historical data to suggest the diagnosis of new patients. Social Network Analysis (SNA) views social relationships in terms of network theory consisting of nodes and connections. The resulting graph-based structures are often very complex; there can be many kinds of connections between the nodes. SNA has emerged as a key technique in modern sociology. It has also gained a significant following in medicine, anthropology, biology, information science, etc., and has become a popular topic of speculation and study. This paper presents a review of machine learning and SNA techniques and then a new approach to analyze the magnetic resonance imaging biomarkers with these techniques, obtaining relevant relationships that can explain the different phenotypes in dementia, in particular, different stages of Alzheimer's disease.
Embedded control system for computerized franking machine
NASA Astrophysics Data System (ADS)
Shi, W. M.; Zhang, L. B.; Xu, F.; Zhan, H. W.
2007-12-01
This paper presents a novel control system for a franking machine. A methodology for operating a franking machine using functional controls consisting of connection, configuration and the franking electromechanical drive is studied. A set of enabling technologies for synthesizing postage management software architectures for microprocessor-based embedded systems is proposed. The cryptographic algorithm that calculates mail items is analyzed to enhance postal indicia accountability and security. The study indicated that the franking machine provides reliability, performance and flexibility in printing mail items.
TELNET under Single-Connection TCP Specification
1976-02-02
17. TRACTOR ENGINE POWERING SHAFT SYSTEM IN FOREGROUND, BELT CONNECTS ...
17. TRACTOR ENGINE POWERING SHAFT SYSTEM IN FOREGROUND, BELT CONNECTS WITH MAIN SHAFT LOOKING EAST. - W. A. Young & Sons Foundry & Machine Shop, On Water Street along Monongahela River, Rices Landing, Greene County, PA
Shedding Light on Synergistic Chemical Genetic Connections with Machine Learning.
Ekins, Sean; Siqueira-Neto, Jair Lage
2015-12-23
Machine learning can be used to predict compounds acting synergistically, and this could greatly expand the universe of available potential treatments for diseases that are currently hidden in the dark chemical matter. Copyright © 2015 Elsevier Inc. All rights reserved.
Biotea: RDFizing PubMed Central in support for the paper as an interface to the Web of Data
2013-01-01
Background The World Wide Web has become a dissemination platform for scientific and non-scientific publications. However, most of the information remains locked up in discrete documents that are not always interconnected or machine-readable. The connectivity tissue provided by RDF technology has not yet been widely used to support the generation of self-describing, machine-readable documents. Results In this paper, we present our approach to the generation of self-describing machine-readable scholarly documents. We understand the scientific document as an entry point and interface to the Web of Data. We have semantically processed the full-text, open-access subset of PubMed Central. Our RDF model and resulting dataset make extensive use of existing ontologies and semantic enrichment services. We expose our model, services, prototype, and datasets at http://biotea.idiginfo.org/ Conclusions The semantic processing of biomedical literature presented in this paper embeds documents within the Web of Data and facilitates the execution of concept-based queries against the entire digital library. Our approach delivers a flexible and adaptable set of tools for metadata enrichment and semantic processing of biomedical documents. Our model delivers a semantically rich and highly interconnected dataset with self-describing content so that software can make effective use of it. PMID:23734622
Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models
NASA Astrophysics Data System (ADS)
Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro
2017-10-01
Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here, we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.
Parallel Processing and Scientific Applications
1992-11-30
Introduction to Parallel Computing
1992-05-01
...independent memory units and connecting them to the processors by an interconnection network. Many different interconnection schemes have been considered, and... connected to the same processor at the same time. Crossbar switching networks are still too expensive to be practical for connecting large numbers of...
Causal network in a deafferented non-human primate brain.
Balasubramanian, Karthikeyan; Takahashi, Kazutaka; Hatsopoulos, Nicholas G
2015-01-01
De-afferented/efferented neural ensembles can undergo causal changes when interfaced to neuroprosthetic devices. These changes occur via recruitment or isolation of neurons, alterations in functional connectivity within the ensemble and/or changes in the role of neurons, i.e., excitatory/inhibitory. In this work, the emergence of a causal network and changes in its dynamics are demonstrated for a deafferented brain region exposed to BMI (brain-machine interface) learning. The BMI controlled a robot for reach-and-grasp behavior; the motor cortical regions used for the BMI were deafferented due to chronic amputation, and ensembles of neurons were decoded for velocity control of the multi-DOF robot. A generalized linear model-framework based Granger causality (GLM-GC) technique was used to estimate the ensemble connectivity. Model selection was based on the AIC (Akaike Information Criterion).
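The Granger-causality idea underlying the GLM-GC analysis can be illustrated with a simplified linear version: a signal y is said to be Granger-caused by x if including x's past reduces the prediction error for y beyond what y's own past achieves. The sketch below uses ordinary least squares on synthetic data and stands in for, rather than reproduces, the point-process GLM framework used in the study.

```python
# Simplified linear Granger-causality check on synthetic data (not the GLM-GC method).
import numpy as np

def ar_design(series_list, lags, n):
    """Stack lagged values of each series into a design matrix (rows = time points)."""
    return np.array([np.concatenate([s[t - lags:t] for s in series_list])
                     for t in range(lags, n)])

def residual_variance(P, Y):
    """Least-squares fit of Y on P; return the variance of the residuals."""
    coef, *_ = np.linalg.lstsq(P, Y, rcond=None)
    return np.var(Y - P @ coef)

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = 0.8 * np.roll(x, 1) + 0.1 * rng.standard_normal(1000)   # y is driven by past x

lags, n = 5, len(y)
Y = y[lags:]
var_reduced = residual_variance(ar_design([y], lags, n), Y)   # y's own past only
var_full = residual_variance(ar_design([y, x], lags, n), Y)   # y's and x's past
print(var_reduced, var_full)   # the full model should have a much smaller variance
```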
UltraNet Target Parameters. Chapter 1
NASA Technical Reports Server (NTRS)
Kislitzin, Katherine T.; Blaylock, Bruce T. (Technical Monitor)
1992-01-01
The UltraNet is a high speed network capable of rates up to one gigabit per second. It is a hub based network with four optical fiber links connecting each hub. Each link can carry up to 256 megabits of data, and the hub backplane is capable of one gigabit aggregate throughput. Host connections to the hub may be fiber, coax, or channel based. Bus based machines have adapter boards that connect to transceivers in the hub, while channel based machines use a personality module in the hub. One way that the UltraNet achieves its high transfer rates is by off-loading the protocol processing from the hosts to special purpose protocol engines in the UltraNet hubs. In addition, every hub has a PC connected to it by StarLAN for network management purposes. Although there is hub resident and PC resident UltraNet software, this document treats only the host resident UltraNet software.
Computer network defense system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urias, Vincent; Stout, William M. S.; Loverro, Caleb
A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, enabling protecting the group of the virtual machines from actions performed by the adversary.
A review of supervised machine learning applied to ageing research.
Fabris, Fabio; Magalhães, João Pedro de; Freitas, Alex A
2017-04-01
Broadly speaking, supervised machine learning is the computational task of learning correlations between variables in annotated data (the training set), and using this information to create a predictive model capable of inferring annotations for new data, whose annotations are not known. Ageing is a complex process that affects nearly all animal species. This process can be studied at several levels of abstraction, in different organisms and with different objectives in mind. Not surprisingly, the diversity of the supervised machine learning algorithms applied to answer biological questions reflects the complexities of the underlying ageing processes being studied. Many works using supervised machine learning to study the ageing process have been recently published, so it is timely to review these works and discuss their main findings and weaknesses. In summary, the main findings of the reviewed papers are: the link between specific types of DNA repair and ageing; ageing-related proteins tend to be highly connected and seem to play a central role in molecular pathways; ageing/longevity is linked with autophagy and apoptosis, nutrient receptor genes, and copper and iron ion transport. Additionally, several biomarkers of ageing were found by machine learning. Despite some interesting machine learning results, we also identified a weakness of current works on this topic: only one of the reviewed papers has corroborated the computational results of machine learning algorithms through wet-lab experiments. In conclusion, supervised machine learning has contributed to advance our knowledge and has provided novel insights on ageing, yet future work should place a greater emphasis on validating the predictions.
Switching Circuit for Shop Vacuum System
NASA Technical Reports Server (NTRS)
Burley, R. K.
1987-01-01
No internal connections to machine tools required. Switching circuit controls vacuum system that draws debris from grinders and sanders in machine shop. Circuit automatically turns on vacuum system whenever at least one sander or grinder is operating. Debris safely removed, even when operator neglects to turn on vacuum system manually. Pickup coils sense alternating magnetic fields just outside operating machines. Signal from any coil or combination of coils causes vacuum system to be turned on.
2001-09-01
Testing is performed between two machines connected by either a 100 Mbps Ethernet connection or a 56K modem connection. This testing is performed... and defined as follows: • The available bandwidth is set at two different levels (100 Mbps Ethernet and 56K modem). • The packet size is set... These two connections represent the target 100 Mbps high end and 56 kbps low end of anticipated client connections in web-based...
Forecasting of Machined Surface Waviness on the Basis of Self-oscillations Analysis
NASA Astrophysics Data System (ADS)
Belov, E. B.; Leonov, S. L.; Markov, A. M.; Sitnikov, A. A.; Khomenko, V. A.
2017-01-01
The paper states the problem of ensuring the quality of the geometrical characteristics of machined surfaces, which makes it necessary to forecast the occurrence and magnitude of oscillations appearing in the course of mechanical treatment. Objectives and tasks of the research are formulated. Sources of oscillation onset are defined: these are coordinate connections and the nonlinear dependence of cutting force on cutting velocity. A mathematical model for forecasting steady-state self-oscillations is investigated. The equation of the cutter tip motion is a system of two second-order nonlinear differential equations. The paper shows an algorithm describing a harmonic linearization method which allows for a significant reduction of the calculation time. In order to do that, it is necessary to determine the amplitude of oscillations, the frequency and the steady component of the first harmonic. Software which allows obtaining data on surface waviness parameters is described. The paper studies an example of the use of the developed model in semi-finished lathe machining of a shaft made from steel 40H, which is part of the BelAZ wheel electric actuator unit. Recommendations on eliminating self-oscillations in the process of shaft cutting and correcting defects of the surface waviness are given.
Encoding the local connectivity patterns of fMRI for cognitive task and state classification.
Onal Ertugrul, Itir; Ozay, Mete; Yarman Vural, Fatos T
2018-06-15
In this work, we propose a novel framework to encode the local connectivity patterns of brain, using Fisher vectors (FV), vector of locally aggregated descriptors (VLAD) and bag-of-words (BoW) methods. We first obtain local descriptors, called mesh arc descriptors (MADs) from fMRI data, by forming local meshes around anatomical regions, and estimating their relationship within a neighborhood. Then, we extract a dictionary of relationships, called brain connectivity dictionary by fitting a generative Gaussian mixture model (GMM) to a set of MADs, and selecting codewords at the mean of each component of the mixture. Codewords represent connectivity patterns among anatomical regions. We also encode MADs by VLAD and BoW methods using k-Means clustering. We classify cognitive tasks using the Human Connectome Project (HCP) task fMRI dataset and cognitive states using the Emotional Memory Retrieval (EMR). We train support vector machines (SVMs) using the encoded MADs. Results demonstrate that, FV encoding of MADs can be successfully employed for classification of cognitive tasks, and outperform VLAD and BoW representations. Moreover, we identify the significant Gaussians in mixture models by computing energy of their corresponding FV parts, and analyze their effect on classification accuracy. Finally, we suggest a new method to visualize the codewords of the learned brain connectivity dictionary.
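To make one of the encodings concrete, the following sketch implements a basic VLAD encoder: local descriptors are assigned to k-means codewords and the per-codeword residuals are aggregated and normalized into a fixed-length vector. The random descriptors stand in for mesh arc descriptors (MADs); the codebook size and normalization choices are assumptions, not the paper's settings.

```python
# A minimal VLAD encoding sketch on placeholder descriptors.
import numpy as np
from sklearn.cluster import KMeans

def vlad_encode(descriptors, kmeans):
    centers = kmeans.cluster_centers_
    assign = kmeans.predict(descriptors)
    vlad = np.zeros_like(centers)
    for k in range(centers.shape[0]):
        members = descriptors[assign == k]
        if len(members):
            vlad[k] = (members - centers[k]).sum(axis=0)    # residual aggregation
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))            # power normalization
    return (vlad / (np.linalg.norm(vlad) + 1e-12)).ravel()  # L2 normalization

rng = np.random.default_rng(0)
train_descriptors = rng.standard_normal((5000, 16))         # dictionary-learning set
kmeans = KMeans(n_clusters=32, n_init=10, random_state=0).fit(train_descriptors)

sample_descriptors = rng.standard_normal((90, 16))          # one subject's "MADs"
code = vlad_encode(sample_descriptors, kmeans)               # fixed-length feature
print(code.shape)                                            # (32 * 16,) = (512,)
```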
Kang, Byeong Keun; Kim, June Sic; Ryun, Seokyun; Chung, Chun Kee
2018-01-01
Most brain-machine interface (BMI) studies have focused only on the active state, in which a BMI user performs specific movement tasks. Therefore, models developed for predicting movements were optimized only for the active state. The models may not be suitable in the idle state during resting. This potential maladaptation could lead to a sudden accident or unintended movement resulting from prediction error. Prediction of movement intention is important to develop a more efficient and reasonable BMI system which could be selectively operated depending on the user's intention. Physical movement is performed through the serial change of brain states: idle, planning, execution, and recovery. The motor networks in the primary motor cortex and the dorsolateral prefrontal cortex are involved in these movement states. Neuronal communication differs between the states. Therefore, connectivity may change depending on the states. In this study, we investigated the temporal dynamics of connectivity in the dorsolateral prefrontal cortex and primary motor cortex to predict movement intention. Movement intention was successfully predicted by connectivity dynamics, which may reflect changes in movement states. Furthermore, the dorsolateral prefrontal cortex is crucial in predicting movement intention, to which the primary motor cortex contributes. These results suggest that brain connectivity is an excellent approach for predicting movement intention.
Principle of maximum entropy for reliability analysis in the design of machine components
NASA Astrophysics Data System (ADS)
Zhang, Yimin
2018-03-01
We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
Unsupervised classification of major depression using functional connectivity MRI.
Zeng, Ling-Li; Shen, Hui; Liu, Li; Hu, Dewen
2014-04-01
The current diagnosis of psychiatric disorders including major depressive disorder based largely on self-reported symptoms and clinical signs may be prone to patients' behaviors and psychiatrists' bias. This study aims at developing an unsupervised machine learning approach for the accurate identification of major depression based on single resting-state functional magnetic resonance imaging scans in the absence of clinical information. Twenty-four medication-naive patients with major depression and 29 demographically similar healthy individuals underwent resting-state functional magnetic resonance imaging. We first clustered the voxels within the perigenual cingulate cortex into two subregions, a subgenual region and a pregenual region, according to their distinct resting-state functional connectivity patterns and showed that a maximum margin clustering-based unsupervised machine learning approach extracted sufficient information from the subgenual cingulate functional connectivity map to differentiate depressed patients from healthy controls with a group-level clustering consistency of 92.5% and an individual-level classification consistency of 92.5%. It was also revealed that the subgenual cingulate functional connectivity network with the highest discriminative power primarily included the ventrolateral and ventromedial prefrontal cortex, superior temporal gyri and limbic areas, indicating that these connections may play critical roles in the pathophysiology of major depression. The current study suggests that subgenual cingulate functional connectivity network signatures may provide promising objective biomarkers for the diagnosis of major depression and that maximum margin clustering-based unsupervised machine learning approaches may have the potential to inform clinical practice and aid in research on psychiatric disorders. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Golikov, N. S.; Timofeev, I. P.
2018-05-01
Increasing the efficiency of jaw crushers rests on substantiating rational kinematics and stiffening the elements of the machine. Substantiating rational kinematics includes establishing the connection between the operating mode parameters of the crusher and its technical characteristics. The main purpose of this research is precisely to establish such a connection. This article therefore presents an analytical procedure for obtaining the connection between the operating mode parameters of the crusher and its capacity. Theoretical, empirical and semi-empirical methods of capacity determination of a single-toggle jaw crusher are given, taking into account the physico-mechanical properties of the crushed material and the kinematics of the working mechanism. In developing the mathematical model, the method of closed vector polygons by V. A. Zinoviev was used. The expressions obtained in the article make it possible to solve important scientific and technical problems connected with finding the rational kinematics of the jaw crusher mechanism, carrying out a comparative assessment of different crushers and giving recommendations on updating existing jaw crushers.
``Diagonalization'' of a compound Atwood machine
NASA Astrophysics Data System (ADS)
Crawford, Frank S.
1987-06-01
We consider a simple Atwood machine consisting of a massless frictionless pulley no. 0 supporting two masses m1 and m2 connected by a massless flexible string. We show that the string that supports massless pulley no. 0 ``thinks'' it is simply supporting a mass m0, with m0=4m1m2/(m1+m2). This result, together with Einstein's equivalence principle, allows us to solve easily those compound Atwood machines created by replacing one or both of m1 and m2 in machine no. 0 by an Atwood machine. We may then replace the masses in these new machines by machines, etc. The complete solution can be written down immediately, without solving simultaneous equations. Finally we give the effective mass of an Atwood machine whose pulley has nonzero mass and moment of inertia.
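The effective-mass result can be checked numerically; the short sketch below applies m0 = 4*m1*m2/(m1 + m2) and reduces a compound machine recursively, as described above. The particular mass values are arbitrary examples.

```python
# Effective mass of a massless-pulley Atwood machine, applied recursively.
def effective_mass(m1, m2):
    """Mass the supporting string 'thinks' it carries: m0 = 4*m1*m2/(m1 + m2)."""
    return 4.0 * m1 * m2 / (m1 + m2)

# Equal masses: the string feels the full weight, m0 = m1 + m2.
print(effective_mass(1.0, 1.0))            # 2.0

# Compound machine: replace m2 by another Atwood machine carrying 1 kg and 3 kg.
inner = effective_mass(1.0, 3.0)           # 3.0 kg effective
print(effective_mass(2.0, inner))          # effective mass of the whole assembly
```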
Unsupervised machine learning account of magnetic transitions in the Hubbard model
NASA Astrophysics Data System (ADS)
Ch'ng, Kelvin; Vazquez, Nick; Khatami, Ehsan
2018-01-01
We employ several unsupervised machine learning techniques, including autoencoders, random trees embedding, and t-distributed stochastic neighbor embedding (t-SNE), to reduce the dimensionality of, and therefore classify, raw (auxiliary) spin configurations generated, through Monte Carlo simulations of small clusters, for the Ising and Fermi-Hubbard models at finite temperatures. Results from a convolutional autoencoder for the three-dimensional Ising model can be shown to produce the magnetization and the susceptibility as a function of temperature with a high degree of accuracy. Quantum fluctuations distort this picture and prevent us from making such connections between the output of the autoencoder and physical observables for the Hubbard model. However, we are able to define an indicator based on the output of the t-SNE algorithm that shows a near perfect agreement with the antiferromagnetic structure factor of the model in two and three spatial dimensions in the weak-coupling regime. t-SNE also predicts a transition to the canted antiferromagnetic phase for the three-dimensional model when a strong magnetic field is present. We show that these techniques cannot be expected to work away from half filling when the "sign problem" in quantum Monte Carlo simulations is present.
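A minimal sketch of the dimensionality-reduction step, assuming the Monte Carlo spin configurations are already available: scikit-learn's t-SNE embeds the raw configurations into two dimensions, from which an indicator can be constructed. The placeholder configurations below are random arrays biased by a temperature parameter and are not proper Ising or Hubbard samples.

```python
# t-SNE embedding of placeholder "spin configurations" (not real Monte Carlo data).
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_samples, n_sites = 400, 8 * 8

# Placeholder configurations: strongly magnetized at low T, nearly random at high T.
temperatures = np.repeat(np.linspace(0.5, 5.0, 10), n_samples // 10)
p_up = 0.5 + 0.5 * np.exp(-temperatures[:, None])      # probability of spin +1
configs = np.where(rng.random((n_samples, n_sites)) < p_up, 1.0, -1.0)

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(configs)
print(embedding.shape)   # (400, 2); clustering in this plane hints at distinct phases
```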
Hanlon, John A.; Gill, Timothy J.
2001-01-01
Machine tools can be accurately measured and positioned on manufacturing machines within very small tolerances by use of an autocollimator on a 3-axis mount on a manufacturing machine and positioned so as to focus on a reference tooling ball or a machine tool, a digital camera connected to the viewing end of the autocollimator, and a marker and measure generator for receiving digital images from the camera, then displaying or measuring distances between the projection reticle and the reference reticle on the monitoring screen, and relating the distances to the actual position of the autocollimator relative to the reference tooling ball. The images and measurements are used to set the position of the machine tool and to measure the size and shape of the machine tool tip, and examine cutting edge wear.
Virtual terrain: a security-based representation of a computer network
NASA Astrophysics Data System (ADS)
Holsopple, Jared; Yang, Shanchieh; Argauer, Brian
2008-03-01
Much research has been put forth towards detection, correlation, and prediction of cyber attacks in recent years. As this set of research progresses, there is an increasing need for contextual information of a computer network to provide an accurate situational assessment. Typical approaches adopt contextual information as needed; yet such ad hoc effort may lead to unnecessary or even conflicting features. The concept of virtual terrain is, therefore, developed and investigated in this work. Virtual terrain is a common representation of crucial information about network vulnerabilities, accessibilities, and criticalities. A virtual terrain model encompasses operating systems, firewall rules, running services, missions, user accounts, and network connectivity. It is defined as connected graphs with arc attributes defining dynamic relationships among vertices modeling network entities, such as services, users, and machines. The virtual terrain representation is designed to allow feasible development and maintenance of the model, as well as efficacy in terms of the use of the model. This paper will describe the considerations in developing the virtual terrain schema, exemplary virtual terrain models, and algorithms utilizing the virtual terrain model for situation and threat assessment.
Near-Death Experience in Patients on Hemodialysis.
Johnson, Sharona
2015-01-01
Near-death experience (NDE) is a phenomenon that occurs when a person loses consciousness and senses a disconnection from the world around them. Patients on hemodialysis can experience multiple NDEs over their lifetime. An NDE during a hemodialysis session while connected to a hemodialysis machine can present challenges to this patient population and the nurses caring for them. The purpose of this article is to discuss the potential after effects of NDE in patients who experience this phenomenon while connected to a hemodialysis machine and to propose that nurses lead the healthcare team in addressing the after effects of NDE in patients on hemodialysis.
A Navier-Stokes Chimera Code on the Connection Machine CM-5: Design and Performance
NASA Technical Reports Server (NTRS)
Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)
1994-01-01
We have implemented a three-dimensional compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the 'chimera' approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. A parallel machine like the CM-5 is well-suited for finite-difference methods on structured grids. The regular pattern of connections of a structured mesh maps well onto the architecture of the machine. So the first design choice, finite differences on a structured mesh, is natural. We use centered differences in space, with added artificial dissipation terms. When numerically solving the Navier-Stokes equations, there are liable to be some mesh cells near a solid body that are small in at least one direction. This mesh cell geometry can impose a very severe CFL (Courant-Friedrichs-Lewy) condition on the time step for explicit time-stepping methods. Thus, though explicit time-stepping is well-suited to the architecture of the machine, we have adopted implicit time-stepping. We have further taken the approximate factorization approach. This creates the need to solve large banded linear systems and creates the first possible barrier to an efficient algorithm. To overcome this first possible barrier we have considered two options. The first is just to solve the banded linear systems with data spread over the whole machine, using whatever fast method is available. This option is adequate for solving scalar tridiagonal systems, but for scalar pentadiagonal or block tridiagonal systems it is somewhat slower than desired. The second option is to 'transpose' the flow and geometry variables as part of the time-stepping process: Start with x-lines of data in-processor. Form explicit terms in x, then transpose so y-lines of data are in-processor. Form explicit terms in y, then transpose so z-lines are in processor. Form explicit terms in z, then solve linear systems in the z-direction. Transpose to the y-direction, then solve linear systems in the y-direction. Finally transpose to the x direction and solve linear systems in the x-direction. This strategy avoids inter-processor communication when differencing and solving linear systems, but requires a large amount of communication when doing the transposes. The transpose method is more efficient than the non-transpose strategy when dealing with scalar pentadiagonal or block tridiagonal systems. For handling geometrically complex problems the chimera strategy was adopted. For multiple zone cases we compute on each zone sequentially (using the whole parallel machine), then send the chimera interpolation data to a distributed data structure (array) laid out over the whole machine. This information transfer implies an irregular communication pattern, and is the second possible barrier to an efficient algorithm. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran. We make use of the Connection Machine Scientific Software Library (CMSSL) for the linear solver and array transpose operations.
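The transpose strategy can be illustrated in a serial NumPy sketch: tridiagonal solves are always performed along the last (contiguous) axis, and the array is transposed before and after the solves in the other directions. The grid size and coefficients are arbitrary, and the sketch only mimics the data-movement pattern, not the CM-5/CMSSL implementation.

```python
# Serial illustration of the "transpose" strategy for directional tridiagonal solves.
import numpy as np
from scipy.linalg import solve_banded

def solve_tridiag_lines(rhs, lower, diag, upper):
    """Solve the same tridiagonal system along the last axis of every grid line."""
    n = rhs.shape[-1]
    ab = np.zeros((3, n))
    ab[0, 1:] = upper            # super-diagonal
    ab[1, :] = diag              # main diagonal
    ab[2, :-1] = lower           # sub-diagonal
    out = np.empty_like(rhs)
    for idx in np.ndindex(rhs.shape[:-1]):
        out[idx] = solve_banded((1, 1), ab, rhs[idx])
    return out

rng = np.random.default_rng(0)
q = rng.standard_normal((8, 8, 8))                 # flow variable on a 3-D grid

# z-direction solves: z is already the last (contiguous, "in-processor") axis.
q = solve_tridiag_lines(q, lower=-1.0, diag=4.0, upper=-1.0)

# y-direction solves: transpose so y becomes the last axis, solve, transpose back.
q = solve_tridiag_lines(q.transpose(0, 2, 1), -1.0, 4.0, -1.0).transpose(0, 2, 1)

# x-direction solves: same idea with x as the last axis.
q = solve_tridiag_lines(q.transpose(2, 1, 0), -1.0, 4.0, -1.0).transpose(2, 1, 0)
print(q.shape)
```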
NASA Astrophysics Data System (ADS)
Yasuda, Muneki; Sakurai, Tetsuharu; Tanaka, Kazuyuki
Restricted Boltzmann machines (RBMs) are bipartite structured statistical neural networks consisting of two layers: a layer of visible units and a layer of hidden units. Within each layer, units do not connect to each other. RBMs have high flexibility and rich structure and are expected to be applied to various applications, for example, image and pattern recognition, face detection, and so on. However, most computational tasks in RBMs are intractable and often belong to the class of NP-hard problems. In this paper, in order to construct a practical learning algorithm for RBMs, we apply the Kullback-Leibler Importance Estimation Procedure (KLIEP) to RBMs, and give a new practical approximate learning scheme for RBMs based on the KLIEP.
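For context, the model class under discussion can be sketched with scikit-learn's BernoulliRBM, which trains a two-layer RBM with its default persistent contrastive divergence. This is the standard baseline, not the KLIEP-based learning scheme proposed in the paper; the binary data below are placeholders.

```python
# Standard RBM training as a baseline sketch (not the KLIEP-based scheme).
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
X = (rng.random((500, 64)) > 0.5).astype(float)   # placeholder binary "images"

rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(X)

hidden = rbm.transform(X[:5])           # hidden-unit activation probabilities
print(hidden.shape)                      # (5, 32)
print(rbm.score_samples(X[:5]))          # pseudo-likelihood estimates
```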
Reviewing the connection between speech and obstructive sleep apnea.
Espinoza-Cuadros, Fernando; Fernández-Pozo, Rubén; Toledano, Doroteo T; Alcázar-Ramírez, José D; López-Gonzalo, Eduardo; Hernández-Gómez, Luis A
2016-02-20
Sleep apnea (OSA) is a common sleep disorder characterized by recurring breathing pauses during sleep caused by a blockage of the upper airway (UA). The altered UA structure or function in OSA speakers has led to the hypothesis that speech can be automatically analyzed for OSA assessment. In this paper we critically review several approaches using speech analysis and machine learning techniques for OSA detection, and discuss the limitations that can arise when using machine learning techniques for diagnostic applications. A large speech database including 426 male Spanish speakers suspected of suffering from OSA and referred to a sleep disorders unit was used to study the clinical validity of several proposals using machine learning techniques to predict the apnea-hypopnea index (AHI) or classify individuals according to their OSA severity. The AHI describes the severity of the patient's condition. We first evaluate AHI prediction using state-of-the-art speaker recognition technologies: speech spectral information is modelled using supervector or i-vector techniques, and AHI is predicted through support vector regression (SVR). Using the same database we then critically review several OSA classification approaches previously proposed. The influence and possible interference of other clinical variables or characteristics available for our OSA population: age, height, weight, body mass index, and cervical perimeter, are also studied. The poor results obtained when estimating AHI using supervectors or i-vectors followed by SVR contrast with the positive results reported by previous research. This fact prompted us to carry out a careful review of these approaches, also testing some reported results over our database. Several methodological limitations and deficiencies were detected that may have led to overoptimistic results. The methodological deficiencies observed after critically reviewing previous research can be relevant examples of potential pitfalls when using machine learning techniques for diagnostic applications. We have found two common limitations that can explain the likelihood of false discovery in previous research: (1) the use of prediction models derived from sources, such as speech, which are also correlated with other patient characteristics (age, height, sex,…) that act as confounding factors; and (2) overfitting of feature selection and validation methods when working with a high number of variables compared to the number of cases. We hope this study will not only be a useful example of relevant issues when using machine learning for medical diagnosis, but will also help guide further research on the connection between speech and OSA.
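A minimal sketch of the prediction step being reviewed: support vector regression of the AHI from fixed-length speech features, evaluated with cross-validation. The feature matrix and AHI values below are random placeholders; in the reviewed work the features are supervectors or i-vectors derived from speech spectra.

```python
# SVR-based AHI prediction sketch on placeholder features.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold, cross_val_predict
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
X = rng.standard_normal((426, 400))          # 426 speakers x 400 speech features
ahi = np.clip(5 + 10 * rng.standard_normal(426), 0, None)   # placeholder AHI values

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
pred = cross_val_predict(model, X, ahi,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0))
print(pearsonr(pred, ahi))   # with random features, correlation should be near zero
```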
New technologies of mining stratal minerals and their computation
NASA Astrophysics Data System (ADS)
Beysembayev, K. M.; Reshetnikova, O. S.; Nokina, Z. N.; Teliman, I. V.; Asmagambet, D. K.
2018-03-01
The paper considers systems of flat and volumetric modeling for controlling long-wall faces, for schemes with rock collapse of the immediate and main roof and smooth lowering of the remaining layers, as well as for schemes forming a vault over the face. Stress distributions are obtained for the reference pressure zone; they are needed for recognizing the active state of the long-wall face in the feedback mode. The design of the "support - lateral rocks" system is represented by a multidimensional network base. Its connections reflect the elements of the system (rocks, workings, supports) with their nodes and parts, as well as the logic of the operation of machines, assemblies and parts and the types of their mechanical connections. At the nodes of the base there are built-in systems of object-oriented programming languages. This allows combining spatial elements of the system into a simple neural network.
Homopolar Transformer for Conversion of Electrical Energy
1998-10-13
...electrical current flows through a conductor situated in a magnetic field during rotation of the machine rotor. In the case of a homopolar motor ... 10, incorporated within a homopolar machine 12 corresponding for example to the motor or generator disclosed in U.S. Pat. No. 3,657,580 to Doyle. ... During operation of the homopolar machine 12 as a motor, a voltage source 16 connected to the stator terminals 26 and 28 causes a current to flow...
Homopolar Transformer for Conversion of Electrical Energy
1997-08-14
...machine rotor. In the case of a homopolar motor, the current will develop a force perpendicular to the direction of its flow through the conductor ... reference numeral 10, incorporated within a homopolar machine 12 corresponding for example to the motor or generator disclosed in U.S. Patent No. ... current flow. During operation of the homopolar machine 12 as a motor, a voltage source 16 connected to the stator terminals 26 and 28 causes a...
TRUFLO GONDOLA, USED WITH THE HUNTER 10 MOLDING MACHINE, OPERATES ...
TRUFLO GONDOLA, USED WITH THE HUNTER 10 MOLDING MACHINE, OPERATES THE SAME AS THE TWO LARGER TRUFLOS USED IN CONJUNCTION WITH THE TWO HUNTER 20S. EACH GONDOLA IS CONNECTED TO THE NEXT AND RIDES ON A SINGLE TRACK RAIL FROM MOLDING MACHINES THROUGH POURING AREAS CARRYING A MOLD AROUND TWICE BEFORE THE MOLD IS PUSHED OFF ONTO A VIBRATING SHAKEOUT CONVEYOR. - Southern Ductile Casting Company, Casting, 2217 Carolina Avenue, Bessemer, Jefferson County, AL
Ahmad-Sabry, Mohammad H I
2015-04-01
During a period of 6 weeks, we had 4 incidents of echocardiography machine malfunction. Three occurred in the operating room, where the machines were damaged by intravenous (IV) fluid spilling over the keyboard and burning the keyboard's electrical connection, and 1 occurred in the cardiology department, where the machine was damaged by coffee spilled on it. The malfunctions had an economic impact on the hospital (about $20,000), in addition to the nonavailability of the ultrasound (US) machine for the cardiac patient after the incident until the end of the case and for consequent cases until the machine was repaired. We undertook an analysis of the incidents using a simplified approach. The first incident happened when changing an empty IV fluid bag for a full one led to spillage of some fluid onto the keyboard. The second incident was due to the use of a needle to depressurize a medication bottle for a continuous IV drip, and the third event was due to disconnection of the IV set from the bottle during transfer of the patient from the operating room to the intensive care unit. The fundamental problem is of course that fluid is harmful to the US machine. In addition, the machines are positioned between the patient bed and the anesthesia machine. This means that IV poles are on each side of the patient bed, which makes the machine vulnerable to fluid spillage. We first considered a machine modification, to create a protective cover, but this was hindered by the complexity of the US machine keyboard, technical and financial challenges, and the time it would take to achieve. Second, we considered the creation of a protocol, placing the machine in a position with no IV poles nearby and transferring the machine out of the room whenever transferring the patient would expose the machine to IV fluid. Third, we addressed changing human behavior; to do this, we announced the protocol at our anesthesia conference to make it known to everyone. We taught residents, fellows, and staff about the new protocol. Our simplified approach was effective for the prevention of fluid spillage over the US machine.
New generation of elastic network models.
López-Blanco, José Ramón; Chacón, Pablo
2016-04-01
The intrinsic flexibility of proteins and nucleic acids can be grasped from remarkably simple mechanical models of particles connected by springs. In recent decades, Elastic Network Models (ENMs) combined with Normal Mode Analysis widely confirmed their ability to predict biologically relevant motions of biomolecules and soon became a popular methodology to reveal large-scale dynamics in multiple structural biology scenarios. The simplicity, robustness, low computational cost, and relatively high accuracy are the reasons behind the success of ENMs. This review focuses on recent advances in the development and application of ENMs, paying particular attention to combinations with experimental data. Successful application scenarios include large macromolecular machines, structural refinement, docking, and evolutionary conservation. Copyright © 2015 Elsevier Ltd. All rights reserved.
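As a concrete illustration of the spring-network idea summarized above, the following is a minimal sketch (not taken from the review) of an anisotropic elastic network model: particles within a cutoff are connected by identical springs, the 3N x 3N Hessian is assembled, and its eigenvectors give the normal modes. The coordinates, cutoff and spring constant below are placeholder assumptions.

    import numpy as np

    def anm_modes(coords, cutoff=10.0, gamma=1.0):
        # Assemble the 3N x 3N Hessian of an anisotropic elastic network model
        # (uniform spring constant gamma, springs only within the cutoff radius).
        n = len(coords)
        hess = np.zeros((3 * n, 3 * n))
        for i in range(n):
            for j in range(i + 1, n):
                d = coords[j] - coords[i]
                r2 = d @ d
                if r2 > cutoff ** 2:
                    continue
                block = -gamma * np.outer(d, d) / r2      # off-diagonal super-element
                hess[3*i:3*i+3, 3*j:3*j+3] = block
                hess[3*j:3*j+3, 3*i:3*i+3] = block
                hess[3*i:3*i+3, 3*i:3*i+3] -= block       # diagonal super-elements
                hess[3*j:3*j+3, 3*j:3*j+3] -= block
        return np.linalg.eigh(hess)                       # eigenvalues, eigenvectors

    # Toy "C-alpha" cloud; the first six ~zero eigenvalues are rigid-body motions.
    coords = np.random.default_rng(0).uniform(0.0, 30.0, size=(50, 3))
    evals, evecs = anm_modes(coords)
    print(evals[:8])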
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-15
... more of the following components: A valve body, field connection tube, factory connection tube or valve charge port. The valve body is a rectangular block, or brass forging, machined to be hollow in the interior, with a generally square shaped seat (bottom of body). The field connection tube and factory...
Computational neuroanatomy: ontology-based representation of neural components and connectivity.
Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron
2009-02-05
A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future.
An artificial neural network model for periodic trajectory generation
NASA Astrophysics Data System (ADS)
Shankar, S.; Gander, R. E.; Wood, H. C.
A neural network model based on biological systems was developed for potential robotic application. The model consists of three interconnected layers of artificial neurons or units: an input layer subdivided into state and plan units, an output layer, and a hidden layer between the two outer layers which serves to implement nonlinear mappings between the input and output activation vectors. Weighted connections are created between the three layers, and learning is effected by modifying these weights. Feedback connections between the output and the input state serve to make the network operate as a finite state machine. The activation vector of the plan units of the input layer emulates the supraspinal commands in biological central pattern generators in that different plan activation vectors correspond to different sequences or trajectories being recalled, even with different frequencies. Three trajectories were chosen for implementation, and learning was accomplished in 10,000 trials. The fault tolerant behavior, adaptiveness, and phase maintenance of the implemented network are discussed.
Teaching about the U.S. Constitution through Metaphor: Government as a Machine.
ERIC Educational Resources Information Center
Mills, Randy K.
1988-01-01
Briefly reviews theories of brain hemisphere functions and draws implications for social studies instruction. Maintains that the metaphor aids the development of understanding because it connects right and left brain functions. Provides a learning activity based on the metaphor of the U.S. government functioning as a machine. (BSR)
30 CFR 18.48 - Circuit-interrupting devices.
Code of Federal Regulations, 2011 CFR
2011-07-01
... two-pole switch of the “dead-man-control” type that must be held closed by hand and will open when hand pressure is released. (e) A machine designed to operate from both trolley wire and portable cable.... Such a switch shall be designed to prevent electrical connection to the machine frame when the cable is...
30 CFR 18.48 - Circuit-interrupting devices.
Code of Federal Regulations, 2013 CFR
2013-07-01
... two-pole switch of the “dead-man-control” type that must be held closed by hand and will open when hand pressure is released. (e) A machine designed to operate from both trolley wire and portable cable.... Such a switch shall be designed to prevent electrical connection to the machine frame when the cable is...
ERIC Educational Resources Information Center
Thomas, Joan
"Choosing the Future: College Students' Projections of Their Personal Life Patterns" is a machine-readable data file (MRDF) prepared by the principal investigator in connection with her doctoral program studies and her 1986 unpublished doctoral dissertation prepared in the Department of Psychology at the University of Cincinnati. The…
Machinic Assemblages: Women, Art Education and Space
ERIC Educational Resources Information Center
Tamboukou, Maria
2008-01-01
In this paper I explore connections between women, art education and spatial relations drawing on the Deleuzo-Guattarian concept of "machinic assemblage" as a useful analytical tool for making sense of the heterogeneity and meshwork of life narratives and their social milieus. In focusing on Mary Bradish Titcomb, a fin-de-siecle Bostonian woman…
Distributed support vector machine in master-slave mode.
Chen, Qingguo; Cao, Feilong
2018-05-01
It is well known that the support vector machine (SVM) is an effective learning algorithm. The alternating direction method of multipliers (ADMM) algorithm has emerged as a powerful technique for solving distributed optimisation models. This paper proposes a distributed SVM algorithm in a master-slave mode (MS-DSVM), which integrates a distributed SVM and ADMM acting in a master-slave configuration where the master node and slave nodes are connected, meaning the results can be broadcasted. The distributed SVM is regarded as a regularised optimisation problem and modelled as a series of convex optimisation sub-problems that are solved by ADMM. Additionally, the over-relaxation technique is utilised to accelerate the convergence rate of the proposed MS-DSVM. Our theoretical analysis demonstrates that the proposed MS-DSVM has linear convergence, meaning it possesses the fastest convergence rate among existing standard distributed ADMM algorithms. Numerical examples demonstrate that the convergence and accuracy of the proposed MS-DSVM are superior to those of existing methods under the ADMM framework. Copyright © 2018 Elsevier Ltd. All rights reserved.
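To make the master-slave arrangement concrete, here is a minimal consensus-ADMM sketch for a linear SVM whose data are split across slave nodes; it is only a schematic stand-in for the MS-DSVM described above (no over-relaxation, a squared-hinge loss, and placeholder values for the penalty parameter rho and the regularization weight lam).

    import numpy as np
    from scipy.optimize import minimize

    def slave_update(X, y, z, u, rho):
        # Each slave minimizes its local squared-hinge loss plus the ADMM proximal term.
        def obj(w):
            margins = np.maximum(0.0, 1.0 - y * (X @ w))
            return np.sum(margins ** 2) + 0.5 * rho * np.sum((w - z + u) ** 2)
        return minimize(obj, z - u, method="L-BFGS-B").x

    def consensus_svm(blocks, dim, lam=1.0, rho=1.0, iters=50):
        z = np.zeros(dim)                                  # consensus weights held by the master
        U = [np.zeros(dim) for _ in blocks]                # scaled dual variables, one per slave
        for _ in range(iters):
            W = [slave_update(X, y, z, u, rho) for (X, y), u in zip(blocks, U)]
            # Master step: closed-form minimizer of lam*||z||^2 + (rho/2) sum ||w_i - z + u_i||^2
            z = rho * sum(w + u for w, u in zip(W, U)) / (2.0 * lam + rho * len(blocks))
            U = [u + w - z for u, w in zip(U, W)]          # dual update broadcast back to slaves
        return z

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y = np.sign(X @ rng.normal(size=5) + 0.1 * rng.normal(size=200))
    blocks = [(X[i::4], y[i::4]) for i in range(4)]        # four hypothetical slave nodes
    w = consensus_svm(blocks, dim=5)
    print("training accuracy:", np.mean(np.sign(X @ w) == y))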
Predicting pork loin intramuscular fat using computer vision system.
Liu, J-H; Sun, X; Young, J M; Bachmeier, L A; Newman, D J
2018-09-01
The objective of this study was to investigate the ability of a computer vision system to predict pork intramuscular fat percentage (IMF%). Center-cut loin samples (n = 85) were trimmed of subcutaneous fat and connective tissue. Images were acquired and pixels were segregated to estimate image IMF% and 18 image color features for each image. Subjective IMF% was determined by a trained grader. Ether extract IMF% was calculated using the ether extract method. Image color features and image IMF% were used as predictors for stepwise regression and support vector machine models. Results showed that subjective IMF% had a correlation of 0.81 with ether extract IMF%, while the image IMF% had a 0.66 correlation with ether extract IMF%. Accuracy rates for regression models were 0.63 for stepwise and 0.75 for support vector machine. Although subjective IMF% has been shown to give better prediction, results from the computer vision system demonstrate its potential as a tool for predicting pork IMF% in the future. Copyright © 2018 Elsevier Ltd. All rights reserved.
Winding Schemes for Wide Constant Power Range of Double Stator Transverse Flux Machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Husain, Tausif; Hassan, Iftekhar; Sozer, Yilmaz
2015-05-01
Different ring winding schemes for double sided transverse flux machines are investigated in this paper for wide speed operation. The windings under investigation are based on two inverters used in parallel. At higher power applications this arrangement improves the drive efficiency. The new winding structure, through manipulation of the end connection, splits individual sets into two and connects the partitioned turns from individual stator sets in series. This configuration offers the flexibility of torque profiling and a greater flux weakening region. At low speeds and low torque only one winding set is capable of providing the required torque, thus providing greater fault tolerance. At higher speeds one set is dedicated to torque production and the other to flux control. The proposed method improves the machine efficiency and allows better flux weakening, which is desirable for traction applications.
Modelling psychiatric and cultural possession phenomena with suggestion and fMRI.
Deeley, Quinton; Oakley, David A; Walsh, Eamonn; Bell, Vaughan; Mehta, Mitul A; Halligan, Peter W
2014-04-01
Involuntary movements occur in a variety of neuropsychiatric disorders and culturally influenced dissociative states (e.g., delusions of alien control and attributions of spirit possession). However, the underlying brain processes are poorly understood. We combined suggestion and fMRI in 15 highly hypnotically susceptible volunteers to investigate changes in brain activity accompanying different experiences of loss of self-control of movement. Suggestions of external personal control and internal personal control over involuntary movements modelled delusions of control and spirit possession respectively. A suggestion of impersonal control by a malfunctioning machine modelled technical delusions of control, where involuntary movements are attributed to the influence of machines. We found that (i) brain activity and/or connectivity significantly varied with different experiences and attributions of loss of agency; (ii) compared to the impersonal control condition, both external and internal personal alien control were associated with increased connectivity between primary motor cortex (M1) and brain regions involved in attribution of mental states and representing the self in relation to others; (iii) compared to both personal alien control conditions, impersonal control of movement was associated with increased activity in brain regions involved in error detection and object imagery; (iv) there were no significant differences in brain activity, and minor differences in M1 connectivity, between the external and internal personal alien control conditions. Brain networks supporting error detection and object imagery, together with representation of self and others, are differentially recruited to support experiences of impersonal and personal control of involuntary movements. However, similar brain systems underpin attributions and experiences of external and internal alien control of movement. Loss of self-agency for movement can therefore accompany different kinds of experience of alien control supported by distinct brain mechanisms. These findings caution against generalization about single cognitive processes or brain systems underpinning different experiences of loss of self-control of movement. Copyright © 2014 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sivak, David; Crooks, Gavin
A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
AstroCloud, a Cyber-Infrastructure for Astronomy Research: Data Access and Interoperability
NASA Astrophysics Data System (ADS)
Fan, D.; He, B.; Xiao, J.; Li, S.; Li, C.; Cui, C.; Yu, C.; Hong, Z.; Yin, S.; Wang, C.; Cao, Z.; Fan, Y.; Mi, L.; Wan, W.; Wang, J.
2015-09-01
Data access and interoperability module connects the observation proposals, data, virtual machines and software. According to the unique identifier of the PI (principal investigator), an email address or an internal ID, data can be collected through the PI's proposals or through the search interfaces, e.g. cone search. Files associated with the search results can be easily transferred to cloud storage, including the storage attached to virtual machines, or to several commercial platforms such as Dropbox. Benefiting from the standards of the IVOA (International Virtual Observatory Alliance), VOTable-formatted search results can be sent to various VO software tools. Later work will aim to integrate more data and to connect archives and other astronomical resources.
On the suitability of the connection machine for direct particle simulation
NASA Technical Reports Server (NTRS)
Dagum, Leonard
1990-01-01
The algorithmic structure of the vectorizable Stanford particle simulation (SPS) method was examined and reformulated in data parallel form. Some of the SPS algorithms can be directly translated to data parallel form, but several of the vectorizable algorithms have no direct data parallel equivalent. This requires the development of new, strictly data parallel algorithms. In particular, a new sorting algorithm is developed to identify collision candidates in the simulation and a master/slave algorithm is developed to minimize communication cost in large table look-up. Validation of the method is undertaken through test calculations for thermal relaxation of a gas, shock wave profiles, and shock reflection from a stationary wall. A qualitative measure of the performance of the Connection Machine for direct particle simulation is provided. The massively parallel architecture of the Connection Machine is found quite suitable for this type of calculation. However, there are difficulties in taking full advantage of this architecture because of the lack of a broad based tradition of data parallel programming. An important outcome of this work has been new data parallel algorithms specifically of use for direct particle simulation but which also expand the data parallel diction.
Functional connectivity analysis of resting-state fMRI networks in nicotine dependent patients
NASA Astrophysics Data System (ADS)
Smith, Aria; Ehtemami, Anahid; Fratte, Daniel; Meyer-Baese, Anke; Zavala-Romero, Olmo; Goudriaan, Anna E.; Schmaal, Lianne; Schulte, Mieke H. J.
2016-03-01
Brain imaging studies identified brain networks that play a key role in nicotine dependence-related behavior. Functional connectivity of the brain is dynamic; it changes over time due to different causes such as learning, or quitting a habit. Functional connectivity analysis is useful in discovering and comparing patterns between functional magnetic resonance imaging (fMRI) scans of patients' brains. In the resting state, the patient is asked to remain calm and not do any task, to minimize the contribution of external stimuli. The study of resting-state fMRI networks has shown functionally connected brain regions that have a high level of activity during this state. In this project, we are interested in the relationship between these functionally connected brain regions to identify nicotine dependent patients who underwent a smoking cessation treatment. Our approach is based on comparing the set of connections between the fMRI scans before and after treatment. We applied support vector machines, a machine learning technique, to classify patients based on receiving the treatment or the placebo. Using the functional connectivity (CONN) toolbox, we were able to form a correlation matrix based on the functional connectivity between different regions of the brain. The experimental results show that there is inadequate predictive information to classify nicotine dependent patients using the SVM classifier. We propose that other classification methods be explored to better classify the nicotine dependent patients.
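A generic sketch of the classification step described above (an SVM on vectorized connectivity matrices with leave-one-out cross-validation) might look like the following; the random arrays merely stand in for CONN-derived correlation matrices and treatment/placebo labels.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(0)
    n_subjects, n_rois = 40, 20
    conn = rng.normal(size=(n_subjects, n_rois, n_rois))   # stand-in correlation matrices
    labels = rng.integers(0, 2, size=n_subjects)           # treatment vs. placebo (hypothetical)

    iu = np.triu_indices(n_rois, k=1)                      # keep each connection once
    features = np.array([c[iu] for c in conn])

    scores = cross_val_score(SVC(kernel="linear"), features, labels, cv=LeaveOneOut())
    print("leave-one-out accuracy:", scores.mean())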
Exploring the Function Space of Deep-Learning Machines
NASA Astrophysics Data System (ADS)
Li, Bo; Saad, David
2018-06-01
The function space of deep-learning machines is investigated by studying growth in the entropy of functions of a given error with respect to a reference function, realized by a deep-learning machine. Using physics-inspired methods we study both sparsely and densely connected architectures to discover a layerwise convergence of candidate functions, marked by a corresponding reduction in entropy when approaching the reference function, gain insight into the importance of having a large number of layers, and observe phase transitions as the error increases.
The development of a control system for a small high speed steam microturbine generator system
NASA Astrophysics Data System (ADS)
Alford, A.; Nichol, P.; Saunders, M.; Frisby, B.
2015-08-01
Steam is a widely used energy source. In many situations steam is generated at high pressures and then reduced in pressure through control valves before reaching the point of use. An opportunity was identified to convert some of the energy at the point of pressure reduction into electricity. To take advantage of a market identified for small scale systems, a microturbine generator was designed based on a small high speed turbo machine. This machine was packaged with the necessary control valves and systems to allow connection of the machine to the grid. Traditional machines vary the speed of the generator to match the grid frequency. This was not possible due to the high speed of this machine. The characteristics of the rotating unit had to be understood to develop a control scheme that allowed export of energy at the right frequency to the grid under the widest possible range of steam conditions. A further goal of the control system was to maximise the efficiency of generation under all conditions. A further complication was to provide adequate protection for the rotating unit in the event of the loss of connection to the grid. The system to meet these challenges is outlined, with the solutions employed and tested for this application.
NASA Astrophysics Data System (ADS)
Dachyar, M.; Risky, S. A.
2014-06-01
Telecommunications companies have to improve their business performance despite the increase in customers every year. In Indonesia, telecommunication companies have provided better services by improving operational systems, designing a framework for the operational systems of the Internet of Things (IoT), also known as Machine to Machine (M2M). This study was conducted with expert opinions, which were further processed by the Analytic Hierarchy Process (AHP) to obtain the important factors for the organization's operational systems, and by Interpretive Structural Modeling (ISM) to determine which organizational factors have the greatest driving power. This study found that the greatest weight lies in SLA & KPI problem handling. The current M2M dashboard and current M2M connectivity have the power to affect other factors and play an important role in an M2M operations system that can be carried out effectively.
Spatial-spectral blood cell classification with microscopic hyperspectral imagery
NASA Astrophysics Data System (ADS)
Ran, Qiong; Chang, Lan; Li, Wei; Xu, Xiaofeng
2017-10-01
Microscopic hyperspectral images provide a new way for blood cell examination. The hyperspectral imagery can greatly facilitate the classification of different blood cells. In this paper, the microscopic hyperspectral images are acquired by connecting the microscope and the hyperspectral imager, and then tested for blood cell classification. For combined use of the spectral and spatial information provided by hyperspectral images, a spatial-spectral classification method is improved from the classical extreme learning machine (ELM) by integrating spatial context into the image classification task with a Markov random field (MRF) model. Comparisons are made among the ELM, ELM-MRF, support vector machine (SVM) and SVM-MRF methods. Results show the spatial-spectral classification methods (ELM-MRF, SVM-MRF) perform better than the pixel-based methods (ELM, SVM), and the proposed ELM-MRF has higher precision and shows more accurate location of cells.
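For reference, the classical extreme learning machine that the spatial-spectral method builds on can be sketched in a few lines: a fixed random hidden layer followed by a least-squares solve for the output weights. The spectra below are synthetic placeholders, and the MRF spatial regularization used in the paper is omitted.

    import numpy as np

    def elm_train(X, y_onehot, n_hidden=200, seed=0):
        # Random input-to-hidden weights are never trained; only the output weights are solved.
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)
        beta = np.linalg.pinv(H) @ y_onehot                # least-squares output weights
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

    rng = np.random.default_rng(1)
    n_pixels, n_bands, n_classes = 500, 32, 3              # hypothetical hyperspectral pixels
    X = rng.normal(size=(n_pixels, n_bands))
    y = rng.integers(0, n_classes, size=n_pixels)
    W, b, beta = elm_train(X, np.eye(n_classes)[y])
    print("training accuracy:", np.mean(elm_predict(X, W, b, beta) == y))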
Semantic representation in the white matter pathway
Fang, Yuxing; Wang, Xiaosha; Zhong, Suyu; Song, Luping; Han, Zaizhu; Gong, Gaolang
2018-01-01
Object conceptual processing has been localized to distributed cortical regions that represent specific attributes. A challenging question is how object semantic space is formed. We tested a novel framework of representing semantic space in the pattern of white matter (WM) connections by extending the representational similarity analysis (RSA) to structural lesion patterns and behavioral data in 80 brain-damaged patients. For each WM connection, a neural representational dissimilarity matrix (RDM) was computed by first building machine-learning models with the voxel-wise WM lesion patterns as features to predict naming performance of a particular item and then computing the correlation between the predicted naming score and the actual naming score of another item in the testing patients. This correlation was used to build the neural RDM based on the assumption that if the connection pattern contains certain aspects of information shared by the naming processes of these two items, models trained with one item should also predict naming accuracy of the other. Correlating the neural RDM with various cognitive RDMs revealed that neural patterns in several WM connections that connect left occipital/middle temporal regions and anterior temporal regions were associated with the object semantic space. Such associations were not attributable to modality-specific attributes (shape, manipulation, color, and motion), to peripheral picture-naming processes (picture visual similarity, phonological similarity), to broad semantic categories, or to the properties of the cortical regions that they connected, which tended to represent multiple modality-specific attributes. That is, the semantic space could be represented through WM connection patterns across cortical regions representing modality-specific attributes. PMID:29624578
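The final step of the analysis described above, correlating a neural RDM with a cognitive RDM over unique item pairs, can be illustrated with a small placeholder example (the lesion-based prediction that produces the real neural RDM is not reproduced here):

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_items = 12
    neural_rdm = 1.0 - np.corrcoef(rng.normal(size=(n_items, 30)))     # stand-in neural RDM
    semantic_rdm = 1.0 - np.corrcoef(rng.normal(size=(n_items, 50)))   # stand-in cognitive RDM

    iu = np.triu_indices(n_items, k=1)                                 # unique item pairs only
    rho, p = spearmanr(neural_rdm[iu], semantic_rdm[iu])
    print(f"RSA correlation: rho = {rho:.2f}, p = {p:.3f}")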
A Spatiotemporal Prediction Framework for Air Pollution Based on Deep RNN
NASA Astrophysics Data System (ADS)
Fan, J.; Li, Q.; Hou, J.; Feng, X.; Karimian, H.; Lin, S.
2017-10-01
Time series data in practical applications always contain missing values due to sensor malfunction, network failure, outliers, etc. To handle missing values in time series, as well as the lack of temporal modeling in conventional machine learning models, we propose a spatiotemporal prediction framework based on missing value processing algorithms and a deep recurrent neural network (DRNN). By using a missing tag and a missing interval to represent time series patterns, we implement three different missing value fixing algorithms, which are further incorporated into a deep neural network that consists of LSTM (Long Short-term Memory) layers and fully connected layers. Real-world air quality and meteorological datasets (Jingjinji area, China) are used for model training and testing. Deep feed forward neural networks (DFNN) and gradient boosting decision trees (GBDT) are trained as baseline models against the proposed DRNN. Performances of the three missing value fixing algorithms, as well as of the different machine learning models, are evaluated and analysed. Experiments show that the proposed DRNN framework outperforms both DFNN and GBDT, therefore validating the capacity of the proposed framework. Our results also provide useful insights for a better understanding of the different strategies that handle missing values.
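The missing tag / missing interval representation mentioned above can be sketched as follows; the forward fill is just one of the fixing schemes of the kind the paper compares, and the three-column layout is an assumption made for illustration.

    import numpy as np

    def tag_and_interval(series):
        # For each time step return (filled value, missing tag, steps since last observation).
        series = np.asarray(series, dtype=float)
        out = np.zeros((len(series), 3))
        last, gap = np.nan, 0.0
        for t, v in enumerate(series):
            if np.isnan(v):
                gap += 1.0
                out[t] = (last, 1.0, gap)          # forward-filled value, tag = 1 (missing)
            else:
                last, gap = v, 0.0
                out[t] = (v, 0.0, 0.0)
        return out

    print(tag_and_interval([35.0, np.nan, np.nan, 58.0, np.nan]))      # e.g. hourly PM2.5 readings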
A coordination theory for intelligent machines
NASA Technical Reports Server (NTRS)
Wang, Fei-Yue; Saridis, George N.
1990-01-01
A formal model for the coordination level of intelligent machines is established. The framework of the coordination level investigated consists of one dispatcher and a number of coordinators. The model called coordination structure has been used to describe analytically the information structure and information flow for the coordination activities in the coordination level. Specifically, the coordination structure offers a formalism to (1) describe the task translation of the dispatcher and coordinators; (2) represent the individual process within the dispatcher and coordinators; (3) specify the cooperation and connection among the dispatcher and coordinators; (4) perform the process analysis and evaluation; and (5) provide a control and communication mechanism for the real-time monitor or simulation of the coordination process. A simple procedure for the task scheduling in the coordination structure is presented. The task translation is achieved by a stochastic learning algorithm. The learning process is measured with entropy and its convergence is guaranteed. Finally, a case study of the coordination structure with three coordinators and one dispatcher for a simple intelligent manipulator system illustrates the proposed model and the simulation of the task processes performed on the model verifies the soundness of the theory.
Proceedings of the Second NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Munoz, Cesar (Editor)
2010-01-01
This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.
Thermodynamic work from operational principles
NASA Astrophysics Data System (ADS)
Gallego, R.; Eisert, J.; Wilming, H.
2016-10-01
In recent years we have witnessed a concentrated effort to make sense of thermodynamics for small-scale systems. One of the main difficulties is to capture a suitable notion of work that models realistically the purpose of quantum machines, in an analogous way to the role played, for macroscopic machines, by the energy stored in the idealisation of a lifted weight. Despite several attempts to resolve this issue by putting forward specific models, these are far from realistically capturing the transitions that a quantum machine is expected to perform. In this work, we adopt a novel strategy by considering arbitrary kinds of systems that one can attach to a quantum thermal machine and defining work quantifiers. These are functions that measure the value of a transition and generalise the concept of work beyond those models familiar from phenomenological thermodynamics. We do so by imposing simple operational axioms that any reasonable work quantifier must fulfil and by deriving from them stringent mathematical condition with a clear physical interpretation. Our approach allows us to derive much of the structure of the theory of thermodynamics without taking the definition of work as a primitive. We can derive, for any work quantifier, a quantitative second law in the sense of bounding the work that can be performed using some non-equilibrium resource by the work that is needed to create it. We also discuss in detail the role of reversibility and correlations in connection with the second law. Furthermore, we recover the usual identification of work with energy in degrees of freedom with vanishing entropy as a particular case of our formalism. Our mathematical results can be formulated abstractly and are general enough to carry over to other resource theories than quantum thermodynamics.
Code of Federal Regulations, 2011 CFR
2011-07-01
... separate grounding conductor located within the trailing cable of mobile and portable equipment and... conductor located within the direct-current power cable feeding stationary equipment and connected between... ground conductor connected between stationary equipment and the direct-current grounding medium; or, (d...
Code of Federal Regulations, 2012 CFR
2012-07-01
... separate grounding conductor located within the trailing cable of mobile and portable equipment and... conductor located within the direct-current power cable feeding stationary equipment and connected between... ground conductor connected between stationary equipment and the direct-current grounding medium; or, (d...
Code of Federal Regulations, 2013 CFR
2013-07-01
... separate grounding conductor located within the trailing cable of mobile and portable equipment and... conductor located within the direct-current power cable feeding stationary equipment and connected between... ground conductor connected between stationary equipment and the direct-current grounding medium; or, (d...
Code of Federal Regulations, 2010 CFR
2010-07-01
... separate grounding conductor located within the trailing cable of mobile and portable equipment and... conductor located within the direct-current power cable feeding stationary equipment and connected between... ground conductor connected between stationary equipment and the direct-current grounding medium; or, (d...
Code of Federal Regulations, 2014 CFR
2014-07-01
... separate grounding conductor located within the trailing cable of mobile and portable equipment and... conductor located within the direct-current power cable feeding stationary equipment and connected between... ground conductor connected between stationary equipment and the direct-current grounding medium; or, (d...
30 CFR 77.412 - Compressed air systems.
Code of Federal Regulations, 2010 CFR
2010-07-01
... used at connections to machines of high-pressure hose lines of 1-inch inside diameter or larger, and between high-pressure hose lines of 1-inch inside diameter or larger, where a connection failure would... shall be equipped with automatic pressure-relief valves, pressure gages, and drain valves. (b) Repairs...
DOT National Transportation Integrated Search
2016-12-25
The key objectives of this study were to: 1. Develop advanced analytical techniques that make use of a dynamically configurable connected vehicle message protocol to predict traffic flow regimes in near-real time in a virtual environment and examine ...
Klonoff, David C
2017-07-01
The Internet of Things (IoT) is generating an immense volume of data. With cloud computing, medical sensor and actuator data can be stored and analyzed remotely by distributed servers. The results can then be delivered via the Internet. The number of devices in IoT includes such wireless diabetes devices as blood glucose monitors, continuous glucose monitors, insulin pens, insulin pumps, and closed-loop systems. The cloud model for data storage and analysis is increasingly unable to process the data avalanche, and processing is being pushed out to the edge of the network closer to where the data-generating devices are. Fog computing and edge computing are two architectures for data handling that can offload data from the cloud, process it nearby the patient, and transmit information machine-to-machine or machine-to-human in milliseconds or seconds. Sensor data can be processed near the sensing and actuating devices with fog computing (with local nodes) and with edge computing (within the sensing devices). Compared to cloud computing, fog computing and edge computing offer five advantages: (1) greater data transmission speed, (2) less dependence on limited bandwidths, (3) greater privacy and security, (4) greater control over data generated in foreign countries where laws may limit use or permit unwanted governmental access, and (5) lower costs because more sensor-derived data are used locally and less data are transmitted remotely. Connected diabetes devices almost all use fog computing or edge computing because diabetes patients require a very rapid response to sensor input and cannot tolerate delays for cloud computing.
Li, Hang; Wang, Maolin; Gong, Ya-Nan; Yan, Aixia
2016-01-01
β-secretase (BACE1) is an aspartyl protease, which is considered as a novel vital target in Alzheimer`s disease therapy. We collected a data set of 294 BACE1 inhibitors, and built six classification models to discriminate active and weakly active inhibitors using Kohonen's Self-Organizing Map (SOM) method and Support Vector Machine (SVM) method. Each molecular descriptor was calculated using the program ADRIANA.Code. We adopted two different methods: random method and Self-Organizing Map method, for training/test set split. The descriptors were selected by F-score and stepwise linear regression analysis. The best SVM model Model2C has a good prediction performance on test set with prediction accuracy, sensitivity (SE), specificity (SP) and Matthews correlation coefficient (MCC) of 89.02%, 90%, 88%, 0.78, respectively. Model 1A is the best SOM model, whose accuracy and MCC of the test set were 94.57% and 0.98, respectively. The lone pair electronegativity and polarizability related descriptors importantly contributed to bioactivity of BACE1 inhibitor. The Extended-Connectivity Finger-Prints_4 (ECFP_4) analysis found some vitally key substructural features, which could be helpful for further drug design research. The SOM and SVM models built in this study can be obtained from the authors by email or other contacts.
Grid generation methodology and CFD simulations in sliding vane compressors and expanders
NASA Astrophysics Data System (ADS)
Bianchi, Giuseppe; Rane, Sham; Kovacevic, Ahmed; Cipollone, Roberto; Murgia, Stefano; Contaldi, Giulio
2017-08-01
The limiting factor for the employment of advanced 3D CFD tools in the analysis and design of rotary vane machines is the unavailability of methods for generation of computational grids suitable for fast and reliable numerical analysis. The paper addresses this challenge by presenting the development of an analytical grid generation for vane machines that is based on user-defined nodal displacement. In particular, mesh boundaries are defined as parametric curves generated using trigonometrical modelling of the axial cross section of the machine, while the distribution of computational nodes is performed using algebraic algorithms with transfinite interpolation, post orthogonalisation and smoothing. Algebraic control functions are introduced for the distribution of nodes on the rotor and casing boundaries in order to achieve good grid quality in terms of cell size and expansion. In this way, the moving and deforming fluid domain of the sliding vane machine is discretized and the conservation of intrinsic quantities is ensured by maintaining the cell connectivity and structure. For validation of the generated grids, a mid-size air compressor and a small-scale expander for Organic Rankine Cycle applications have been investigated in this paper. Remarks on implementation of the mesh motion algorithm, stability and robustness experienced with the ANSYS CFX solver, as well as the obtained flow results, are presented.
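A bare-bones version of the transfinite interpolation step mentioned above (without the orthogonalisation, smoothing or control functions) can be written as follows; the four boundary curves below describe a toy annular sector standing in for a vane-machine cross section, not the geometry used in the paper.

    import numpy as np

    def tfi_grid(bottom, top, left, right, ni=41, nj=17):
        # 2-D transfinite interpolation between four parametric boundary curves.
        # Each boundary function maps a parameter in [0, 1] to an (x, y) point.
        c00, c10 = np.array(bottom(0.0)), np.array(bottom(1.0))
        c01, c11 = np.array(top(0.0)), np.array(top(1.0))
        grid = np.zeros((ni, nj, 2))
        for i, s in enumerate(np.linspace(0.0, 1.0, ni)):
            for j, t in enumerate(np.linspace(0.0, 1.0, nj)):
                grid[i, j] = ((1 - t) * np.array(bottom(s)) + t * np.array(top(s))
                              + (1 - s) * np.array(left(t)) + s * np.array(right(t))
                              - ((1 - s) * (1 - t) * c00 + s * (1 - t) * c10
                                 + (1 - s) * t * c01 + s * t * c11))
        return grid

    # Toy domain: inner and outer arcs plus two radial sides.
    bottom = lambda s: (np.cos(s * np.pi / 2), np.sin(s * np.pi / 2))
    top    = lambda s: (2.0 * np.cos(s * np.pi / 2), 2.0 * np.sin(s * np.pi / 2))
    left   = lambda t: (1.0 + t, 0.0)
    right  = lambda t: (0.0, 1.0 + t)
    print(tfi_grid(bottom, top, left, right).shape)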
Myocardial perfusion characteristics during machine perfusion for heart transplantation.
Peltz, Matthias; Cobert, Michael L; Rosenbaum, David H; West, LaShondra M; Jessen, Michael E
2008-08-01
Optimal parameters for machine perfusion preservation of hearts prior to transplantation have not been determined. We sought to define regional myocardial perfusion characteristics of a machine perfusion device over a range of conditions in a large animal model. Dog hearts were connected to a perfusion device (LifeCradle, Organ Transport Systems, Inc, Frisco, TX) and cold perfused at differing flow rates (1) at initial device startup and (2) over the storage interval. Myocardial perfusion was determined by entrapment of colored microspheres. Myocardial oxygen consumption (MVO(2)) was estimated from inflow and outflow oxygen differences. Intra-myocardial lactate was determined by (1)H magnetic resonance spectroscopy. MVO(2) and tissue perfusion increased up to flows of 15 mL/100 g/min, and the ratio of epicardial:endocardial perfusion remained near 1:1. Perfusion at lower flow rates and when low rates were applied during startup resulted in decreased capillary flow and greater non-nutrient flow. Increased tissue perfusion correlated with lower myocardial lactate accumulation but greater edema. Myocardial perfusion is influenced by flow rates during device startup and during the preservation interval. Relative declines in nutrient flow at low flow rates may reflect greater aortic insufficiency. These factors may need to be considered in clinical transplant protocols using machine perfusion.
Vapor mediated droplet interactions - models and mechanisms (Part 2)
NASA Astrophysics Data System (ADS)
Benusiglio, Adrien; Cira, Nate; Prakash, Manu
2014-11-01
When deposited on clean glass a two-component binary mixture of propylene glycol and water is energetically inclined to spread, as both pure liquids do. Instead the mixture forms droplets stabilized by evaporation induced surface tension gradients, giving them unique properties such as negligible hysteresis. When two of these special droplets are deposited several radii apart they attract each other. The vapor from one droplet destabilizes the other, resulting in an attraction force which brings both droplets together. We present a flux-based model for droplet stabilization and a model which connects the vapor profile to net force. These simple models capture the static and dynamic experimental trends, and our fundamental understanding of these droplets and their interactions allowed us to build autonomous fluidic machines.
Cao, Longlong; Guo, Shuixia; Xue, Zhimin; Hu, Yong; Liu, Haihong; Mwansisya, Tumbwene E; Pu, Weidan; Yang, Bo; Liu, Chang; Feng, Jianfeng; Chen, Eric Y H; Liu, Zhening
2014-02-01
Aberrant brain functional connectivity patterns have been reported in major depressive disorder (MDD). It is unknown whether they can be used in discriminant analysis for diagnosis of MDD. In the present study we examined the efficiency of discriminant analysis of MDD by individualized computer-assisted diagnosis. Based on resting-state functional magnetic resonance imaging data, a new approach was adopted to investigate functional connectivity changes in 39 MDD patients and 37 well-matched healthy controls. By using the proposed feature selection method, we identified significantly altered functional connections in patients. They were subsequently applied to our analysis as discriminant features using a support vector machine classification method. Furthermore, the relative contribution of each functional connectivity was estimated. After subset selection of the high-dimensional features, the support vector machine classifier reached up to approximately 84% accuracy with leave-one-out training during the discrimination process. Through summarizing the classification contribution of functional connectivities, we obtained four obvious contribution modules: an inferior orbitofrontal module, a supramarginal gyrus module, an inferior parietal lobule-posterior cingulate gyrus module and a middle temporal gyrus-inferior temporal gyrus module. The experimental results demonstrated that the proposed method is effective in discriminating MDD patients from healthy controls. Functional connectivities might be useful as new biomarkers to assist clinicians in computer auxiliary diagnosis of MDD. © 2013 The Authors. Psychiatry and Clinical Neurosciences © 2013 Japanese Society of Psychiatry and Neurology.
Yang, Jing; He, Bao-Ji; Jang, Richard; Zhang, Yang; Shen, Hong-Bin
2015-01-01
Abstract Motivation: Cysteine-rich proteins cover many important families in nature but there are currently no methods specifically designed for modeling the structure of these proteins. The accuracy of disulfide connectivity pattern prediction, particularly for the proteins of higher-order connections, e.g. >3 bonds, is too low to effectively assist structure assembly simulations. Results: We propose a new hierarchical order reduction protocol called Cyscon for disulfide-bonding prediction. The most confident disulfide bonds are first identified and bonding prediction is then focused on the remaining cysteine residues based on SVR training. Compared with purely machine learning-based approaches, Cyscon improved the average accuracy of connectivity pattern prediction by 21.9%. For proteins with more than 5 disulfide bonds, Cyscon improved the accuracy by 585% on the benchmark set of PDBCYS. When applied to 158 non-redundant cysteine-rich proteins, Cyscon predictions helped increase (or decrease) the TM-score (or RMSD) of the ab initio QUARK modeling by 12.1% (or 14.4%). This result demonstrates a new avenue to improve the ab initio structure modeling for cysteine-rich proteins. Availability and implementation: http://www.csbio.sjtu.edu.cn/bioinf/Cyscon/ Contact: zhng@umich.edu or hbshen@sjtu.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26254435
A human cadaver fascial compartment pressure measurement model.
Messina, Frank C; Cooper, Dylan; Huffman, Gretchen; Bartkus, Edward; Wilbur, Lee
2013-10-01
Fresh human cadavers provide an effective model for procedural training. Currently, there are no realistic models to teach fascial compartment pressure measurement. We created a human cadaver fascial compartment pressure measurement model and studied its feasibility with a pre-post design. Three faculty members, following instructions from a common procedure textbook, used a standard handheld intra-compartment pressure monitor (Stryker(®), Kalamazoo, MI) to measure baseline pressures ("unembalmed") in the anterior, lateral, deep posterior, and superficial posterior compartments of the lower legs of a fresh human cadaver. The right femoral artery was then identified by superficial dissection, cannulated distally towards the lower leg, and connected to a standard embalming machine. After a 5-min infusion, the same three faculty members re-measured pressures ("embalmed") of the same compartments on the cannulated right leg. Unembalmed and embalmed readings for each compartment, and baseline readings for each leg, were compared using a two-sided paired t-test. The mean baseline compartment pressures did not differ between the right and left legs. Using the embalming machine, compartment pressure readings increased significantly over baseline for three of four fascial compartments; all in mm Hg (±SD): anterior from 40 (±9) to 143 (±44) (p = 0.08); lateral from 22 (±2.5) to 160 (±4.3) (p < 0.01); deep posterior from 34 (±7.9) to 161 (±15) (p < 0.01); superficial posterior from 33 (±0) to 140 (±13) (p < 0.01). We created a novel and measurable fascial compartment pressure measurement model in a fresh human cadaver using a standard embalming machine. Set-up is minimal and the model can be incorporated into teaching curricula. Copyright © 2013 Elsevier Inc. All rights reserved.
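The statistical comparison described above is a standard two-sided paired t-test; a minimal example with made-up placeholder pressures (not the study's raw data) is:

    from scipy import stats

    baseline = [38, 25, 31, 35]      # hypothetical pre-perfusion readings, mm Hg
    embalmed = [150, 155, 162, 138]  # hypothetical post-perfusion readings, mm Hg
    t, p = stats.ttest_rel(baseline, embalmed)
    print(f"paired t = {t:.2f}, two-sided p = {p:.4f}")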
Parallel matrix multiplication on the Connection Machine
NASA Technical Reports Server (NTRS)
Tichy, Walter F.
1988-01-01
Matrix multiplication is a computation and communication intensive problem. Six parallel algorithms for matrix multiplication on the Connection Machine are presented and compared with respect to their performance and processor usage. For n by n matrices, the algorithms have theoretical running times of O(n to the 2nd power log n), O(n log n), O(n), and O(log n), and require n, n to the 2nd power, n to the 2nd power, and n to the 3rd power processors, respectively. With careful attention to communication patterns, the theoretically predicted runtimes can indeed be achieved in practice. The parallel algorithms illustrate the tradeoffs between performance, communication cost, and processor usage.
NASA Technical Reports Server (NTRS)
Berger, Marsha J.; Saltzman, Jeff S.
1992-01-01
We describe the development of a structured adaptive mesh algorithm (AMR) for the Connection Machine-2 (CM-2). We develop a data layout scheme that preserves locality even for communication between fine and coarse grids. On 8K of a 32K machine we achieve performance slightly less than 1 CPU of the Cray Y-MP. We apply our algorithm to an inviscid compressible flow problem.
36 CFR 1254.84 - How may I use a debit card for copiers in the Washington, DC, area?
Code of Federal Regulations, 2011 CFR
2011-07-01
... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION PUBLIC AVAILABILITY AND USE USING RECORDS AND DONATED... machines located in the research rooms. Inserting the debit card into a card reader connected to the copier... add value to your card using the vending machine in the research room or at the Cashier's Office. We...
NASA Technical Reports Server (NTRS)
Gangal, M. D.; Isenberg, L.; Lewis, E. V.
1985-01-01
Proposed system offers safety and large return on investment. System, operating by year 2000, employs machines and processes based on proven principles. According to concept, line of parallel machines, connected in groups of four to service modules, attacks face of coal seam. High-pressure water jets and central auger on each machine break face. Jaws scoop up coal chunks, and auger grinds them and forces fragments into slurry-transport system. Slurry pumped through pipeline to point of use. Concept for highly automated coal-mining system increases productivity, makes mining safer, and protects health of mine workers.
Bray, James William [Niskayuna, NY; Garces, Luis Jose [Niskayuna, NY
2012-03-13
The disclosed technology is a cryogenic static exciter. The cryogenic static exciter is connected to a synchronous electric machine that has a field winding. The synchronous electric machine is cooled via a refrigerator or cryogen like liquid nitrogen. The static exciter is in communication with the field winding and is operating at ambient temperature. The static exciter receives cooling from a refrigerator or cryogen source, which may also service the synchronous machine, to selected areas of the static exciter and the cooling selectively reduces the operating temperature of the selected areas of the static exciter.
30 CFR 75.1730 - Compressed air; general; compressed air systems.
Code of Federal Regulations, 2010 CFR
2010-07-01
... shall be used at connections to machines of high-pressure hose lines of three-fourths of an inch inside diameter or larger, and between high-pressure hose lines of three-fourths of an inch inside diameter or larger, where a connection failure would create a hazard. For purposes of this paragraph, high-pressure...
The design of the new LHC connection cryostats
NASA Astrophysics Data System (ADS)
Vande Craen, A.; Barlow, G.; Eymin, C.; Moretti, M.; Parma, V.; Ramos, D.
2017-12-01
In the frame of the High Luminosity upgrade of the LHC, improved collimation schemes are needed to cope with the superconducting magnet quench limitations due to the increasing beam intensities and particle debris produced in the collision points. Two new TCLD collimators have to be installed on either side of the ALICE experiment to intercept heavy-ion particle debris. Beam optics solutions were found to place these collimators in the continuous cryostat of the machine, in the locations where connection cryostats, bridging a gap of about 13 m between adjacent magnets, are already present. It is therefore planned to replace these connection cryostats with two new shorter ones separated by a bypass cryostat allowing the collimators to be placed close to the beam pipes. The connection cryostats, of a new design when compared to the existing ones, will still have to ensure the continuity of the technical systems of the machine cryostat (i.e. beam lines, cryogenic and electrical circuits, insulation vacuum). This paper describes the functionalities and the design solutions implemented, as well as the plans for their construction.
A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth; Geveci, Berk
2014-11-01
The evolution of the computing world from teraflop to petaflop has been relatively effortless, with several of the existing programming models scaling effectively to the petascale. The migration to exascale, however, poses considerable challenges. All industry trends infer that the exascale machine will be built using processors containing hundreds to thousands of cores per chip. It can be inferred that efficient concurrency on exascale machines requires a massive amount of concurrent threads, each performing many operations on a localized piece of data. Currently, visualization libraries and applications are based off what is known as the visualization pipeline. In the pipeline model, algorithms are encapsulated as filters with inputs and outputs. These filters are connected by setting the output of one component to the input of another. Parallelism in the visualization pipeline is achieved by replicating the pipeline for each processing thread. This works well for today's distributed memory parallel computers but cannot be sustained when operating on processors with thousands of cores. Our project investigates a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme scale machines. Our framework achieves this by defining algorithms in terms of worklets, which are localized stateless operations. Worklets are atomic operations that execute when invoked, unlike filters, which execute when a pipeline request occurs. The worklet design allows execution on a massive amount of lightweight threads with minimal overhead. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale machine.
Ordered fast fourier transforms on a massively parallel hypercube multiprocessor
NASA Technical Reports Server (NTRS)
Tong, Charles; Swarztrauber, Paul N.
1989-01-01
Design alternatives for ordered Fast Fourier Transformation (FFT) algorithms were examined on massively parallel hypercube multiprocessors such as the Connection Machine. Particular emphasis is placed on reducing communication which is known to dominate the overall computing time. To this end, the order and computational phases of the FFT were combined, and the sequence to processor maps that reduce communication were used. The class of ordered transforms is expanded to include any FFT in which the order of the transform is the same as that of the input sequence. Two such orderings are examined, namely, standard-order and A-order which can be implemented with equal ease on the Connection Machine where orderings are determined by geometries and priorities. If the sequence has N = 2 exp r elements and the hypercube has P = 2 exp d processors, then a standard-order FFT can be implemented with d + r/2 + 1 parallel transmissions. An A-order sequence can be transformed with 2d - r/2 parallel transmissions which is r - d + 1 fewer than the standard order. A parallel method for computing the trigonometric coefficients is presented that does not use trigonometric functions or interprocessor communication. A performance of 0.9 GFLOPS was obtained for an A-order transform on the Connection Machine.
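The transmission counts quoted above are easy to tabulate; a short check (assuming r is even, as the quoted formulas require) is:

    def transmissions(r, d):
        # N = 2**r sequence elements distributed over P = 2**d hypercube processors.
        standard = d + r // 2 + 1        # standard-order FFT
        a_order = 2 * d - r // 2         # A-order FFT
        return standard, a_order, standard - a_order   # difference equals r - d + 1

    print(transmissions(r=16, d=10))     # e.g. a 65536-point FFT on 1024 processors -> (19, 12, 7)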
Method and device for determining bond separation strength using induction heating
NASA Technical Reports Server (NTRS)
Coultrip, Robert H. (Inventor); Johnson, Samuel D. (Inventor); Copeland, Carl E. (Inventor); Phillips, W. Morris (Inventor); Fox, Robert L. (Inventor)
1994-01-01
An induction heating device includes an induction heating gun which includes a housing, a U-shaped pole piece having two spaced apart opposite ends defining a gap there between, the U-shaped pole piece being mounted in one end of the housing, and a tank circuit including an induction coil wrapped around the pole piece and a capacitor connected to the induction coil. A power source is connected to the tank circuit. A pull test machine is provided having a stationary chuck and a movable chuck, the two chucks holding two test pieces bonded together at a bond region. The heating gun is mounted on the pull test machine in close proximity to the bond region of the two test pieces, whereby when the tank circuit is energized, the two test pieces are heated by induction heating while a tension load is applied to the two test pieces by the pull test machine to determine separation strength of the bond region.
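The tank circuit referred to above resonates at f0 = 1/(2*pi*sqrt(L*C)); a quick calculation with illustrative (assumed) component values is:

    import math

    L, C = 25e-6, 1.0e-6                               # assumed: 25 uH induction coil, 1 uF capacitor
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
    print(f"resonant frequency: {f0 / 1e3:.1f} kHz")   # about 31.8 kHz for these values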
Lithofacies classification of the Barnett Shale gas reservoir using neural network
NASA Astrophysics Data System (ADS)
Aliouane, Leila; Ouadfeul, Sid-Ali
2017-04-01
Here, we show the contribution of artificial intelligence techniques such as neural networks to predicting the lithofacies in the lower Barnett shale gas reservoir. A Multilayer Perceptron (MLP) neural network with the Hidden Weight Optimization algorithm is used. The input is raw well-log data recorded in a horizontal well drilled in the Lower Barnett shale formation, while the output is the concentration of clay and quartz calculated using the ELAN model and confirmed with core rock measurements. After training of the MLP machine, the connection weights are calculated; the raw well-log data of two other horizontal wells drilled in the same reservoir are then propagated through the neural machine and an output is calculated. Comparison between the predicted and measured clay and quartz concentrations in these two horizontal wells shows the ability of the neural network to improve shale gas reservoir characterization.
Report on the formal specification and partial verification of the VIPER microprocessor
NASA Technical Reports Server (NTRS)
Brock, Bishop; Hunt, Warren A., Jr.
1991-01-01
The VIPER microprocessor chip is partitioned into four levels of abstractions. At the highest level, VIPER is described with decreasingly abstract sets of functions in LCF-LSM. At the lowest level are the gate-level models in proprietary CAD languages. The block-level and gate-level specifications are also given in the ELLA simulation language. Among VIPER's deficiencies are the fact that there is no notion of external events in the top-level specification, and it is impossible to use the top-level specifications to prove abstract properties of programs running on VIPER computers. There is no complete proof that the gate-level specifications implement the top-level specifications. Cohn's proof that the major-state machine correctly implements the top-level specifications has no formal connection with any of the other proof attempts. None of the latter address resetting the machine, memory timeout, forced error, or single step modes.
Grid-connected in-stream hydroelectric generation based on the doubly fed induction machine
NASA Astrophysics Data System (ADS)
Lenberg, Timothy J.
Within the United States, there is a growing demand for new environmentally friendly power generation. This has led to a surge in wind turbine development. Unfortunately, wind is not a stable prime mover, but water is. Why not apply the advances made for wind to in-stream hydroelectric generation? One important advancement is the creation of the Doubly Fed Induction Machine (DFIM). This thesis covers the application of a gearless DFIM topology for hydrokinetic generation. After providing background, this thesis presents many of the options available for the mechanical portion of the design. A mechanical turbine is then specified. Next, a method is presented for designing a DFIM including the actual design for this application. In Chapter 4, a simulation model of the system is presented, complete with a control system that maximizes power generation based on water speed. This section then goes on to present simulation results demonstrating proper operation.
Computational neuroanatomy: ontology-based representation of neural components and connectivity
Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron
2009-01-01
Background A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. Results We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Conclusion Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future. PMID:19208191
Thermodynamic metrics and optimal paths.
Sivak, David A; Crooks, Gavin E
2012-05-11
A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
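For readers unfamiliar with the formalism, one common way to write the linear-response objects the abstract refers to is sketched below; the notation (control parameters lambda, conjugate forces X, inverse temperature beta) is assumed rather than taken from the paper, so consult the original for exact conventions.

```latex
\langle P_{\mathrm{ex}} \rangle \;\approx\; \dot{\lambda}^{\mathsf T}\,\zeta(\lambda)\,\dot{\lambda},
\qquad
\zeta_{ij}(\lambda) \;=\; \beta \int_{0}^{\infty} dt'\,
  \big\langle \delta X_{j}(t')\, \delta X_{i}(0) \big\rangle_{\lambda},
\qquad
\mathcal{L} \;=\; \int_{0}^{\Delta t} dt\,
  \sqrt{\dot{\lambda}^{\mathsf T}\,\zeta(\lambda)\,\dot{\lambda}} .
```

In this picture the friction tensor plays the role of a Riemannian metric, the path length L bounds the mean excess work of a protocol of duration Delta t through <W_ex> >= L^2 / Delta t (by Cauchy-Schwarz), and optimal protocols are those traversed at constant metric speed.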
36 CFR 1254.84 - How may I use a debit card for copiers in the Washington, DC, area?
Code of Federal Regulations, 2012 CFR
2012-07-01
...'s Office is closed or at any other time during the hours research rooms are open as cited in part... machines located in the research rooms. Inserting the debit card into a card reader connected to the copier... add value to your card using the vending machine in the research room or at the Cashier's Office. We...
36 CFR 1254.84 - How may I use a debit card for copiers in the Washington, DC, area?
Code of Federal Regulations, 2014 CFR
2014-07-01
...'s Office is closed or at any other time during the hours research rooms are open as cited in part... machines located in the research rooms. Inserting the debit card into a card reader connected to the copier... add value to your card using the vending machine in the research room or at the Cashier's Office. We...
Bio-Inspired Human-Level Machine Learning
2015-10-25
...extensions to high-level cognitive functions such as the anagram-solving problem. We expect that the bio-inspired human-level machine learning, combined with ... the approximately 10^11 neurons and 10^14 synaptic connections in the human brain. In previous work, we experimentally demonstrated the feasibility of cognitive...
Magnetostrictive Vibration Damper and Energy Harvester for Rotating Machinery
NASA Technical Reports Server (NTRS)
Deng, Zhangxian; Asnani, Vivake M.; Dapino, Marcelo J.
2015-01-01
Vibrations generated by machine driveline components can cause excessive noise and structural damage. Magnetostrictive materials, including Galfenol (iron-gallium alloys) and Terfenol-D (terbium-iron-dysprosium alloys), are able to convert mechanical energy to magnetic energy. A magnetostrictive vibration ring is proposed, which generates electrical energy and dampens vibration, when installed in a machine driveline. A 2D axisymmetric finite element (FE) model incorporating magnetic, mechanical, and electrical dynamics is constructed in COMSOL Multiphysics. Based on the model, a parametric study considering magnetostrictive material geometry, pickup coil size, bias magnet strength, flux path design, and electrical load is conducted to maximize loss factor and average electrical output power. By connecting various resistive loads to the pickup coil, the maximum loss factors for Galfenol and Terfenol-D due to electrical energy loss are identified as 0.14 and 0.34, respectively. The maximum average electrical output power for Galfenol and Terfenol-D is 0.21 W and 0.58 W, respectively. The loss factors for Galfenol and Terfenol-D are increased to 0.59 and 1.83, respectively, by using an L-C resonant circuit.
NASA Technical Reports Server (NTRS)
Kahraman, Ahmet
2002-01-01
In this study, design requirements for a dynamically viable, four-square type gear test machine are investigated. Variations of four-square type gear test machines have been in use for durability and dynamics testing of both parallel- and cross-axis gear sets. The basic layout of these machines is illustrated. The test rig is formed by two gear pairs of the same reduction ratio, a test gear pair and a reaction gear pair, connected to each other through shafts of a certain torsional flexibility to form an efficient, closed-loop system. A desired level of constant torque is input to the circuit through mechanical (a split coupling with a torque arm) or hydraulic (a hydraulic actuator) means. The system is then driven at any desired speed by a small DC motor. The main task at hand is the isolation of the test gear pair from the reaction gear pair under dynamic conditions. Any disturbances originating at the reaction gear mesh might potentially travel to the test gearbox, altering the dynamic loading conditions of the test gear mesh and, hence, influencing the outcome of the durability or dynamics test. Therefore, a proper design of the connecting structures becomes a major priority. Equally important is the issue of how close the operating speed of the machine is to the resonant frequencies of the gear meshes. This study focuses on a detailed analysis of the current NASA Glenn Research Center gear pitting test machine for evaluation of its resonance and vibration isolation characteristics. A number of these machines, such as the one illustrated, have been used over the last 30 years to establish an extensive database on the influence of gear materials, processes, surface treatments, and lubricants on gear durability. This study is intended to guide an optimum design of next-generation test machines with the most desirable dynamic characteristics.
NASA Technical Reports Server (NTRS)
Dagum, Leonardo
1989-01-01
The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.
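A serial NumPy sketch of the ranking step described (arranging particle records so that each cell's particles are contiguous and can be sliced immediately) is given below; it is only a stand-in for the paper's parallel ranking algorithm and the Connection Machine's microcoded rank instruction, and the particle and cell counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_cells = 12, 4

# Each particle record carries the index of the cell it currently occupies.
cell_of_particle = rng.integers(0, n_cells, size=n_particles)

# Rank particles so that all particles in cell 0 come first, then cell 1, ...
# A stable sort keeps a deterministic order within each cell.
order = np.argsort(cell_of_particle, kind="stable")
sorted_cells = cell_of_particle[order]

# Offsets give immediate access to the slice of particles belonging to a cell.
counts = np.bincount(sorted_cells, minlength=n_cells)
offsets = np.concatenate(([0], np.cumsum(counts)))

cell = 2
particles_in_cell = order[offsets[cell]:offsets[cell + 1]]
print(cell_of_particle, particles_in_cell)
```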
50 GFlops molecular dynamics on the Connection Machine 5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lomdahl, P.S.; Tamayo, P.; Groenbech-Jensen, N.
1993-12-31
The authors present timings and performance numbers for a new short-range three-dimensional (3D) molecular dynamics (MD) code, SPaSM, on the Connection Machine-5 (CM-5). They demonstrate that runs with more than 10^8 particles are now possible on massively parallel MIMD computers. To the best of their knowledge this is at least an order of magnitude more particles than what has previously been reported. Typical production runs show sustained performance (including communication) in the range of 47-50 GFlops on a 1024-node CM-5 with vector units (VUs). The speed of the code scales linearly with the number of processors and with the number of particles and shows 95% parallel efficiency in the speedup.
Towards a Multiscale Approach to Cybersecurity Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay
2013-11-12
We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties (connectivity, distance, and centrality) for a system under an active attack. We focus on theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of multiscale graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
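As a point of reference for the single-scale metric the paper generalizes, the short networkx sketch below computes an ordinary shortest-path distance from an assumed attacker foothold to an assumed sensitive machine on a toy host graph; the multiscale analog developed in the paper is not reproduced here.

```python
import networkx as nx

# Toy enterprise network: nodes are hosts, edges are allowed connections.
G = nx.Graph()
G.add_edges_from([
    ("laptop", "wifi_ap"), ("wifi_ap", "gateway"),
    ("gateway", "web_server"), ("web_server", "db_server"),
    ("gateway", "workstation"), ("workstation", "db_server"),
])

compromised = "laptop"    # assumed attacker foothold
sensitive = "db_server"   # assumed high-value target

# Single-scale (flat-graph) shortest-path distance and one example path;
# the paper's multiscale version would answer this on a hierarchy of
# coarsened graphs instead of the flat host graph.
hops = nx.shortest_path_length(G, compromised, sensitive)
path = nx.shortest_path(G, compromised, sensitive)
print(hops, path)
```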
Automatic ball bar for a coordinate measuring machine
Jostlein, H.
1997-07-15
An automatic ball bar for a coordinate measuring machine determines the accuracy of a coordinate measuring machine having at least one servo drive. The apparatus comprises a first and second gauge ball connected by a telescoping rigid member. The rigid member includes a switch such that inward radial movement of the second gauge ball relative to the first gauge ball causes activation of the switch. The first gauge ball is secured in a first magnetic socket assembly in order to maintain the first gauge ball at a fixed location with respect to the coordinate measuring machine. A second magnetic socket assembly secures the second gauge ball to the arm or probe holder of the coordinate measuring machine. The second gauge ball is then directed by the coordinate measuring machine to move radially inward from a point just beyond the length of the ball bar until the switch is activated. Upon switch activation, the position of the coordinate measuring machine is determined and compared to known ball bar length such that the accuracy of the coordinate measuring machine can be determined. 5 figs.
Automatic ball bar for a coordinate measuring machine
Jostlein, Hans
1997-01-01
An automatic ball bar for a coordinate measuring machine determines the accuracy of a coordinate measuring machine having at least one servo drive. The apparatus comprises a first and second gauge ball connected by a telescoping rigid member. The rigid member includes a switch such that inward radial movement of the second gauge ball relative to the first gauge ball causes activation of the switch. The first gauge ball is secured in a first magnetic socket assembly in order to maintain the first gauge ball at a fixed location with respect to the coordinate measuring machine. A second magnetic socket assembly secures the second gauge ball to the arm or probe holder of the coordinate measuring machine. The second gauge ball is then directed by the coordinate measuring machine to move radially inward from a point just beyond the length of the ball bar until the switch is activated. Upon switch activation, the position of the coordinate measuring machine is determined and compared to known ball bar length such that the accuracy of the coordinate measuring machine can be determined.
Atwood and Poggendorff: an insightful analogy
NASA Astrophysics Data System (ADS)
Coelho, R. L.; Borges, P. F.; Karam, R.
2016-11-01
Atwood’s treatise, in which the Atwood machine appears, was published in 1784. About 70 years later, Poggendorff showed experimentally that the apparent weight of an Atwood machine is reduced when it is set in motion. In the present paper, a twofold connection between this experiment and the Atwood machine is established. Firstly, if the Poggendorff apparatus is taken as an ideal one, its equations of motion coincide with the equations of motion of the compound Atwood machine. Secondly, if the Poggendorff apparatus, which works as a lever, is rebalanced, the equations of this equilibrium provide us with the solution for a compound Atwood machine with the same bodies. This analogy is pedagogically useful because it illustrates a common strategy of transforming a dynamic situation into a static one, improving students’ comprehension of Newton’s laws and equilibrium.
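For context, the standard relations for a simple (non-compound) Atwood machine with a massless, frictionless pulley and string make Poggendorff's observation explicit; the notation below is assumed, not the paper's.

```latex
a \;=\; \frac{(m_1 - m_2)\,g}{m_1 + m_2},
\qquad
T \;=\; \frac{2\, m_1 m_2\, g}{m_1 + m_2},
\qquad
F_{\mathrm{support}} \;=\; 2T \;=\; \frac{4\, m_1 m_2\, g}{m_1 + m_2}
\;\le\; (m_1 + m_2)\, g .
```

The inequality is strict whenever the two masses differ, so once the machine is released the force on the support is less than the total weight of the hanging bodies, which is precisely the weight reduction Poggendorff measured.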
Prediction of brain maturity in infants using machine-learning algorithms.
Smyser, Christopher D; Dosenbach, Nico U F; Smyser, Tara A; Snyder, Abraham Z; Rogers, Cynthia E; Inder, Terrie E; Schlaggar, Bradley L; Neil, Jeffrey J
2016-08-01
Recent resting-state functional MRI investigations have demonstrated that much of the large-scale functional network architecture supporting motor, sensory and cognitive functions in older pediatric and adult populations is present in term- and prematurely-born infants. Application of new analytical approaches can help translate the improved understanding of early functional connectivity provided through these studies into predictive models of neurodevelopmental outcome. One approach to achieving this goal is multivariate pattern analysis, a machine-learning, pattern classification approach well-suited for high-dimensional neuroimaging data. It has previously been adapted to predict brain maturity in children and adolescents using structural and resting state-functional MRI data. In this study, we evaluated resting state-functional MRI data from 50 preterm-born infants (born at 23-29 weeks of gestation and without moderate-severe brain injury) scanned at term equivalent postmenstrual age compared with data from 50 term-born control infants studied within the first week of life. Using 214 regions of interest, binary support vector machines distinguished term from preterm infants with 84% accuracy (p < 0.0001). Inter- and intra-hemispheric connections throughout the brain were important for group categorization, indicating that widespread changes in the brain's functional network architecture associated with preterm birth are detectable by term equivalent age. Support vector regression enabled quantitative estimation of birth gestational age in single subjects using only term equivalent resting state-functional MRI data, indicating that the present approach is sensitive to the degree of disruption of brain development associated with preterm birth (using gestational age as a surrogate for the extent of disruption). This suggests that support vector regression may provide a means for predicting neurodevelopmental outcome in individual infants. Copyright © 2016 Elsevier Inc. All rights reserved.
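A compressed scikit-learn sketch of the two analysis steps named in the abstract (a binary support vector machine for term versus preterm classification, and support vector regression for gestational age) is given below; the synthetic features, injected group effect, kernels, and cross-validation settings are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
n_subjects, n_features = 100, 300   # stand-in for region-pair correlations

X = rng.normal(size=(n_subjects, n_features))
is_term = np.repeat([0, 1], n_subjects // 2)             # 0 = preterm, 1 = term
gest_age = np.where(is_term == 1, 40, 26) + rng.normal(0, 1.5, n_subjects)
X[is_term == 1, :20] += 0.8                               # injected group effect

# Binary classification: term vs. preterm from connectivity features.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, is_term, cv=5).mean()

# Regression: estimate birth gestational age in single subjects.
reg = make_pipeline(StandardScaler(), SVR(kernel="linear"))
mae = -cross_val_score(reg, X, gest_age, cv=5,
                       scoring="neg_mean_absolute_error").mean()
print(f"accuracy ~ {acc:.2f}, MAE ~ {mae:.1f} weeks")
```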
Prediction of brain maturity in infants using machine-learning algorithms
Smyser, Christopher D.; Dosenbach, Nico U.F.; Smyser, Tara A.; Snyder, Abraham Z.; Rogers, Cynthia E.; Inder, Terrie E.; Schlaggar, Bradley L.; Neil, Jeffrey J.
2016-01-01
Recent resting-state functional MRI investigations have demonstrated that much of the large-scale functional network architecture supporting motor, sensory and cognitive functions in older pediatric and adult populations is present in term- and prematurely-born infants. Application of new analytical approaches can help translate the improved understanding of early functional connectivity provided through these studies into predictive models of neurodevelopmental outcome. One approach to achieving this goal is multivariate pattern analysis, a machine-learning, pattern classification approach well-suited for high-dimensional neuroimaging data. It has previously been adapted to predict brain maturity in children and adolescents using structural and resting state-functional MRI data. In this study, we evaluated resting state-functional MRI data from 50 preterm-born infants (born at 23–29 weeks of gestation and without moderate–severe brain injury) scanned at term equivalent postmenstrual age compared with data from 50 term-born control infants studied within the first week of life. Using 214 regions of interest, binary support vector machines distinguished term from preterm infants with 84% accuracy (p < 0.0001). Inter- and intra-hemispheric connections throughout the brain were important for group categorization, indicating that widespread changes in the brain's functional network architecture associated with preterm birth are detectable by term equivalent age. Support vector regression enabled quantitative estimation of birth gestational age in single subjects using only term equivalent resting state-functional MRI data, indicating that the present approach is sensitive to the degree of disruption of brain development associated with preterm birth (using gestational age as a surrogate for the extent of disruption). This suggests that support vector regression may provide a means for predicting neurodevelopmental outcome in individual infants. PMID:27179605
Santangelo, Bruna; Robin, Astrid; Simpson, Keith; Potier, Julie; Guichardant, Michel; Portier, Karine
2017-01-01
Xenon, due to its interesting anesthetic properties, could improve the quality of anesthesia protocols in horses despite its high price. This study aimed to modify and test an anesthesia machine capable of delivering xenon to a horse. An equine anesthesia machine (Tafonius, Vetronic Services Ltd., UK) was modified by including a T-connector in the valve block to introduce xenon, so that the xenon was pushed into the machine cylinder by the expired gases. A xenon analyzer was connected to the expiratory limb of the patient circuit. The operation of the machine was modeled and experimentally tested for denitrogenation, wash-in, and maintenance phases. The system was considered to consist of two compartments, one being the horse's lungs, the other being the machine cylinder and circuit. A 15-year-old, 514-kg, healthy gelding horse was anesthetized for 70 min using acepromazine, romifidine, morphine, diazepam, and ketamine. Anesthesia was maintained with xenon and oxygen, co-administered with lidocaine. Ventilation was controlled. Cardiorespiratory variables, expired fraction of xenon (FeXe), and blood gases were measured, and xenon was detected in plasma. Recovery was unassisted and recorded. FeXe remained around 65%, using a total xenon volume of 250 L. Five additional boli of ketamine were required to maintain anesthesia. PaO2 was 45 ± 1 mmHg. The recovery was calm. Xenon was detected in blood during the entire administration time. This pilot study describes how to deliver xenon to a horse. Although many technical problems were encountered, their correction could guide future endeavors to study the use of xenon in horses.
Cost-sensitive AdaBoost algorithm for ordinal regression based on extreme learning machine.
Riccardi, Annalisa; Fernández-Navarro, Francisco; Carloni, Sante
2014-10-01
In this paper, the well known stagewise additive modeling using a multiclass exponential (SAMME) boosting algorithm is extended to address problems where there exists a natural order in the targets using a cost-sensitive approach. The proposed ensemble model uses an extreme learning machine (ELM) model as a base classifier (with the Gaussian kernel and the additional regularization parameter). The closed form of the derived weighted least squares problem is provided, and it is employed to estimate analytically the parameters connecting the hidden layer to the output layer at each iteration of the boosting algorithm. Compared to the state-of-the-art boosting algorithms, in particular those using ELM as base classifier, the suggested technique does not require the generation of a new training dataset at each iteration. The adoption of the weighted least squares formulation of the problem has been presented as an unbiased and alternative approach to the already existing ELM boosting techniques. Moreover, the addition of a cost model for weighting the patterns, according to the order of the targets, enables the classifier to tackle ordinal regression problems further. The proposed method has been validated by an experimental study by comparing it with already existing ensemble methods and ELM techniques for ordinal regression, showing competitive results.
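To make the closed-form step concrete, the following NumPy sketch fits ELM output weights by weighted, regularized least squares at a single (hypothetical) boosting iteration; the random-tanh hidden layer stands in for the paper's Gaussian-kernel ELM, and the per-pattern weights and regularization constant are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_hidden, n_classes = 200, 5, 50, 3
C = 1.0                                   # regularization parameter

X = rng.normal(size=(n, d))
y = rng.integers(0, n_classes, size=n)
T = np.eye(n_classes)[y]                  # one-hot targets
w = rng.uniform(0.5, 2.0, size=n)         # per-pattern boosting/cost weights

# Random hidden layer (ELM): input weights and biases are never trained.
W_in = rng.normal(size=(d, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W_in + b)

# Weighted, regularized least squares for the output weights, in closed form:
#   beta = (H^T W H + I/C)^(-1) H^T W T
Wm = np.diag(w)
beta = np.linalg.solve(H.T @ Wm @ H + np.eye(n_hidden) / C, H.T @ Wm @ T)

pred = np.argmax(H @ beta, axis=1)
print("training accuracy:", (pred == y).mean())
```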
Chip breaking system for automated machine tool
Arehart, Theodore A.; Carey, Donald O.
1987-01-01
The invention is a rotary selectively directional valve assembly for use in an automated turret lathe for directing a stream of high pressure liquid machining coolant to the interface of a machine tool and workpiece for breaking up ribbon-shaped chips during the formation thereof so as to inhibit scratching or other marring of the machined surfaces by these ribbon-shaped chips. The valve assembly is provided by a manifold arrangement having a plurality of circumferentially spaced apart ports each coupled to a machine tool. The manifold is rotatable with the turret when the turret is positioned for alignment of a machine tool in a machining relationship with the workpiece. The manifold is connected to a non-rotational header having a single passageway therethrough which conveys the high pressure coolant to only the port in the manifold which is in registry with the tool disposed in a working relationship with the workpiece. To position the machine tools the turret is rotated and one of the tools is placed in a material-removing relationship of the workpiece. The passageway in the header and one of the ports in the manifold arrangement are then automatically aligned to supply the machining coolant to the machine tool workpiece interface for breaking up of the chips as well as cooling the tool and workpiece during the machining operation.
A new digitized reverse correction method for hypoid gears based on a one-dimensional probe
NASA Astrophysics Data System (ADS)
Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo
2017-12-01
In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.
Sparse network-based models for patient classification using fMRI
Rosa, Maria J.; Portugal, Liana; Hahn, Tim; Fallgatter, Andreas J.; Garrido, Marta I.; Shawe-Taylor, John; Mourao-Miranda, Janaina
2015-01-01
Pattern recognition applied to whole-brain neuroimaging data, such as functional Magnetic Resonance Imaging (fMRI), has proved successful at discriminating psychiatric patients from healthy participants. However, predictive patterns obtained from whole-brain voxel-based features are difficult to interpret in terms of the underlying neurobiology. Many psychiatric disorders, such as depression and schizophrenia, are thought to be brain connectivity disorders. Therefore, pattern recognition based on network models might provide deeper insights and potentially more powerful predictions than whole-brain voxel-based approaches. Here, we build a novel sparse network-based discriminative modeling framework, based on Gaussian graphical models and L1-norm regularized linear Support Vector Machines (SVM). In addition, the proposed framework is optimized in terms of both predictive power and reproducibility/stability of the patterns. Our approach aims to provide better pattern interpretation than voxel-based whole-brain approaches by yielding stable brain connectivity patterns that underlie discriminative changes in brain function between the groups. We illustrate our technique by classifying patients with major depressive disorder (MDD) and healthy participants, in two (event- and block-related) fMRI datasets acquired while participants performed a gender discrimination and emotional task, respectively, during the visualization of emotional valent faces. PMID:25463459
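A minimal sketch of the two ingredients named, a per-subject sparse Gaussian graphical model followed by an L1-regularized linear SVM on the resulting connectivity features, is shown below with scikit-learn; the vectorized precision matrices and toy group effect are simplifications and do not reproduce the paper's optimization for reproducibility and stability of the patterns.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_subjects, n_regions, n_timepoints = 40, 10, 120
labels = np.repeat([0, 1], n_subjects // 2)     # 0 = control, 1 = patient

features = []
for subj in range(n_subjects):
    ts = rng.normal(size=(n_timepoints, n_regions))
    if labels[subj] == 1:                        # toy group difference
        ts[:, 1] += 0.5 * ts[:, 0]
    # Sparse inverse covariance (Gaussian graphical model) per subject.
    ggm = GraphicalLassoCV().fit(ts)
    iu = np.triu_indices(n_regions, k=1)
    features.append(ggm.precision_[iu])          # vectorize upper triangle
X = np.array(features)

# L1-penalized linear SVM yields a sparse, more interpretable pattern.
svm = LinearSVC(penalty="l1", dual=False, C=0.5, max_iter=5000)
print("CV accuracy:", cross_val_score(svm, X, labels, cv=5).mean())
```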
NASA Astrophysics Data System (ADS)
Li, Hui; Hong, Lu-Yao; Zhou, Qing; Yu, Hai-Jie
2015-08-01
The business failure of numerous companies results in financial crises. The high social costs associated with such crises have led people to search for effective tools for business risk prediction, among which the support vector machine is very effective. Several modelling means, including single-technique modelling, hybrid modelling, and ensemble modelling, have been suggested for forecasting business risk with support vector machines. However, the existing literature seldom focuses on a general modelling frame for business risk prediction, and seldom investigates performance differences among different modelling means. We reviewed research on forecasting business risk with support vector machines, proposed the general assisted prediction modelling frame with hybridisation and ensemble (APMF-WHAE), and finally investigated the use of principal components analysis, support vector machines, random sampling, and group decision under the general frame in forecasting business risk. Under the APMF-WHAE frame with the support vector machine as the base predictive model, four specific predictive models were produced: a pure support vector machine, a hybrid support vector machine involving principal components analysis, a support vector machine ensemble involving random sampling and group decision, and an ensemble of hybrid support vector machines using group decision to integrate various hybrid support vector machines built on variables produced from principal components analysis and samples from random sampling. The experimental results indicate that the hybrid support vector machine and the ensemble of hybrid support vector machines produced better performance than the pure support vector machine and the support vector machine ensemble.
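As a rough illustration, the scikit-learn sketch below builds two of the four variants described: a hybrid PCA-plus-SVM model, and an ensemble of such hybrids trained on random samples with majority voting standing in for the group decision. The dataset and hyperparameters are placeholders rather than the paper's financial data and settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder for financial-ratio data labeled healthy vs. at-risk firms.
X, y = make_classification(n_samples=400, n_features=30, n_informative=10,
                           random_state=0)

# Hybrid model: principal components analysis feeding a support vector machine.
hybrid = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())

# Ensemble of hybrids: bootstrap random sampling plus majority voting plays
# the role of the group decision (older scikit-learn versions name the
# first argument base_estimator instead of estimator).
ensemble = BaggingClassifier(estimator=hybrid, n_estimators=25,
                             max_samples=0.8, random_state=0)

for name, model in [("hybrid SVM", hybrid), ("ensemble of hybrids", ensemble)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```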
Implementation of a parallel unstructured Euler solver on the CM-5
NASA Technical Reports Server (NTRS)
Morano, Eric; Mavriplis, D. J.
1995-01-01
An efficient unstructured 3D Euler solver is parallelized on a Thinking Machine Corporation Connection Machine 5, distributed memory computer with vectoring capability. In this paper, the single instruction multiple data (SIMD) strategy is employed through the use of the CM Fortran language and the CMSSL scientific library. The performance of the CMSSL mesh partitioner is evaluated and the overall efficiency of the parallel flow solver is discussed.
36 CFR § 1254.84 - How may I use a debit card for copiers in the Washington, DC, area?
Code of Federal Regulations, 2013 CFR
2013-07-01
...'s Office is closed or at any other time during the hours research rooms are open as cited in part... machines located in the research rooms. Inserting the debit card into a card reader connected to the copier... add value to your card using the vending machine in the research room or at the Cashier's Office. We...
Owen, Whitney H.
1980-01-01
A polyphase rotary induction machine for use as a motor or generator utilizing a single rotor assembly having two series connected sets of rotor windings, a first stator winding disposed around the first rotor winding and means for controlling the current induced in one set of the rotor windings compared to the current induced in the other set of the rotor windings. The rotor windings may be wound rotor windings or squirrel cage windings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trujillo, Angelina Michelle
Strategy, planning, and acquisition: very large scale computing platforms come and go, and planning for immensely scalable machines often precedes actual procurement by three years; procurement itself can take another year or more. Integration: after acquisition, machines must be integrated into the computing environments at LANL, including connection to scalable storage via large-scale storage networking and assurance of correct and secure operations. Management and utilization: ongoing operations, maintenance, and troubleshooting of the hardware and systems software at massive scale are required.
14. View to the east up the Sugar River. The ...
14. View to the east up the Sugar River. The 1920 enclosed wooden footbridge connected the Chain Machine Building to the company's power plant, pattern shop, and foundry. The Sullivan Machinery Co. Erecting Shop and Machine Shops are in the center of the photo, and the Baltimore truss bridge is visible in the background. - Sullivan Machinery Company, Main Street between Pearl & Water Streets, Claremont, Sullivan County, NH
Optimal quantum cloning based on the maximin principle by using a priori information
NASA Astrophysics Data System (ADS)
Kang, Peng; Dai, Hong-Yi; Wei, Jia-Hua; Zhang, Ming
2016-10-01
We propose an optimal 1 →2 quantum cloning method based on the maximin principle by making full use of a priori information of amplitude and phase about the general cloned qubit input set, which is a simply connected region enclosed by a "longitude-latitude grid" on the Bloch sphere. Theoretically, the fidelity of the optimal quantum cloning machine derived from this method is the largest in terms of the maximin principle compared with that of any other machine. The problem solving is an optimization process that involves six unknown complex variables, six vectors in an uncertain-dimensional complex vector space, and four equality constraints. Moreover, by restricting the structure of the quantum cloning machine, the optimization problem is simplified as a three-real-parameter suboptimization problem with only one equality constraint. We obtain the explicit formula for a suboptimal quantum cloning machine. Additionally, the fidelity of our suboptimal quantum cloning machine is higher than or at least equal to that of universal quantum cloning machines and phase-covariant quantum cloning machines. It is also underlined that the suboptimal cloning machine outperforms the "belt quantum cloning machine" for some cases.
Operation of micro and molecular machines: a new concept with its origins in interface science.
Ariga, Katsuhiko; Ishihara, Shinsuke; Izawa, Hironori; Xia, Hong; Hill, Jonathan P
2011-03-21
A landmark accomplishment of nanotechnology would be successful fabrication of ultrasmall machines that can work like tweezers, motors, or even computing devices. Now we must consider how operation of micro- and molecular machines might be implemented for a wide range of applications. If these machines function only under limited conditions and/or require specialized apparatus then they are useless for practical applications. Therefore, it is important to carefully consider the access of functionality of the molecular or nanoscale systems by conventional stimuli at the macroscopic level. In this perspective, we will outline the position of micro- and molecular machines in current science and technology. Most of these machines are operated by light irradiation, application of electrical or magnetic fields, chemical reactions, and thermal fluctuations, which cannot always be applied in remote machine operation. We also propose strategies for molecular machine operation using the most conventional of stimuli, that of macroscopic mechanical force, achieved through mechanical operation of molecular machines located at an air-water interface. The crucial roles of the characteristics of an interfacial environment, i.e. connection between macroscopic dimension and nanoscopic function, and contact of media with different dielectric natures, are also described.
Towards a generalized energy prediction model for machine tools
Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H.; Dornfeld, David A.; Helu, Moneer; Rachuri, Sudarsan
2017-01-01
Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based, energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process. PMID:28652687
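A minimal Gaussian Process regression sketch in the spirit of the data-driven model is given below, using scikit-learn; the process parameters, energy response, and kernel choices are synthetic stand-ins for the real machine-tool data, and the reported band merely illustrates the kind of uncertainty interval the paper assesses.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(0)

# Synthetic process parameters: [feed rate, spindle speed, depth of cut].
X = rng.uniform([50, 1000, 0.2], [500, 8000, 2.0], size=(200, 3))
# Made-up energy response (kJ) with noise, standing in for measured data.
energy = (0.02 * X[:, 0] + 0.001 * X[:, 1] + 5.0 * X[:, 2]
          + rng.normal(0, 0.5, 200))

kernel = ConstantKernel() * RBF(length_scale=[100, 2000, 0.5]) + WhiteKernel()
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, energy)

# Predict energy with an uncertainty interval for a new operation.
X_new = np.array([[200.0, 4000.0, 1.0]])
mean, std = gp.predict(X_new, return_std=True)
print(f"predicted energy {mean[0]:.1f} kJ, ~95% interval +/- {1.96 * std[0]:.1f} kJ")
```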
Towards a generalized energy prediction model for machine tools.
Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H; Dornfeld, David A; Helu, Moneer; Rachuri, Sudarsan
2017-04-01
Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based, energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process.
ERIC Educational Resources Information Center
Shohet, Linda, Ed.
1996-01-01
This document contains four issues of a journal that aims to connect literacy in the schools, the community, and the workplace. Each issue also contains an insert focusing on media literacy. Some of the topics covered in the spring 1995 issue include the following: positioning literacy--naming literacy; literacy and machines--an overview of the…
Learning to Monitor Machine Health with Convolutional Bi-Directional LSTM Networks
Zhao, Rui; Yan, Ruqiang; Wang, Jinjiang; Mao, Kezhi
2017-01-01
In modern manufacturing systems and industries, more and more research efforts have been made in developing effective machine health monitoring systems. Among various machine health monitoring approaches, data-driven methods are gaining in popularity due to the development of advanced sensing and data analytic techniques. However, considering the noise, varying length and irregular sampling behind sensory data, this kind of sequential data cannot be fed into classification and regression models directly. Therefore, previous work focuses on feature extraction/fusion methods requiring expensive human labor and high quality expert knowledge. With the development of deep learning methods in the last few years, which redefine representation learning from raw data, a deep neural network structure named Convolutional Bi-directional Long Short-Term Memory networks (CBLSTM) has been designed here to address raw sensory data. CBLSTM firstly uses CNN to extract local features that are robust and informative from the sequential input. Then, bi-directional LSTM is introduced to encode temporal information. Long Short-Term Memory networks (LSTMs) are able to capture long-term dependencies and model sequential data, and the bi-directional structure enables the capture of past and future contexts. Stacked, fully-connected layers and the linear regression layer are built on top of bi-directional LSTMs to predict the target value. Here, a real-life tool wear test is introduced, and our proposed CBLSTM is able to predict the actual tool wear based on raw sensory data. The experimental results have shown that our model is able to outperform several state-of-the-art baseline methods. PMID:28146106
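A compact PyTorch sketch of the architecture as described (CNN for local features, bi-directional LSTM for temporal context, stacked fully connected layers with a linear regression output) follows; the layer sizes, channel counts, and pooling are illustrative choices, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CBLSTM(nn.Module):
    """Convolutional bi-directional LSTM for regression on raw sensory sequences."""

    def __init__(self, n_channels=3, conv_dim=32, lstm_dim=64):
        super().__init__()
        # Local feature extraction from the raw multichannel signal.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, conv_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bi-directional LSTM encodes past and future temporal context.
        self.lstm = nn.LSTM(conv_dim, lstm_dim, batch_first=True,
                            bidirectional=True)
        # Stacked fully connected layers plus linear regression output.
        self.head = nn.Sequential(
            nn.Linear(2 * lstm_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):              # x: (batch, channels, time)
        feats = self.cnn(x)            # (batch, conv_dim, time/2)
        feats = feats.transpose(1, 2)  # (batch, time/2, conv_dim)
        out, _ = self.lstm(feats)      # (batch, time/2, 2*lstm_dim)
        return self.head(out[:, -1])   # regress from the last time step

model = CBLSTM()
dummy = torch.randn(8, 3, 200)   # e.g. 8 cuts x 3 force channels x 200 samples
print(model(dummy).shape)        # torch.Size([8, 1])
```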
Learning to Monitor Machine Health with Convolutional Bi-Directional LSTM Networks.
Zhao, Rui; Yan, Ruqiang; Wang, Jinjiang; Mao, Kezhi
2017-01-30
In modern manufacturing systems and industries, more and more research efforts have been made in developing effective machine health monitoring systems. Among various machine health monitoring approaches, data-driven methods are gaining in popularity due to the development of advanced sensing and data analytic techniques. However, considering the noise, varying length and irregular sampling behind sensory data, this kind of sequential data cannot be fed into classification and regression models directly. Therefore, previous work focuses on feature extraction/fusion methods requiring expensive human labor and high quality expert knowledge. With the development of deep learning methods in the last few years, which redefine representation learning from raw data, a deep neural network structure named Convolutional Bi-directional Long Short-Term Memory networks (CBLSTM) has been designed here to address raw sensory data. CBLSTM firstly uses CNN to extract local features that are robust and informative from the sequential input. Then, bi-directional LSTM is introduced to encode temporal information. Long Short-Term Memory networks(LSTMs) are able to capture long-term dependencies and model sequential data, and the bi-directional structure enables the capture of past and future contexts. Stacked, fully-connected layers and the linear regression layer are built on top of bi-directional LSTMs to predict the target value. Here, a real-life tool wear test is introduced, and our proposed CBLSTM is able to predict the actual tool wear based on raw sensory data. The experimental results have shown that our model is able to outperform several state-of-the-art baseline methods.
Park, Ji Eun; Park, Bumwoo; Kim, Sang Joon; Kim, Ho Sung; Choi, Choong Gon; Jung, Seung Chai; Oh, Joo Young; Lee, Jae-Hong; Roh, Jee Hoon; Shim, Woo Hyun
2017-01-01
To identify potential imaging biomarkers of Alzheimer's disease by combining brain cortical thickness (CThk) and functional connectivity and to validate this model's diagnostic accuracy in a validation set. Data from 98 subjects was retrospectively reviewed, including a study set (n = 63) and a validation set from the Alzheimer's Disease Neuroimaging Initiative (n = 35). From each subject, data for CThk and functional connectivity of the default mode network was extracted from structural T1-weighted and resting-state functional magnetic resonance imaging. Cortical regions with significant differences between patients and healthy controls in the correlation of CThk and functional connectivity were identified in the study set. The diagnostic accuracy of functional connectivity measures combined with CThk in the identified regions was evaluated against that in the medial temporal lobes using the validation set and application of a support vector machine. Group-wise differences in the correlation of CThk and default mode network functional connectivity were identified in the superior temporal ( p < 0.001) and supramarginal gyrus ( p = 0.007) of the left cerebral hemisphere. Default mode network functional connectivity combined with the CThk of those two regions were more accurate than that combined with the CThk of both medial temporal lobes (91.7% vs. 75%). Combining functional information with CThk of the superior temporal and supramarginal gyri in the left cerebral hemisphere improves diagnostic accuracy, making it a potential imaging biomarker for Alzheimer's disease.
Rapid prototyping and stereolithography in dentistry
Nayar, Sanjna; Bhuminathan, S.; Bhat, Wasim Manzoor
2015-01-01
The word rapid prototyping (RP) was first used in the mechanical engineering field in the early 1980s to describe the act of producing a prototype, a unique product, the first product, or a reference model. In the past, prototypes were handmade by sculpting or casting, and their fabrication demanded a long time. Any and every prototype should undergo evaluation, correction of defects, and approval before the beginning of its mass or large scale production. Prototypes may also be used for specific or restricted purposes, in which case they are usually called a preseries model. With the development of information technology, three-dimensional models can be devised and built based on virtual prototypes. Computers can now be used to create accurately detailed projects that can be assessed from different perspectives in a process known as computer aided design (CAD). To materialize virtual objects using CAD, a computer aided manufacture (CAM) process has been developed. To transform a virtual file into a real object, CAM operates using a machine connected to a computer, similar to a printer or peripheral device. In 1987, Brix and Lambrecht used, for the first time, a prototype in health care. It was a three-dimensional model manufactured using a computer numerical control device, a type of machine that was the predecessor of RP. In 1991, human anatomy models produced with a technology called stereolithography were first used in a maxillofacial surgery clinic in Vienna. PMID:26015715
Rapid prototyping and stereolithography in dentistry.
Nayar, Sanjna; Bhuminathan, S; Bhat, Wasim Manzoor
2015-04-01
The word rapid prototyping (RP) was first used in the mechanical engineering field in the early 1980s to describe the act of producing a prototype, a unique product, the first product, or a reference model. In the past, prototypes were handmade by sculpting or casting, and their fabrication demanded a long time. Any and every prototype should undergo evaluation, correction of defects, and approval before the beginning of its mass or large scale production. Prototypes may also be used for specific or restricted purposes, in which case they are usually called a preseries model. With the development of information technology, three-dimensional models can be devised and built based on virtual prototypes. Computers can now be used to create accurately detailed projects that can be assessed from different perspectives in a process known as computer aided design (CAD). To materialize virtual objects using CAD, a computer aided manufacture (CAM) process has been developed. To transform a virtual file into a real object, CAM operates using a machine connected to a computer, similar to a printer or peripheral device. In 1987, Brix and Lambrecht used, for the first time, a prototype in health care. It was a three-dimensional model manufactured using a computer numerical control device, a type of machine that was the predecessor of RP. In 1991, human anatomy models produced with a technology called stereolithography were first used in a maxillofacial surgery clinic in Vienna.
A consideration of the operation of automatic production machines.
Hoshi, Toshiro; Sugimoto, Noboru
2015-01-01
At worksites, various automatic production machines are in use to release workers from muscular labor or labor in the detrimental environment. On the other hand, a large number of industrial accidents have been caused by automatic production machines. In view of this, this paper considers the operation of automatic production machines from the viewpoint of accident prevention, and points out two types of machine operation - operation for which quick performance is required (operation that is not permitted to be delayed) - and operation for which composed performance is required (operation that is not permitted to be performed in haste). These operations are distinguished by operation buttons of suitable colors and shapes. This paper shows that these characteristics are evaluated as "asymmetric on the time-axis". Here, in order for workers to accept the risk of automatic production machines, it is preconditioned in general that harm should be sufficiently small or avoidance of harm is easy. In this connection, this paper shows the possibility of facilitating the acceptance of the risk of automatic production machines by enhancing the asymmetric on the time-axis.
A consideration of the operation of automatic production machines
HOSHI, Toshiro; SUGIMOTO, Noboru
2015-01-01
At worksites, various automatic production machines are in use to release workers from muscular labor or labor in the detrimental environment. On the other hand, a large number of industrial accidents have been caused by automatic production machines. In view of this, this paper considers the operation of automatic production machines from the viewpoint of accident prevention, and points out two types of machine operation − operation for which quick performance is required (operation that is not permitted to be delayed) − and operation for which composed performance is required (operation that is not permitted to be performed in haste). These operations are distinguished by operation buttons of suitable colors and shapes. This paper shows that these characteristics are evaluated as “asymmetric on the time-axis”. Here, in order for workers to accept the risk of automatic production machines, it is preconditioned in general that harm should be sufficiently small or avoidance of harm is easy. In this connection, this paper shows the possibility of facilitating the acceptance of the risk of automatic production machines by enhancing the asymmetric on the time-axis. PMID:25739898
Robust Airborne Networking Extensions (RANGE)
2008-02-01
... the IMUNES [13] project, which provides entire network stack virtualization and topology control inside a single FreeBSD machine. The emulated topology ... A computer with an Ethernet connection, or a Linux virtual machine on some other (e.g., Windows) operating system, should work. 2.1 Patching the source code
The scheme machine: A case study in progress in design derivation at system levels
NASA Technical Reports Server (NTRS)
Johnson, Steven D.
1995-01-01
The Scheme Machine is one of several design projects of the Digital Design Derivation group at Indiana University. It differs from the other projects in its focus on issues of system design and its connection to surrounding research in programming language semantics, compiler construction, and programming methodology underway at Indiana and elsewhere. The genesis of the project dates to the early 1980's, when digital design derivation research branched from the surrounding research effort in programming languages. Both branches have continued to develop in parallel, with this particular project serving as a bridge. However, by 1990 there remained little real interaction between the branches, and recently we have undertaken to reintegrate them. On the software side, researchers have refined a mathematically rigorous (but not mechanized) treatment starting with the fully abstract semantic definition of Scheme and resulting in an efficient implementation consisting of a compiler and virtual machine model, the latter typically realized with a general purpose microprocessor. The derivation includes a number of sophisticated factorizations and representations and is also a deep example of the underlying engineering methodology. The hardware research has created a mechanized algebra supporting the tedious and massive transformations often seen at lower levels of design. This work has progressed to the point that large scale devices, such as processors, can be derived from first-order finite state machine specifications. This is roughly where the language oriented research stops; thus, together, the two efforts establish a thread from the highest levels of abstract specification to detailed digital implementation. The Scheme Machine project challenges hardware derivation research in several ways, although the individual components of the system are of a similar scale to those we have worked with before. The machine has a custom dual-ported memory to support garbage collection. It consists of four tightly coupled processes--processor, collector, allocator, memory--with a very non-trivial synchronization relationship. Finally, there are deep issues of representation for the run-time objects of a symbolic processing language. The research centers on verification through integrated formal reasoning systems, but is also involved with modeling and prototyping environments. Since the derivation algebra is based on an executable modeling language, there is opportunity to incorporate design animation in the design process. We are looking for ways to move smoothly and incrementally from executable specifications into hardware realization. For example, we can run the garbage collector specification, a Scheme program, directly against the physical memory prototype, and similarly, the instruction processor model against the heap implementation.
Guevara, Edgar; Pierre, Wyston C.; Tessier, Camille; Akakpo, Luis; Londono, Irène; Lesage, Frédéric; Lodygensky, Gregory A.
2017-01-01
Very preterm newborns have an increased risk of developing an inflammatory cerebral white matter injury that may lead to severe neuro-cognitive impairment. In this study we performed functional connectivity (fc) analysis using resting-state optical imaging of intrinsic signals (rs-OIS) to assess the impact of inflammation on resting-state networks (RSN) in a pre-clinical model of perinatal inflammatory brain injury. Lipopolysaccharide (LPS) or saline injections were administered to postnatal day 3 (P3) rat pups, and optical imaging of intrinsic signals was obtained 3 weeks later. Seed-based rs-OIS fc analysis, including spatial extent, was performed. A support vector machine (SVM) was then used to classify rat pups into two categories using fc measures, and an artificial neural network (ANN) was implemented to predict lesion size from those same fc measures. A significant decrease in the spatial extent of fc statistical maps was observed in the injured group, across contrasts and seeds (*p = 0.0452 for HbO2 and **p = 0.0036 for HbR). Both machine learning techniques were applied successfully, yielding 92% accuracy in group classification and a significant correlation r = 0.9431 in fractional lesion volume prediction (**p = 0.0020). Our results suggest that fc is altered in the injured newborn brain, showing the long-standing effect of inflammation. PMID:28725174
Machine learnt bond order potential to model metal-organic (Co-C) heterostructures.
Narayanan, Badri; Chan, Henry; Kinaci, Alper; Sen, Fatih G; Gray, Stephen K; Chan, Maria K Y; Sankaranarayanan, Subramanian K R S
2017-11-30
A fundamental understanding of the inter-relationships between structure, morphology, atomic scale dynamics, chemistry, and physical properties of mixed metallic-covalent systems is essential to design novel functional materials for applications in flexible nano-electronics, energy storage and catalysis. To achieve such knowledge, it is imperative to develop robust and computationally efficient atomistic models that describe atomic interactions accurately within a single framework. Here, we present a unified Tersoff-Brenner type bond order potential (BOP) for a Co-C system, trained against lattice parameters, cohesive energies, equation of state, and elastic constants of different crystalline phases of cobalt as well as orthorhombic Co 2 C derived from density functional theory (DFT) calculations. The independent BOP parameters are determined using a combination of supervised machine learning (genetic algorithms) and local minimization via the simplex method. Our newly developed BOP accurately describes the structural, thermodynamic, mechanical, and surface properties of both the elemental components as well as the carbide phases, in excellent accordance with DFT calculations and experiments. Using our machine-learnt BOP potential, we performed large-scale molecular dynamics simulations to investigate the effect of metal/carbon concentration on the structure and mechanical properties of porous architectures obtained via self-assembly of cobalt nanoparticles and fullerene molecules. Such porous structures have implications in flexible electronics, where materials with high electrical conductivity and low elastic stiffness are desired. Using unsupervised machine learning (clustering), we identify the pore structure, pore-distribution, and metallic conduction pathways in self-assembled structures at different C/Co ratios. We find that as the C/Co ratio increases, the connectivity between the Co nanoparticles becomes limited, likely resulting in low electrical conductivity; on the other hand, such C-rich hybrid structures are highly flexible (i.e., low stiffness). The BOP model developed in this work is a valuable tool to investigate atomic scale processes, structure-property relationships, and temperature/pressure response of Co-C systems, as well as design organic-inorganic hybrid structures with a desired set of properties.
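As a toy illustration of the unsupervised-clustering step (identifying connected metallic regions from atomic coordinates), the sketch below applies DBSCAN to synthetic Co positions with an assumed bonding cutoff; the coordinates, cutoff, and cluster statistics are fabricated stand-ins, not results from the BOP simulations.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Synthetic Co atom coordinates: two nanoparticle-like blobs plus stragglers,
# standing in for a snapshot from the self-assembly MD runs.
blob1 = rng.normal(loc=(0, 0, 0), scale=2.0, size=(300, 3))
blob2 = rng.normal(loc=(15, 0, 0), scale=2.0, size=(300, 3))
stray = rng.uniform(-5, 20, size=(20, 3))
co_positions = np.vstack([blob1, blob2, stray])

# Atoms closer than an assumed bonding cutoff (here 3.0, in arbitrary length
# units) belong to the same connected metallic cluster; DBSCAN recovers them.
labels = DBSCAN(eps=3.0, min_samples=4).fit_predict(co_positions)

clusters = set(labels) - {-1}
largest = max((np.sum(labels == k) for k in clusters), default=0)
print(f"{len(clusters)} Co clusters; largest contains {largest} atoms "
      f"({np.sum(labels == -1)} unclustered atoms)")
```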
A Comprehensive and Cost-Effective Computer Infrastructure for K-12 Schools
NASA Technical Reports Server (NTRS)
Warren, G. P.; Seaton, J. M.
1996-01-01
Since 1993, NASA Langley Research Center has been developing and implementing a low-cost Internet connection model, including system architecture, training, and support, to provide Internet access for an entire network of computers. This infrastructure allows local area networks which exceed 50 machines per school to independently access the complete functionality of the Internet by connecting to a central site, using state-of-the-art commercial modem technology, through a single standard telephone line. By locating high-cost resources at this central site and sharing these resources and their costs among the school districts throughout a region, a practical, efficient, and affordable infrastructure for providing scalable Internet connectivity has been developed. As the demand for faster Internet access grows, the model has a simple expansion path that eliminates the need to replace major system components and re-train personnel. Observations of Internet usage within an environment, particularly school classrooms, have shown that after an initial period of 'surfing,' the Internet traffic becomes repetitive. By automatically storing requested Internet information on a high-capacity networked disk drive at the local site (network-based disk caching), then updating this information only when it changes, well over 80 percent of the Internet traffic that leaves a location can be eliminated by retrieving the information from the local disk cache.
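The network-based disk caching idea can be illustrated with a few lines of code. The sketch below is not the NASA Langley implementation; the cache directory, hashing scheme and lack of freshness checks are simplifying assumptions.

```python
# Illustrative sketch of local disk caching: serve repeated requests from a
# local cache directory and fetch from the network only on a cache miss.
# A real deployment would also check whether the origin content has changed.
import hashlib
import os
import urllib.request

CACHE_DIR = "web_cache"                      # assumed local cache location
os.makedirs(CACHE_DIR, exist_ok=True)

def fetch(url: str) -> bytes:
    """Return content for url, preferring the local disk cache."""
    key = hashlib.sha256(url.encode()).hexdigest()
    path = os.path.join(CACHE_DIR, key)
    if os.path.exists(path):                 # cache hit: no WAN traffic needed
        with open(path, "rb") as f:
            return f.read()
    with urllib.request.urlopen(url) as resp:  # cache miss: fetch and store
        data = resp.read()
    with open(path, "wb") as f:
        f.write(data)
    return data
```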
Utilization of building information modeling in infrastructure’s design and construction
NASA Astrophysics Data System (ADS)
Zak, Josef; Macadam, Helen
2017-09-01
Building Information Modeling (BIM) is a concept that has gained its place in the design, construction and maintenance of buildings in the Czech Republic during recent years. This paper deals with the description of usage, applications and potential benefits and disadvantages connected with the implementation of BIM principles in the preparation and construction of infrastructure projects. Part of the paper describes the status of BIM implementation in the Czech Republic, and there is a review of several virtual design and construction practices in the Czech Republic. Examples of best practice are presented from current infrastructure projects. The paper further summarizes experiences with new technologies gained from the application of BIM-related workflows. The focus is on BIM model utilization for machine control systems on site, quality assurance, quality management and construction management.
NASA Astrophysics Data System (ADS)
DSouza, Adora M.; Abidin, Anas Z.; Chockanathan, Udaysankar; Wismüller, Axel
2018-03-01
In this study, we investigate whether there are discernable changes in influence that brain regions have on themselves once patients show symptoms of HIV Associated Neurocognitive Disorder (HAND) using functional MRI (fMRI). Simple functional connectivity measures, such as correlation cannot reveal such information. To this end, we use mutual connectivity analysis (MCA) with Local Models (LM), which reveals a measure of influence in terms of predictability. Once such measures of interaction are obtained, we train two classifiers to characterize difference in patterns of regional self-influence between healthy subjects and subjects presenting with HAND symptoms. The two classifiers we use are Support Vector Machines (SVM) and Localized Generalized Matrix Learning Vector Quantization (LGMLVQ). Performing machine learning on fMRI connectivity measures is popularly known as multi-voxel pattern analysis (MVPA). By performing such an analysis, we are interested in studying the impact HIV infection has on an individual's brain. The high area under receiver operating curve (AUC) and accuracy values for 100 different train/test separations using MCA-LM self-influence measures (SVM: mean AUC=0.86, LGMLVQ: mean AUC=0.88, SVM and LGMLVQ: mean accuracy=0.78) compared with standard MVPA analysis using cross-correlation between fMRI time-series (SVM: mean AUC=0.58, LGMLVQ: mean AUC=0.57), demonstrates that self-influence features can be more discriminative than measures of interaction between time-series pairs. Furthermore, our results suggest that incorporating measures of self-influence in MVPA analysis used commonly in fMRI analysis has the potential to provide a performance boost and indicate important changes in dynamics of regions in the brain as a consequence of HIV infection.
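A hedged sketch of the evaluation protocol (mean AUC of an SVM over 100 random train/test separations) is shown below; the features are synthetic stand-ins for the MCA-LM self-influence measures, and the classifier settings are assumptions rather than the authors' exact configuration.

```python
# Minimal sketch of the evaluation protocol: mean AUC of an SVM over
# 100 random train/test splits on synthetic "self-influence" features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 30))               # subjects x self-influence features
y = np.repeat([0, 1], 30)                   # 0 = healthy, 1 = HAND
X[y == 1, :5] += 0.7                        # inject a weak group difference

aucs = []
for seed in range(100):
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=seed)
    clf = SVC(kernel="linear").fit(Xtr, ytr)
    score = clf.decision_function(Xte)      # use the SVM margin as ranking score
    aucs.append(roc_auc_score(yte, score))
print(f"mean AUC over 100 splits: {np.mean(aucs):.2f}")
```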
Sequence invariant state machines
NASA Technical Reports Server (NTRS)
Whitaker, S.; Manjunath, S.
1990-01-01
A synthesis method and new VLSI architecture are introduced to realize sequential circuits that have the ability to implement any state machine having N states and m inputs, regardless of the actual sequence specified in the flow table. A design method is proposed that utilizes BTS logic to implement regular and dense circuits. A given state sequence can be programmed with power supply connections or dynamically reallocated if stored in a register. Arbitrary flow table sequences can be modified or programmed to dynamically alter the function of the machine. This allows VLSI controllers to be designed with the programmability of a general purpose processor but with the compact size and performance of dedicated logic.
Sequence-invariant state machines
NASA Technical Reports Server (NTRS)
Whitaker, Sterling R.; Manjunath, Shamanna K.; Maki, Gary K.
1991-01-01
A synthesis method and an MOS VLSI architecture are presented to realize sequential circuits that have the ability to implement any state machine having N states and m inputs, regardless of the actual sequence specified in the flow table. The design method utilizes binary tree structured (BTS) logic to implement regular and dense circuits. The desired state sequence can be hardwired with power supply connections or can be dynamically reallocated if stored in a register. This allows programmable VLSI controllers to be designed with a compact size and performance approaching that of dedicated logic. Results of ICV implementations are reported and an example sequence-invariant state machine is contrasted with implementations based on traditional methods.
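A toy software analogue of a sequence-invariant state machine may help make the idea concrete: the flow-table sequence lives in a writable table (the "register"), so the same machine structure realizes any state sequence and can be reprogrammed on the fly. The class and tables below are illustrative, not taken from the papers.

```python
# Toy software analogue of a sequence-invariant state machine: the next-state
# table is programmable, so one fixed structure implements any flow table.
class SequenceInvariantSM:
    def __init__(self, n_states: int, n_inputs: int):
        # next_state[state][input] can be rewritten at any time
        self.next_state = [[0] * n_inputs for _ in range(n_states)]
        self.state = 0

    def program(self, table):
        """Load a new flow-table sequence (dynamic reallocation)."""
        self.next_state = [row[:] for row in table]

    def step(self, x: int) -> int:
        self.state = self.next_state[self.state][x]
        return self.state

sm = SequenceInvariantSM(n_states=4, n_inputs=2)
sm.program([[1, 0], [2, 0], [3, 1], [0, 2]])    # one arbitrary state sequence
print([sm.step(x) for x in [0, 0, 0, 0]])        # cycles through 1, 2, 3, 0
sm.program([[3, 1], [0, 2], [1, 3], [2, 0]])     # reprogrammed on the fly
```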
Parallel Computational Fluid Dynamics: Current Status and Future Requirements
NASA Technical Reports Server (NTRS)
Simon, Horst D.; VanDalsem, William R.; Dagum, Leonardo; Kutler, Paul (Technical Monitor)
1994-01-01
One of the key objectives of the Applied Research Branch in the Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center is the accelerated introduction of highly parallel machines into a full operational environment. In this report we discuss the performance results obtained from the implementation of some computational fluid dynamics (CFD) applications on the Connection Machine CM-2 and the Intel iPSC/860. We summarize some of the experience gained so far with the parallel testbed machines at the NAS Applied Research Branch. Then we discuss the long term computational requirements for accomplishing some of the grand challenge problems in computational aerosciences. We argue that only massively parallel machines will be able to meet these grand challenge requirements, and we outline the computer science and algorithm research challenges ahead.
A Prototype Symbolic Model of Canonical Functional Neuroanatomy of the Motor System
Rubin, Daniel L.; Halle, Michael; Musen, Mark; Kikinis, Ron
2008-01-01
Recent advances in bioinformatics have opened entire new avenues for organizing, integrating and retrieving neuroscientific data, in a digital, machine-processable format, which can be at the same time understood by humans, using ontological, symbolic data representations. Declarative information stored in ontological format can be perused and maintained by domain experts, interpreted by machines, and serve as basis for a multitude of decision-support, computerized simulation, data mining, and teaching applications. We have developed a prototype symbolic model of canonical neuroanatomy of the motor system. Our symbolic model is intended to support symbolic lookup, logical inference and mathematical modeling by integrating descriptive, qualitative and quantitative functional neuroanatomical knowledge. Furthermore, we show how our approach can be extended to modeling impaired brain connectivity in disease states, such as common movement disorders. In developing our ontology, we adopted a disciplined modeling approach, relying on a set of declared principles, a high-level schema, Aristotelian definitions, and a frame-based authoring system. These features, along with the use of the Unified Medical Language System (UMLS) vocabulary, enable the alignment of our functional ontology with an existing comprehensive ontology of human anatomy, and thus allow for combining the structural and functional views of neuroanatomy for clinical decision support and neuroanatomy teaching applications. Although the scope of our current prototype ontology is limited to a particular functional system in the brain, it may be possible to adapt this approach for modeling other brain functional systems as well. PMID:18164666
Effective switching frequency multiplier inverter
Su, Gui-Jia [Oak Ridge, TN; Peng, Fang Z [Okemos, MI
2007-08-07
A switching frequency multiplier inverter for low-inductance machines uses a parallel connection of switches, with each switch independently controlled according to a pulse width modulation scheme. The effective switching frequency is multiplied by the number of switches connected in parallel while each individual switch operates within its limit of switching frequency. This technique can also be used for other power converters such as DC/DC and AC/DC converters.
Mumtaz, Wajid; Ali, Syed Saad Azhar; Yasin, Mohd Azhar Mohd; Malik, Aamir Saeed
2018-02-01
Major depressive disorder (MDD), a debilitating mental illness, can cause functional disabilities and can become a social problem. An accurate and early diagnosis of depression can be challenging. This paper proposed a machine learning framework involving EEG-derived synchronization likelihood (SL) features as input data for automatic diagnosis of MDD. It was hypothesized that EEG-based SL features could discriminate MDD patients and healthy controls with an acceptable accuracy, better than measures such as interhemispheric coherence and mutual information. In this work, classification models such as support vector machine (SVM), logistic regression (LR) and Naïve Bayesian (NB) were employed to model the relationship between the EEG features and the study groups (MDD patients and healthy controls) and ultimately to achieve discrimination of the study participants. The results indicated that the classification rates were better than chance. More specifically, the study resulted in SVM classification accuracy = 98%, sensitivity = 99.9%, specificity = 95% and f-measure = 0.97; LR classification accuracy = 91.7%, sensitivity = 86.66%, specificity = 96.6% and f-measure = 0.90; NB classification accuracy = 93.6%, sensitivity = 100%, specificity = 87.9% and f-measure = 0.95. In conclusion, SL could be a promising method for diagnosing depression. The findings could be generalized to develop a robust CAD-based tool that may help for clinical purposes.
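The modelling step can be sketched as follows with scikit-learn: SVM, logistic regression and naive Bayes compared on the same features, reporting accuracy, sensitivity, specificity and F-measure. The synchronization-likelihood features here are random placeholders, so the printed numbers will not match the paper's.

```python
# Hedged sketch: compare SVM, LR and Naive Bayes on synthetic stand-ins for
# EEG-derived synchronization-likelihood features.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix, f1_score

rng = np.random.default_rng(2)
X = rng.normal(size=(64, 40))                # subjects x SL features (assumed)
y = np.repeat([0, 1], 32)                    # 0 = healthy, 1 = MDD
X[y == 1, :8] += 0.9                         # inject a group difference

for name, model in [("SVM", SVC()),
                    ("LR", LogisticRegression(max_iter=1000)),
                    ("NB", GaussianNB())]:
    pred = cross_val_predict(model, X, y, cv=10)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    print(f"{name}: acc={(tp + tn) / len(y):.2f} "
          f"sens={sens:.2f} spec={spec:.2f} F={f1_score(y, pred):.2f}")
```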
Hybrid BCI approach to control an artificial tibio-femoral joint.
Mercado, Luis; Rodriguez-Linan, Angel; Torres-Trevino, Luis M; Quiroz, G
2016-08-01
Brain-Computer Interfaces (BCIs) for disabled people should allow them to use their remaining functionalities as control possibilities. BCIs connect the brain with external devices to perform the volition or intent of movement, even if that individual is unable to perform the task due to body impairments. In this work we fuse electromyographic (EMG) with electroencephalographic (EEG) activity in a framework called the "Hybrid-BCI" (hBCI) approach to control the movement of a simulated tibio-femoral joint. Two mathematical models of a tibio-femoral joint are used to emulate the kinematic and dynamic behavior of the knee. The interest is to reproduce different velocities of the human gait cycle. The EEG signals are used to classify the user intent, namely the velocity changes, while the surface EMG signals are used to estimate the amplitude of such intent. A multi-level controller is used to solve the trajectory tracking problem involved. The lower level consists of an individual controller for each model; it solves the tracking of the desired trajectory even considering different velocities of the human gait cycle. The mid-level uses a combination of a logical operator and a finite state machine for the switching between models. Finally, the highest level consists of a support vector machine to classify the desired activity.
NASA Astrophysics Data System (ADS)
Tudose, Alexandru; Terstyansky, Gabor; Kacsuk, Peter; Winter, Stephen
Grid Application Repositories vary greatly in terms of access interface, security system, implementation technology, communication protocols and repository model. This diversity has become a significant limitation in terms of interoperability and inter-repository access. This paper presents the Grid Application Meta-Repository System (GAMRS) as a solution that offers better options for the management of Grid applications. GAMRS proposes a generic repository architecture, which allows any Grid Application Repository (GAR) to be connected to the system independent of their underlying technology. It also presents applications in a uniform manner and makes applications from all connected repositories visible to web search engines, OGSI/WSRF Grid Services and other OAI (Open Archive Initiative)-compliant repositories. GAMRS can also function as a repository in its own right and can store applications under a new repository model. With the help of this model, applications can be presented as embedded in virtual machines (VM) and therefore they can be run in their native environments and can easily be deployed on virtualized infrastructures allowing interoperability with new generation technologies such as cloud computing, application-on-demand, automatic service/application deployments and automatic VM generation.
Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling
Cuperlovic-Culf, Miroslava
2018-01-01
Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies. PMID:29324649
Recognising discourse causality triggers in the biomedical domain.
Mihăilă, Claudiu; Ananiadou, Sophia
2013-12-01
Current domain-specific information extraction systems represent an important resource for biomedical researchers, who need to process vast amounts of knowledge in a short time. Automatic discourse causality recognition can further reduce their workload by suggesting possible causal connections and aiding in the curation of pathway models. We describe here an approach to the automatic identification of discourse causality triggers in the biomedical domain using machine learning. We create several baselines and experiment with and compare various parameter settings for three algorithms, i.e. Conditional Random Fields (CRF), Support Vector Machines (SVM) and Random Forests (RF). We also evaluate the impact of lexical, syntactic, and semantic features on each of the algorithms, showing that semantics improves the performance in all cases. We test our comprehensive feature set on two corpora containing gold standard annotations of causal relations, and demonstrate the need for more gold standard data. The best performance of 79.35% F-score is achieved by CRFs when using all three feature types.
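A much-simplified stand-in for the trigger-recognition setup is sketched below: token-level classification of causality triggers from lexical features with a linear model. A real system would use a sequence model such as a CRF together with syntactic and semantic features; the sentences, labels and feature set are invented for illustration.

```python
# Simplified stand-in for causality-trigger recognition: token-level
# classification from lexical features. The data are invented examples.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [("phosphorylation of X leads to activation of Y", {"leads"}),
             ("binding of A results in degradation of B", {"results"}),
             ("protein C interacts with protein D", set())]

def token_features(tokens, i):
    """Lexical features for one token: itself, its neighbours, its suffix."""
    return {"word": tokens[i].lower(),
            "prev": tokens[i - 1].lower() if i else "<s>",
            "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "</s>",
            "suffix3": tokens[i][-3:].lower()}

X, y = [], []
for text, triggers in sentences:
    toks = text.split()
    for i, tok in enumerate(toks):
        X.append(token_features(toks, i))
        y.append(int(tok in triggers))       # 1 = causality trigger token

clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(X, y)
```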
A Superfluid Pulse Tube Refrigerator Without Moving Parts for Sub-Kelvin Cooling
NASA Technical Reports Server (NTRS)
Miller, Franklin K.
2012-01-01
A report describes a pulse tube refrigerator that uses a mixture of He-3 and superfluid He-4 to cool to temperatures below 300 mK, while rejecting heat at temperatures up to 1.7 K. The refrigerator is driven by a novel thermodynamically reversible pump that is capable of pumping the He-3/He-4 mixture without the need for moving parts. The refrigerator consists of a reversible thermal magnetic pump module, two warm heat exchangers, a recuperative heat exchanger, two cold heat exchangers, two pulse tubes, and an orifice. It comprises two superfluid pulse tubes that run 180 degrees out of phase. All components of this machine except the reversible thermal pump have been demonstrated at least as proof-of-concept physical models in previous superfluid Stirling cycle machines. The pump consists of two canisters packed with pieces of gadolinium gallium garnet (GGG). The canisters are connected by a superleak (a porous piece of VYCOR glass). A superconducting magnetic coil surrounds each of the canisters.
Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique
2011-05-01
In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
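For reference, a fixed-step Runge-Kutta integration of a quadratic integrate-and-fire neuron with adaptation (the kind of time-stepping scheme the method is compared against) can be written in a few lines. Parameter values are illustrative, not those of the paper, and voltage stepping itself would replace the fixed time step used here.

```python
# Fixed-step RK4 reference for a quadratic integrate-and-fire neuron with
# adaptation. Parameters and reset rule are illustrative assumptions.
import numpy as np

def qif_adapt(I=0.5, a=0.02, b=0.2, v_reset=-0.5, w_jump=0.05,
              v_peak=10.0, dt=1e-3, T=50.0):
    def f(y):                                  # y = (membrane v, adaptation w)
        v, w = y
        return np.array([v * v + I - w, a * (b * v - w)])
    y = np.array([-0.5, 0.0])
    spikes = []
    for k in range(int(T / dt)):
        k1 = f(y); k2 = f(y + 0.5 * dt * k1)
        k3 = f(y + 0.5 * dt * k2); k4 = f(y + dt * k3)
        y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)   # classic RK4 step
        if y[0] >= v_peak:                     # threshold crossing -> spike
            spikes.append(k * dt)
            y[0] = v_reset
            y[1] += w_jump
    return spikes

print(len(qif_adapt()), "spikes in 50 time units")
```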
Research and realization of key technology in HILS interactive system
NASA Astrophysics Data System (ADS)
Liu, Che; Lu, Huiming; Wang, Fankai
2018-03-01
This paper presents the design of an HILS (Hardware In the Loop Simulation) interactive system based on the xPC platform. Through the interface between C++ and the MATLAB engine, a seamless data connection is established between Simulink and the interactive system, completing data interaction between the system and Simulink and providing model configuration, parameter modification and off-line simulation functions. Data communication between the host and the target machine is established over TCP/IP to realize model download and real-time simulation. A database is used to store simulation data, enabling real-time simulation monitoring and simulation data management. System functions are integrated using the Qt graphical interface library and dynamic link libraries. Finally, a typical control system is taken as an example to verify the feasibility of the HILS interactive system.
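The host-to-target link can be illustrated with a plain TCP/IP exchange. The address, port and message format below are hypothetical; the actual xPC protocol is not described in the abstract.

```python
# Illustrative host-side sketch of a TCP/IP exchange with a real-time target:
# send a parameter update and read back a status line. Address, port and
# message format are assumptions, not the paper's protocol.
import socket

HOST, PORT = "192.168.0.10", 22222            # hypothetical target address

def send_parameter(name: str, value: float) -> str:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.sendall(f"SET {name} {value}\n".encode())
        return sock.recv(1024).decode().strip()   # e.g. "OK" from the target

# send_parameter("Kp", 2.5)   # uncomment when a target machine is listening
```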
NASA Astrophysics Data System (ADS)
Kant Garg, Girish; Garg, Suman; Sangwan, K. S.
2018-04-01
The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for reduction of environmental emissions. In this work an empirical model is developed to minimize the power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used for the formulation of a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on the energy consumption has been assessed using analysis of variance. The validity of the developed empirical model is confirmed using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry for minimum power consumption of machine tools.
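A hedged sketch of the response-surface step is given below: fit a second-order model of power consumption over cutting speed, feed and depth of cut, then pick the parameter combination with the lowest predicted power. The data are synthetic and a simple grid minimization stands in for the paper's desirability-function approach.

```python
# Sketch of a response-surface fit (second-order polynomial) for power
# consumption over machining parameters, followed by grid minimization.
# The experimental data are synthetic stand-ins.
import numpy as np
from itertools import product
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.uniform([60, 0.05, 0.5], [180, 0.25, 2.0], size=(27, 3))   # speed, feed, depth
power = (0.4 * X[:, 0] + 90 * X[:, 1] + 15 * X[:, 2]
         + 0.002 * X[:, 0] ** 2 + rng.normal(0, 2, 27))            # toy response

rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, power)

grid = np.array(list(product(np.linspace(60, 180, 13),
                             np.linspace(0.05, 0.25, 11),
                             np.linspace(0.5, 2.0, 7))))
best = grid[np.argmin(rsm.predict(grid))]
print("predicted-minimum-power parameters (speed, feed, depth):", best)
```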
Virtual network computing: cross-platform remote display and collaboration software.
Konerding, D E
1999-04-01
VNC (Virtual Network Computing) is a computer program written to address the problem of cross-platform remote desktop/application display. VNC uses a client/server model in which an image of the desktop of the server is transmitted to the client and displayed. The client collects mouse and keyboard input from the user and transmits them back to the server. The VNC client and server can run on Windows 95/98/NT, MacOS, and Unix (including Linux) operating systems. VNC is multi-user on Unix machines (any number of servers can be run, each unrelated to the primary display of the computer), while it is effectively single-user on Macintosh and Windows machines (only one server can be run, displaying the contents of the primary display of the server). The VNC servers can be configured to allow more than one client to connect at one time, effectively allowing collaboration through the shared desktop. I describe the function of VNC, provide details of installation, describe how it achieves its goal, and evaluate the use of VNC for molecular modelling. VNC is an extremely useful tool for collaboration, instruction, software development, and debugging of graphical programs with remote users.
Deep Learning: A Primer for Radiologists.
Chartrand, Gabriel; Cheng, Phillip M; Vorontsov, Eugene; Drozdzal, Michal; Turcotte, Simon; Pal, Christopher J; Kadoury, Samuel; Tang, An
2017-01-01
Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging. © RSNA, 2017.
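As an illustration of the kind of network the review describes, the snippet below builds a minimal convolutional classifier in PyTorch and back-propagates a corrective error signal through it. Layer sizes are arbitrary and the input is a random stand-in for an image; the article itself is framework-agnostic.

```python
# Minimal CNN sketch (illustrative sizes, random stand-in input).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # learnable convolution filters
    nn.ReLU(),
    nn.MaxPool2d(2),                             # spatial downsampling
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 2),                   # two output classes
)

x = torch.randn(4, 1, 32, 32)                    # batch of 4 grayscale "images"
logits = model(x)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0, 1]))
loss.backward()                                  # back-propagate the error signal
print(logits.shape, loss.item())
```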
Chen, Po-Hao; Zafar, Hanna; Galperin-Aizenberg, Maya; Cook, Tessa
2018-04-01
A significant volume of medical data remains unstructured. Natural language processing (NLP) and machine learning (ML) techniques have been shown to successfully extract insights from radiology reports. However, the codependent effects of NLP and ML in this context have not been well-studied. Between April 1, 2015 and November 1, 2016, 9418 cross-sectional abdomen/pelvis CT and MR examinations containing our internal structured reporting element for cancer were separated into four categories: Progression, Stable Disease, Improvement, or No Cancer. We combined each of three NLP techniques with five ML algorithms to predict the assigned label using the unstructured report text and compared the performance of each combination. The three NLP algorithms included term frequency-inverse document frequency (TF-IDF), term frequency weighting (TF), and 16-bit feature hashing. The ML algorithms included logistic regression (LR), random decision forest (RDF), one-vs-all support vector machine (SVM), one-vs-all Bayes point machine (BPM), and fully connected neural network (NN). The best-performing NLP model consisted of tokenized unigrams and bigrams with TF-IDF. Increasing N-gram length yielded little to no added benefit for most ML algorithms. With all parameters optimized, SVM had the best performance on the test dataset, with 90.6% average accuracy and an F score of 0.813. The interplay between ML and NLP algorithms and their effect on interpretation accuracy is complex. The best accuracy is achieved when both algorithms are optimized concurrently.
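The best-performing combination reported above (unigram/bigram TF-IDF features with a linear SVM) can be sketched with scikit-learn. The toy reports and labels below are invented; a real run would use the labelled report corpus.

```python
# Sketch of TF-IDF (unigrams + bigrams) feeding a linear SVM classifier.
# The example reports and labels are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

reports = ["Interval increase in hepatic metastases consistent with progression.",
           "No new lesions; index lesion unchanged. Stable disease.",
           "Previously seen nodule has resolved, compatible with improvement.",
           "No evidence of malignancy."]
labels = ["Progression", "Stable Disease", "Improvement", "No Cancer"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
                    LinearSVC())
clf.fit(reports, labels)
print(clf.predict(["Index lesion is unchanged in size."]))
```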
The Statistical Basis of Chemical Equilibria.
ERIC Educational Resources Information Center
Hauptmann, Siegfried; Menger, Eva
1978-01-01
Describes a machine which demonstrates the statistical bases of chemical equilibrium, and in doing so conveys insight into the connections among statistical mechanics, quantum mechanics, Maxwell-Boltzmann statistics, statistical thermodynamics, and transition state theory. (GA)
Retinal hemorrhage detection by rule-based and machine learning approach.
Di Xiao; Shuang Yu; Vignarajan, Janardhan; Dong An; Mei-Ling Tay-Kearney; Kanagasingam, Yogi
2017-07-01
Robust detection of hemorrhages (HMs) in color fundus images is important in an automatic diabetic retinopathy grading system. Detection of the hemorrhages that are close to or connected with retinal blood vessels was found to be a challenge. However, most methods have not addressed this problem, even though some of them mention the issue. In this paper, we propose a novel hemorrhage detection method based on rule-based and machine learning methods. We focus on improving the detection of hemorrhages that are close to or connected with retinal blood vessels, in addition to detecting independent hemorrhage regions. A preliminary test for detecting HM presence was conducted on images from two databases. We achieved sensitivity and specificity of 93.3% and 88% as well as 91.9% and 85.6% on the two datasets.
NASA Technical Reports Server (NTRS)
Smedal, Harald A.; Havill, C. Dewey
1962-01-01
A TIME-HONORED system of recording medical histories and the data obtained on physical and laboratory examination has been that of writing the information on record sheets that go into a folder for each patient. In order to have information which would be more readily retrieved, a program was initiated in 1952 by the U. S. Naval School of Aviation Medicine in connection with their "Care of the Flyer" study to place this information on machine record cards. In 1958, a machine record card method was developed for recording medical data in connection with the astronaut selection program. Machine record cards were also developed by the Aero Medical Laboratory, Wright-Patterson AFB, Ohio, and the Aviation Medical Acceleration Laboratory, Naval Air Development Center, Johnsville, Pennsylvania, for use in connection with a variety of tests including acceleration stress.1 Therefore, a variety of systems resulted in which data of a medical nature could easily be recalled. During the NASA Ames Research Center centrifuge studies, the pilot subjects were interviewed after each centrifuge run, or series of runs, and subjective information was recorded in a log book by the usual history taking methods referred to above. After the methods were reviewed, it was recognized that a card system would be very useful in recording data from our pilots after they had been exposed to acceleration stress. Since the acceleration stress cards already developed did not meet our requirements, it was decided a different card was needed.
Gonzalez, Enrique; Peña, Raul; Avila, Alfonso; Vargas-Rosales, Cesar; Munoz-Rodriguez, David
2017-01-01
The continuous technological advances in favor of mHealth represent a key factor in the improvement of medical emergency services. This systematic review presents the identification, study, and classification of the most up-to-date approaches surrounding the deployment of architectures for mHealth. Our review includes 25 articles obtained from databases such as IEEE Xplore, Scopus, SpringerLink, ScienceDirect, and SAGE. This review focused on studies addressing mHealth systems for outdoor emergency situations. In 60% of the articles, the deployment architecture relied in the connective infrastructure associated with emergent technologies such as cloud services, distributed services, Internet-of-things, machine-to-machine, vehicular ad hoc network, and service-oriented architecture. In 40% of the literature review, the deployment architecture for mHealth considered traditional connective infrastructure. Only 20% of the studies implemented an energy consumption protocol to extend system lifetime. We concluded that there is a need for more integrated solutions specifically for outdoor scenarios. Energy consumption protocols are needed to be implemented and evaluated. Emergent connective technologies are redefining the information management and overcome traditional technologies.
Evaluating the Potential of Commercial GIS for Accelerator Configuration Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
T.L. Larrieu; Y.R. Roblin; K. White
2005-10-10
The Geographic Information System (GIS) is a tool used by industries needing to track information about spatially distributed assets. A water utility, for example, must know not only the precise location of each pipe and pump, but also the respective pressure rating and flow rate of each. In many ways, an accelerator such as CEBAF (Continuous Electron Beam Accelerator Facility) can be viewed as an "electron utility". Whereas the water utility uses pipes and pumps, the "electron utility" uses magnets and RF cavities. At Jefferson Lab we are exploring the possibility of implementing ESRI's ArcGIS as the framework for building an all-encompassing accelerator configuration database that integrates location, configuration, maintenance, and connectivity details of all hardware and software. The possibilities of doing so are intriguing. From the GIS, software such as the model server could always extract the most up-to-date layout information maintained by Survey & Alignment for lattice modeling. The Mechanical Engineering department could use ArcGIS tools to generate CAD drawings of machine segments from the same database. Ultimately, the greatest benefit of the GIS implementation could be to liberate operators and engineers from the limitations of the current system-by-system view of machine configuration and allow a more integrated regional approach. The commercial GIS package provides a rich set of tools for database connectivity, versioning, distributed editing, importing and exporting, and graphical analysis and querying, and therefore obviates the need for much custom development. However, formidable challenges to implementation exist, and these challenges are not only technical and manpower issues, but also organizational ones. The GIS approach would crosscut organizational boundaries and require departments, which heretofore have had free rein to manage their own data, to cede some control and agree to a centralized framework.
Fault-Tolerant, Real-Time, Multi-Core Computer System
NASA Technical Reports Server (NTRS)
Gostelow, Kim P.
2012-01-01
A document discusses a fault-tolerant, self-aware, low-power, multi-core computer for space missions with thousands of simple cores, achieving speed through concurrency. The proposed machine decides how to achieve concurrency in real time, rather than depending on programmers. The driving features of the system are simple hardware that is modular in the extreme, with no shared memory, and software with significant runtime reorganizing capability. The document describes a mechanism for moving ongoing computations and data that is based on a functional model of execution. Because there is no shared memory, the processor connects to its neighbors through a high-speed data link. Messages are sent to a neighbor switch, which in turn forwards that message on to its neighbor until reaching the intended destination. Except for the neighbor connections, processors are isolated and independent of each other. The processors on the periphery also connect chip-to-chip, thus building up a large processor net. There is no particular topology to the larger net, as a function at each processor allows it to forward a message in the correct direction. Some chip-to-chip connections are not necessarily nearest neighbors, providing short cuts for some of the longer physical distances. The peripheral processors also provide the connections to sensors, actuators, radios, science instruments, and other devices with which the computer system interacts.
Stavrakas, Vassilis; Melas, Ioannis N; Sakellaropoulos, Theodore; Alexopoulos, Leonidas G
2015-01-01
Modeling of signal transduction pathways is instrumental for understanding cell function. Researchers have been tackling the modeling of signaling pathways in order to accurately represent the signaling events inside the cell's biochemical microenvironment in a way that is meaningful for scientists in the biological field. In this article, we propose a method to interrogate such pathways in order to produce cell-specific signaling models. We integrate available prior knowledge of protein connectivity, in the form of a Prior Knowledge Network (PKN), with phosphoproteomic data to construct predictive models of the protein connectivity of the interrogated cell type. Several computational methodologies focusing on pathway logic modeling using optimization formulations or machine learning algorithms have been published on this front over the past few years. Here, we introduce a light and fast approach that uses a breadth-first traversal of the graph to identify the shortest pathways and score proteins in the PKN, fitting the dependencies extracted from the experimental design. The pathways are then combined through a heuristic formulation to produce a final topology, handling inconsistencies between the PKN and the experimental scenarios. Our results show that the algorithm we developed is efficient and accurate for the construction of medium and large scale signaling networks. We demonstrate the applicability of the proposed approach by interrogating a manually curated interaction graph model of EGF/TNFA stimulation against made-up experimental data. To avoid the possibility of erroneous predictions, we performed a cross-validation analysis. Finally, we validate that the introduced approach generates predictive topologies, comparable to the ILP formulation. Overall, an efficient approach based on graph theory is presented herein to interrogate protein-protein interaction networks and to provide meaningful biological insights.
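A minimal version of the breadth-first traversal is sketched below: score each protein in a prior-knowledge network by its shortest-path distance from a stimulated node. The toy edge list is illustrative and is not the curated EGF/TNFA model used in the paper.

```python
# Breadth-first traversal of a toy prior-knowledge network (PKN): score each
# protein by its shortest-path distance from a stimulated node.
from collections import deque

pkn = {                       # directed edges; signs omitted for brevity
    "EGF": ["EGFR"], "EGFR": ["RAS", "PI3K"], "RAS": ["RAF"],
    "RAF": ["MEK"], "MEK": ["ERK"], "PI3K": ["AKT"], "AKT": ["ERK"],
    "TNFA": ["TNFR"], "TNFR": ["NFKB"],
}

def bfs_distances(graph, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in dist:               # first visit = shortest path
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist

print(bfs_distances(pkn, "EGF"))   # e.g. ERK is reachable at depth 4
```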
NASA Astrophysics Data System (ADS)
Belqorchi, Abdelghafour
Forty years after Watson and Manchur conducted the Stand-Still Frequency Response (SSFR) test on a large turbogenerator, the applicability of this technique to a powerful salient pole synchronous generator has yet to be confirmed. The scientific literature on the subject is sparse, and very few have attempted to compare SSFR parameter results with those deduced from classical tests. The validity of SSFR on large salient pole machines has still to be proven. The present work aims to help fill this knowledge gap. It can be used to build a database of measurements, highly needed to establish the validity of the technique. Also, the author hopes to demonstrate the potential of the SSFR model to represent the machine, not only in cases of weak disturbances but also strong ones such as instantaneous three-phase short-circuit faults. The difficulties raised by previous researchers are: the lack of accuracy in very low frequency measurements; the difficulty of rotor positioning, according to the d and q axes, in the case of salient pole machines; the influence of the measurement current level on the magnetizing inductances in the d and q axes; and the impact of rotation on the damper circuits for some rotor designs. Aware of the above difficulties, the author conducted an SSFR test on a large salient pole machine (285 MVA). The generator under test has a laminated non-insulated rotor and an integral slot number. The damper windings in adjacent poles are connected together via the polar core and the rotor rim. Finally, the damping circuit is unaffected by rotation. To improve the measurement accuracy at very low frequencies, the most precise frequency response analyser available on the market was used. Besides, the frequency responses of the signal conditioning modules (i.e., isolation, amplification...) were accounted for to correct the four measured SSFR transfer functions. Immunization against noise and use of the instrumentation in its optimum range were other techniques rigorously applied. The magnetizing inductances being influenced by the measurement current magnitude, the latter was maintained constant in the range 1 mHz-20 Hz. Other problems, such as the impact of rotation on damper circuits or the difficulty of rotor positioning, are eliminated or attenuated by the intrinsic characteristics of the machine. Regarding the data analysis, the Maximum Likelihood Estimation (MLE) method was used to determine the third and second order equivalent circuits from the SSFR measurements. In the d-axis, fitting approaches using two and three transfer functions (Ld(s), sG(s) and Lafo(s)) were explored. The second order model, derived from Ld(s) and G(s), was used to deduce the machine standard parameters. The latter were compared with the values given by the manufacturer and by conventional on-site tests: instantaneous three-phase short-circuit, Dalton-Cameron and the d-axis transient time constant at open stator (T'do). The comparison showed the good accuracy of the SSFR values. Subsequently, a machine model was built in EMTP-RV based on the SSFR standard parameters. The model was able to reproduce the stator and rotor currents measured during the instantaneous three-phase short-circuit test. Some adjustments to the SSFR parameters were needed to reproduce the stator voltage and rotor current acquired during the load rejection d-axis test. It is worth noting that the load rejection d-axis test, recently added to the IEEE 115-2009 annex, must be modified to take into account the impact of saturation and excitation impedance on the deduced parameters.
Regarding this issue, some suggestions are proposed by the author. The obtained SSFR results help to raise confidence in the application of SSFR to large salient pole machines. In addition, they show the aptitude of the SSFR model to represent the machine under both weak and strong disturbances, at least for machines similar to the one studied. Index Terms: salient pole, frequency response, SSFR, equivalent circuit, operational inductance.
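The parameter-identification step can be illustrated by fitting a second-order d-axis operational inductance, Ld(s) = Ld (1 + sT'd)(1 + sT''d) / ((1 + sT'd0)(1 + sT''d0)), to frequency-response samples. The "measured" response below is synthesized, and plain least squares stands in for the maximum-likelihood criterion used in the thesis.

```python
# Sketch: fit a second-order operational inductance Ld(s) to synthetic
# frequency-response data over 1 mHz - 20 Hz. Values are illustrative only.
import numpy as np
from scipy.optimize import least_squares

def Ld_model(p, s):
    Ld, Td1, Td2, Td01, Td02 = p
    return Ld * (1 + s * Td1) * (1 + s * Td2) / ((1 + s * Td01) * (1 + s * Td02))

f = np.logspace(-3, np.log10(20.0), 60)          # 1 mHz to 20 Hz
s = 2j * np.pi * f
true_p = [1.0, 0.8, 0.02, 5.0, 0.04]             # per-unit Ld and time constants (s)
noise = 1 + 0.01 * np.random.default_rng(4).normal(size=f.size)
meas = Ld_model(true_p, s) * noise               # synthetic "measured" response

def residuals(p):
    err = Ld_model(p, s) - meas
    return np.concatenate([err.real, err.imag])  # fit real and imaginary parts

fit = least_squares(residuals, x0=[0.5, 1.0, 0.05, 3.0, 0.1], method="lm")
print("estimated [Ld, T'd, T''d, T'd0, T''d0]:", np.round(fit.x, 3))
```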
Machine learning phases of matter
NASA Astrophysics Data System (ADS)
Carrasquilla, Juan; Melko, Roger G.
2017-02-01
Condensed-matter physics is the study of the collective behaviour of infinitely complex assemblies of electrons, nuclei, magnetic moments, atoms or qubits. This complexity is reflected in the size of the state space, which grows exponentially with the number of particles, reminiscent of the `curse of dimensionality' commonly encountered in machine learning. Despite this curse, the machine learning community has developed techniques with remarkable abilities to recognize, classify, and characterize complex sets of data. Here, we show that modern machine learning architectures, such as fully connected and convolutional neural networks, can identify phases and phase transitions in a variety of condensed-matter Hamiltonians. Readily programmable through modern software libraries, neural networks can be trained to detect multiple types of order parameter, as well as highly non-trivial states with no conventional order, directly from raw state configurations sampled with Monte Carlo.
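A toy illustration of the approach: a fully connected network classifying raw spin configurations as ordered versus disordered. Real studies sample the configurations with Monte Carlo across the transition; here the two classes are crudely synthesized, so this is only a sketch of the workflow.

```python
# Toy sketch: a fully connected network classifying raw spin configurations.
# The "ordered" and "disordered" classes are crudely synthesized, not Monte
# Carlo samples of an actual Hamiltonian.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
L, n = 16, 400                                   # lattice side, samples per class
disordered = rng.choice([-1, 1], size=(n, L * L))
ordered = np.sign(rng.normal(0.8, 1.0, size=(n, L * L)))   # mostly aligned spins
ordered[ordered == 0] = 1
X = np.vstack([ordered, disordered]).astype(float)
y = np.array([1] * n + [0] * n)                  # 1 = ordered phase

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(Xtr, ytr)
print("test accuracy:", net.score(Xte, yte))
```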
Informatics in radiology: an information model of the DICOM standard.
Kahn, Charles E; Langlotz, Curtis P; Channin, David S; Rubin, Daniel L
2011-01-01
The Digital Imaging and Communications in Medicine (DICOM) Standard is a key foundational technology for radiology. However, its complexity creates challenges for information system developers because the current DICOM specification requires human interpretation and is subject to nonstandard implementation. To address this problem, a formally sound and computationally accessible information model of the DICOM Standard was created. The DICOM Standard was modeled as an ontology, a machine-accessible and human-interpretable representation that may be viewed and manipulated by information-modeling tools. The DICOM Ontology includes a real-world model and a DICOM entity model. The real-world model describes patients, studies, images, and other features of medical imaging. The DICOM entity model describes connections between real-world entities and the classes that model the corresponding DICOM information entities. The DICOM Ontology was created to support the Cancer Biomedical Informatics Grid (caBIG) initiative, and it may be extended to encompass the entire DICOM Standard and serve as a foundation of medical imaging systems for research and patient care. RSNA, 2010
Chaotic behaviour of Zeeman machines at introductory course of mechanics
NASA Astrophysics Data System (ADS)
Nagy, Péter; Tasnádi, Péter
2016-05-01
Investigation of chaotic motions and cooperative systems offers a magnificent opportunity to introduce modern physics into the basic course of mechanics taught to engineering students. In the present paper it will be demonstrated that the Zeeman Machine can be a versatile and motivating tool for students to gain introductory knowledge about chaotic motion via interactive simulations. It works in a relatively simple way and its properties can be understood very easily. Since the machine can be built easily and the simulation of its movement is also simple, the experimental investigation and the theoretical description can be connected intuitively. Although the Zeeman Machine is known mainly for its quasi-static and catastrophic behaviour, its dynamic properties are also of interest, with typical chaotic features. By means of a periodically driven Zeeman Machine, a wide range of chaotic properties of simple systems can be demonstrated, such as bifurcation diagrams, chaotic attractors, transient chaos and so on. The main goal of this paper is the presentation of an interactive learning material for teaching the basic features of chaotic systems through the investigation of the Zeeman Machine.
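As a stand-in for the periodically driven Zeeman Machine (whose equations of motion are not given in the abstract), the same kind of bifurcation data can be generated for any driven nonlinear oscillator. The sketch below integrates a driven, damped pendulum and samples it once per drive period, i.e. a Poincare section; sweeping the drive amplitude produces the raw material for a bifurcation diagram. All parameters are illustrative.

```python
# Stand-in example: Poincare-section samples of a driven, damped pendulum,
# the same kind of data used for bifurcation diagrams of driven systems.
import numpy as np
from scipy.integrate import solve_ivp

def poincare_samples(A, omega=2.0 / 3.0, damping=0.5, n_periods=200, skip=100):
    """Sample the pendulum angle once per drive period."""
    def rhs(t, y):
        theta, v = y
        return [v, -damping * v - np.sin(theta) + A * np.cos(omega * t)]
    T = 2.0 * np.pi / omega
    t_eval = np.arange(n_periods) * T
    sol = solve_ivp(rhs, (0.0, t_eval[-1]), [0.2, 0.0], t_eval=t_eval, rtol=1e-8)
    theta = sol.y[0][skip:]                              # discard the transient
    return np.mod(theta + np.pi, 2.0 * np.pi) - np.pi    # wrap angle to (-pi, pi]

for A in (0.9, 1.07, 1.5):                               # sweep the drive amplitude
    pts = poincare_samples(A)
    print(f"A = {A}: {len(np.unique(np.round(pts, 3)))} distinct section points")
```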
Automated solar module assembly line
NASA Technical Reports Server (NTRS)
Bycer, M.
1980-01-01
The solar module assembly machine which Kulicke and Soffa delivered under this contract is a cell tabbing and stringing machine, capable of handling a variety of cells and assembling strings up to 4 feet long, which then can be placed into a module array up to 2 feet by 4 feet in a series or parallel arrangement, and in a straight or interdigitated array format. The machine cycle is 5 seconds per solar cell. This machine is primarily adapted to 3 inch diameter round cells with two tabs between cells. Pulsed heat is used as the bonding technique for solar cell interconnects. The solar module assembly machine unloads solar cells from a cassette, automatically orients them, applies flux and solders interconnect ribbons onto the cells. It then inverts the tabbed cells, connects them into cell strings, and delivers them into a module array format using a track-mounted vacuum lance, from which they are taken to test and cleaning benches prior to final encapsulation into finished solar modules. Throughout the machine the solar cell is handled very carefully, and any contact with the collector side of the cell is avoided or minimized.
Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Sarukkai, Sekhar R.; Mehra, Pankaj; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
This paper presents a methodology for debugging the performance of message-passing programs on both tightly coupled and loosely coupled distributed-memory machines. The AIMS (Automated Instrumentation and Monitoring System) toolkit, a suite of software tools for measurement and analysis of performance, is introduced and its application illustrated using several benchmark programs drawn from the field of computational fluid dynamics. AIMS includes (i) Xinstrument, a powerful source-code instrumentor, which supports both Fortran77 and C as well as a number of different message-passing libraries including Intel's NX, Thinking Machines' CMMD, and PVM; (ii) Monitor, a library of timestamping and trace-collection routines that run on supercomputers (such as Intel's iPSC/860, Delta, and Paragon and Thinking Machines' CM5) as well as on networks of workstations (including Convex Cluster and SparcStations connected by a LAN); (iii) Visualization Kernel, a trace-animation facility that supports source-code clickback, simultaneous visualization of computation and communication patterns, as well as analysis of data movements; (iv) Statistics Kernel, an advanced profiling facility, that associates a variety of performance data with various syntactic components of a parallel program; (v) Index Kernel, a diagnostic tool that helps pinpoint performance bottlenecks through the use of abstract indices; (vi) Modeling Kernel, a facility for automated modeling of message-passing programs that supports both simulation-based and analytical approaches to performance prediction and scalability analysis; (vii) Intrusion Compensator, a utility for recovering true performance from observed performance by removing the overheads of monitoring and their effects on the communication pattern of the program; and (viii) Compatibility Tools, that convert AIMS-generated traces into formats used by other performance-visualization tools, such as ParaGraph, Pablo, and certain AVS/Explorer modules.
Programmable phase plate for tool modification in laser machining applications
Thompson Jr., Charles A.; Kartz, Michael W.; Brase, James M.; Pennington, Deanna; Perry, Michael D.
2004-04-06
A system for laser machining includes a laser source for propagating a laser beam toward a target location, and a spatial light modulator having individual controllable elements capable of modifying a phase profile of the laser beam to produce a corresponding irradiance pattern on the target location. The system also includes a controller operably connected to the spatial light modulator for controlling the individual controllable elements. By controlling the individual controllable elements, the phase profile of the laser beam may be modified into a desired phase profile so as to produce a corresponding desired irradiance pattern on the target location capable of performing a machining operation on the target location.
Walking robot: A design project for undergraduate students
NASA Technical Reports Server (NTRS)
1990-01-01
The design and construction of the University of Maryland walking machine was completed during the 1989 to 1990 academic year. It was required that the machine be capable of completing a number of tasks including walking a straight line, turning to change direction, and maneuvering over an obstacle such as a set of stairs. The machine consists of two sets of four telescoping legs that alternately support the entire structure. A gear box and crank arm assembly is connected to the leg sets to provide the power required for the translational motion of the machine. By retracting all eight legs, the robot comes to rest on a central Bigfoot support. Turning is accomplished by rotating the machine about this support. The machine can be controlled by using either a user-operated remote tether or the onboard computer for the execution of control commands. Absolute encoders are attached to all motors to provide the control computer with information regarding the status of the motors. Long and short range infrared sensors provide the computer with feedback information regarding the machine's position relative to a series of stripes and reflectors. These infrared sensors simulate how the robot might sense and gain information about the environment of Mars.
Park, Ji Eun; Park, Bumwoo; Kim, Ho Sung; Choi, Choong Gon; Jung, Seung Chai; Oh, Joo Young; Lee, Jae-Hong; Roh, Jee Hoon; Shim, Woo Hyun
2017-01-01
Objective: To identify potential imaging biomarkers of Alzheimer's disease by combining brain cortical thickness (CThk) and functional connectivity and to validate this model's diagnostic accuracy in a validation set. Materials and Methods: Data from 98 subjects was retrospectively reviewed, including a study set (n = 63) and a validation set from the Alzheimer's Disease Neuroimaging Initiative (n = 35). From each subject, data for CThk and functional connectivity of the default mode network was extracted from structural T1-weighted and resting-state functional magnetic resonance imaging. Cortical regions with significant differences between patients and healthy controls in the correlation of CThk and functional connectivity were identified in the study set. The diagnostic accuracy of functional connectivity measures combined with CThk in the identified regions was evaluated against that in the medial temporal lobes using the validation set and application of a support vector machine. Results: Group-wise differences in the correlation of CThk and default mode network functional connectivity were identified in the superior temporal (p < 0.001) and supramarginal gyrus (p = 0.007) of the left cerebral hemisphere. Default mode network functional connectivity combined with the CThk of those two regions were more accurate than that combined with the CThk of both medial temporal lobes (91.7% vs. 75%). Conclusion: Combining functional information with CThk of the superior temporal and supramarginal gyri in the left cerebral hemisphere improves diagnostic accuracy, making it a potential imaging biomarker for Alzheimer's disease. PMID:29089831
Permanent magnet machine with windings having strand transposition
Qu, Ronghai; Jansen, Patrick Lee
2009-04-21
This document discusses, among other things, a stator with transposition between the windings or coils. The coils are free from transposition to increase the fill factor of the stator slots. The transposition at the end connections between an inner coil and an outer coil provide transposition to reduce circulating current loss. The increased fill factor reduces further current losses. Such a stator is used in a dual rotor, permanent magnet machine, for example, in a compressor pump, wind turbine gearbox, wind turbine rotor.
Background Equatorial Astronomical Measurements Focal Plane Assembly (Refurbished HI STAR SOUTH)
1984-09-01
Subassembly RPT41412 MOSFETs during assembly and test. The old and new designs are shown in Figure 7. The copper webs between the first and second and...machined in the remaining webs between the detector recesses and through a small hole drilled through the frame to connect the traces of all four...gold wirebond routed through a notch machined in the frame web between one of the detector recesses and the board recess. The sapphire support
Fast Fourier Transform algorithm design and tradeoffs
NASA Technical Reports Server (NTRS)
Kamin, Ray A., III; Adams, George B., III
1988-01-01
The Fast Fourier Transform (FFT) is a mainstay of certain numerical techniques for solving fluid dynamics problems. The Connection Machine CM-2 is the target for an investigation into the design of multidimensional Single Instruction Stream/Multiple Data (SIMD) parallel FFT algorithms for high performance. Critical algorithm design issues are discussed, necessary machine performance measurements are identified and made, and the performance of the developed FFT programs are measured. Fast Fourier Transform programs are compared to the currently best Cray-2 FFT program.
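For reference, a compact serial radix-2 decimation-in-time FFT is shown below; the paper's implementations are data-parallel SIMD versions of this algorithm family, but the recursion illustrates the core butterfly structure.

```python
# Compact serial radix-2 decimation-in-time FFT (illustrative reference).
import cmath

def fft(x):
    """Recursive radix-2 FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor
        out[k] = even[k] + tw                             # butterfly: sum
        out[k + n // 2] = even[k] - tw                    # butterfly: difference
    return out

print(fft([1, 0, 0, 0, 1, 0, 0, 0]))   # impulse pair -> cosine-shaped spectrum
```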
Modeling Stochastic Kinetics of Molecular Machines at Multiple Levels: From Molecules to Modules
Chowdhury, Debashish
2013-01-01
A molecular machine is either a single macromolecule or a macromolecular complex. In spite of the striking superficial similarities between these natural nanomachines and their man-made macroscopic counterparts, there are crucial differences. Molecular machines in a living cell operate stochastically in an isothermal environment far from thermodynamic equilibrium. In this mini-review we present a catalog of the molecular machines and an inventory of the essential toolbox for theoretically modeling these machines. The tool kits include 1), nonequilibrium statistical-physics techniques for modeling machines and machine-driven processes; and 2), statistical-inference methods for reverse engineering a functional machine from the empirical data. The cell is often likened to a microfactory in which the machineries are organized in modular fashion; each module consists of strongly coupled multiple machines, but different modules interact weakly with each other. This microfactory has its own automated supply chain and delivery system. Buoyed by the success achieved in modeling individual molecular machines, we advocate integration of these models in the near future to develop models of functional modules. A system-level description of the cell from the perspective of molecular machinery (the mechanome) is likely to emerge from further integrations that we envisage here. PMID:23746505
29 CFR 1910.253 - Oxygen-fuel gas welding and cutting.
Code of Federal Regulations, 2011 CFR
2011-07-01
... threads to which permanent connections are to be made, such as to a machine. (vii) Station outlets shall... materials or processes. (6) Outside generator houses and inside generator rooms for stationary acetylene...
29 CFR 1910.253 - Oxygen-fuel gas welding and cutting.
Code of Federal Regulations, 2013 CFR
2013-07-01
... threads to which permanent connections are to be made, such as to a machine. (vii) Station outlets shall... materials or processes. (6) Outside generator houses and inside generator rooms for stationary acetylene...
29 CFR 1910.253 - Oxygen-fuel gas welding and cutting.
Code of Federal Regulations, 2012 CFR
2012-07-01
... threads to which permanent connections are to be made, such as to a machine. (vii) Station outlets shall... materials or processes. (6) Outside generator houses and inside generator rooms for stationary acetylene...
29 CFR 1910.253 - Oxygen-fuel gas welding and cutting.
Code of Federal Regulations, 2014 CFR
2014-07-01
... threads to which permanent connections are to be made, such as to a machine. (vii) Station outlets shall... materials or processes. (6) Outside generator houses and inside generator rooms for stationary acetylene...
Geng, Xiangfei; Xu, Junhai; Liu, Baolin; Shi, Yonggang
2018-01-01
Major depressive disorder (MDD) is a mental disorder characterized by at least 2 weeks of low mood, which is present across most situations. Diagnosis of MDD using resting-state functional magnetic resonance imaging (fMRI) data faces many challenges due to high dimensionality, small samples, noise, and individual variability. To the best of our knowledge, no studies have aimed at classification between MDD patients and healthy controls using both effective connectivity and functional connectivity measures. In this study, we performed a data-driven classification analysis using whole-brain connectivity measures, which included functional connectivity from two brain templates and effective connectivity measures derived from the default mode network (DMN), dorsal attention network (DAN), frontal-parietal network (FPN), and salience network (SN). Effective connectivity measures were extracted using spectral Dynamic Causal Modeling (spDCM) and transformed into a vectorial feature space. Linear Support Vector Machine (linear SVM), non-linear SVM, k-Nearest Neighbor (KNN), and Logistic Regression (LR) were used as the classifiers to identify the differences between MDD patients and healthy controls. Our results showed that the highest accuracy reached 91.67% (p < 0.0001) when using 19 effective connections and 89.36% when using 6,650 functional connections. The functional connections with high discriminative power were mainly located within or across the whole brain resting-state networks, while the discriminative effective connections were located in several specific regions, such as posterior cingulate cortex (PCC), ventromedial prefrontal cortex (vmPFC), dorsal anterior cingulate cortex (dACC), and inferior parietal lobes (IPL). To further compare the discriminative power of functional connections and effective connections, a classification analysis only using the functional connections from those four networks was conducted, and the highest accuracy reached 78.33% (p < 0.0001). Our study demonstrated that the effective connectivity measures might play a more important role than functional connectivity in exploring the alterations between patients and healthy controls and afford better mechanistic interpretability. Moreover, our results showed the diagnostic potential of effective connectivity for the diagnosis of MDD patients, with high accuracies allowing for earlier prevention or intervention. PMID:29515348
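A hedged sketch of one step implied above, with random data in place of real connectivity estimates: symmetric connectivity matrices are vectorized by taking only the upper triangle, and the four classifiers named in the abstract are compared with cross-validation. The matrix sizes and parameters are illustrative assumptions, not the authors' settings.

```python
# Vectorize symmetric connectivity matrices (upper triangle) and compare classifiers.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_subjects, n_rois = 60, 20
iu = np.triu_indices(n_rois, k=1)            # indices above the diagonal

X = np.empty((n_subjects, len(iu[0])))
y = rng.integers(0, 2, n_subjects)           # 0 = control, 1 = patient
for i in range(n_subjects):
    c = rng.normal(0, 1, (n_rois, n_rois))
    conn = (c + c.T) / 2 + 0.3 * y[i]        # symmetric "connectivity" matrix
    X[i] = conn[iu]                          # flatten upper triangle to a feature vector

classifiers = {
    "linear SVM": SVC(kernel="linear"),
    "RBF SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```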
2011-01-01
Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements. PMID:21798025
Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott
2011-07-28
Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements.
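To illustrate the flavor of the automated work flow described above without claiming AZOrange's actual interfaces, the sketch below uses scikit-learn: a descriptor matrix is fed to several candidate learners, and the learner with the best cross-validated score is selected for the data set. The descriptors and activity values are random stand-ins.

```python
# Data-set-specific selection of the statistical method, QSAR-style (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 30))                                   # placeholder molecular descriptors
y = X[:, 0] * 2.0 - X[:, 5] + rng.normal(scale=0.5, size=500)    # placeholder activity values

candidates = {
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "SVR": SVR(C=1.0),
    "ridge": Ridge(alpha=1.0),
}
scores = {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print("selected learner:", best)
```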
Elhenawy, Mohammed; Jahangiri, Arash; Rakha, Hesham A; El-Shawarby, Ihab
2015-10-01
The ability to model driver stop/run behavior at signalized intersections considering the roadway surface condition is critical in the design of advanced driver assistance systems. Such systems can reduce intersection crashes and fatalities by predicting driver stop/run behavior. The research presented in this paper uses data collected from two controlled field experiments on the Smart Road at the Virginia Tech Transportation Institute (VTTI) to model driver stop/run behavior at the onset of a yellow indication for different roadway surface conditions. The paper offers two contributions. First, it introduces a new predictor related to driver aggressiveness and demonstrates that this measure enhances the modeling of driver stop/run behavior. Second, it applies well-known artificial intelligence techniques, including adaptive boosting (AdaBoost), random forest, and support vector machine (SVM) algorithms as well as traditional logistic regression techniques, to the data in order to develop a model that can be used by traffic signal controllers to predict driver stop/run decisions in a connected vehicle environment. The research demonstrates that by adding the proposed driver aggressiveness predictor to the model, there is a statistically significant increase in the model accuracy. Moreover, the false alarm rate is also reduced, although this reduction is not statistically significant. The study demonstrates that, for the subject data, the SVM machine learning algorithm performs best in terms of classification accuracy, although not in terms of false positive rate. Copyright © 2015 Elsevier Ltd. All rights reserved.
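A hedged sketch of the comparison described above, using synthetic stop/run data and invented feature names (speed, distance to the stop bar, and an "aggressiveness" score standing in for the proposed predictor); it is not the study's data set or tuning.

```python
# Compare AdaBoost, random forest, SVM, and logistic regression on toy stop/run data,
# reporting accuracy and false-alarm rate for each.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(3)
n = 800
speed = rng.uniform(30, 90, n)             # km/h at yellow onset
distance = rng.uniform(10, 120, n)         # m to stop bar
aggressiveness = rng.normal(0, 1, n)       # illustrative driver-aggressiveness score
X = np.column_stack([speed, distance, aggressiveness])
# label 1 = run, 0 = stop (toy rule plus noise)
y = ((speed / distance + 0.3 * aggressiveness + rng.normal(0, 0.2, n)) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "AdaBoost": AdaBoostClassifier(),
    "random forest": RandomForestClassifier(),
    "SVM": SVC(),
    "logistic regression": LogisticRegression(),
}
for name, m in models.items():
    y_hat = m.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
    print(f"{name}: accuracy {accuracy_score(y_te, y_hat):.2f}, "
          f"false-alarm rate {fp / (fp + tn):.2f}")
```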
An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces.
Li, Simin; Li, Jie; Li, Zheng
2016-01-01
Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input, it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well.
An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces
Li, Simin; Li, Jie; Li, Zheng
2016-01-01
Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input, it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well. PMID:28066170
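For context, the sketch below is a plain NumPy implementation of the position-velocity Kalman filter that the UKF2 is compared against, not the UKF2 itself; the transition, tuning, and noise matrices are toy values chosen for illustration.

```python
# Position-velocity Kalman filter decoding of cursor kinematics from binned firing rates.
import numpy as np

dt = 0.05                                   # 50 ms bins
A = np.array([[1, dt], [0, 1]])             # kinematic state transition (pos, vel)
W = np.diag([1e-4, 1e-2])                   # process noise
H = np.array([[0.8, 1.2], [0.1, -0.9], [0.5, 0.3]])   # 3 "neurons" tuned to pos/vel
Q = np.eye(3) * 0.5                         # observation noise

def kalman_decode(z_seq):
    x = np.zeros(2)                         # state estimate [pos, vel]
    P = np.eye(2)                           # state covariance
    out = []
    for z in z_seq:
        # predict
        x = A @ x
        P = A @ P @ A.T + W
        # update with the binned firing rates z
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Q)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)

rates = np.random.default_rng(4).normal(size=(100, 3))   # fake binned firing rates
print(kalman_decode(rates)[:3])
```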
Conformation-controlled binding kinetics of antibodies
NASA Astrophysics Data System (ADS)
Galanti, Marta; Fanelli, Duccio; Piazza, Francesco
2016-01-01
Antibodies are large, extremely flexible molecules, whose internal dynamics is certainly key to their astounding ability to bind antigens of all sizes, from small hormones to giant viruses. In this paper, we build a shape-based coarse-grained model of IgG molecules and show that it can be used to generate 3D conformations in agreement with single-molecule Cryo-Electron Tomography data. Furthermore, we elaborate a theoretical model that can be solved exactly to compute the binding rate constant of a small antigen to an IgG in a prescribed 3D conformation. Our model shows that the antigen binding process is tightly related to the internal dynamics of the IgG. Our findings pave the way for further investigation of the subtle connection between the dynamics and the function of large, flexible multi-valent molecular machines.
Improved pump turbine transient behaviour prediction using a Thoma number-dependent hillchart model
NASA Astrophysics Data System (ADS)
Manderla, M.; Kiniger, K.; Koutnik, J.
2014-03-01
Water hammer phenomena are important issues for high head hydro power plants. In particular, if several reversible pump-turbines are connected to the same waterways, there may be strong interactions between the hydraulic machines. The prediction and coverage of all relevant load cases is challenging and difficult using classical simulation models. On the basis of a recent pump-storage project, dynamic measurements motivate an improved modeling approach making use of the Thoma number dependency of the actual turbine behaviour. The proposed approach is validated for several transient scenarios and turns out to increase the correlation between measurement and simulation results significantly. By applying a fully automated simulation procedure, broad operating ranges can be covered, which provides a consistent insight into critical load case scenarios. This finally allows the optimization of the closing strategy and hence the overall power plant performance.
Sacchet, Matthew D; Prasad, Gautam; Foland-Ross, Lara C; Thompson, Paul M; Gotlib, Ian H
2015-01-01
Recently, there has been considerable interest in understanding brain networks in major depressive disorder (MDD). Neural pathways can be tracked in the living brain using diffusion-weighted imaging (DWI); graph theory can then be used to study properties of the resulting fiber networks. To date, global abnormalities have not been reported in tractography-based graph metrics in MDD, so we used a machine learning approach based on "support vector machines" to differentiate depressed from healthy individuals based on multiple brain network properties. We also assessed how important specific graph metrics were for this differentiation. Finally, we conducted a local graph analysis to identify abnormal connectivity at specific nodes of the network. We were able to classify depression using whole-brain graph metrics. Small-worldness was the most useful graph metric for classification. The right pars orbitalis, right inferior parietal cortex, and left rostral anterior cingulate all showed abnormal network connectivity in MDD. This is the first use of structural global graph metrics to classify depressed individuals. These findings highlight the importance of future research to understand network properties in depression across imaging modalities, improve classification results, and relate network alterations to psychiatric symptoms, medication, and comorbidities.
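A minimal sketch of the general approach, assuming random graphs in place of DWI tractography networks: a few global graph metrics are computed per subject with networkx and used as features for an SVM. Small-worldness is normally computed against random reference graphs; only simpler global metrics are shown here.

```python
# Global graph metrics per subject -> SVM classification (illustrative, toy graphs).
import numpy as np
import networkx as nx
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

def global_metrics(G):
    return [
        nx.average_clustering(G),
        nx.average_shortest_path_length(G),
        nx.global_efficiency(G),
    ]

X, y = [], []
for label in (0, 1):                      # 0 = control, 1 = patient (toy)
    for _ in range(20):
        p = 0.30 if label == 0 else 0.22  # "patients" get slightly sparser graphs
        G = nx.erdos_renyi_graph(30, p, seed=int(rng.integers(1_000_000)))
        if not nx.is_connected(G):        # path length requires a connected graph
            G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
        X.append(global_metrics(G))
        y.append(label)

print("CV accuracy:", cross_val_score(SVC(), np.array(X), np.array(y), cv=5).mean())
```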
Yun, Ruijuan; Lin, Chung-Chih; Wu, Shuicai; Huang, Chu-Chung; Lin, Ching-Po; Chao, Yi-Ping
2013-01-01
In this study, we employed diffusion tensor imaging (DTI) to construct brain structural networks and then derive the connection matrices from 96 healthy elderly subjects. Correlation analysis between the graph-theoretical topological properties of these networks and the Cognitive Abilities Screening Instrument (CASI) index was performed to extract significant network characteristics. These characteristics were then integrated to estimate models with various machine-learning algorithms to predict the subjects' cognitive performance. From the results, the linear regression model and the Gaussian process model showed better predictive ability, with lower mean absolute errors of 5.8120 and 6.25, respectively. Moreover, these topological properties of the DTI-derived brain structural network could also be regarded as bio-signatures for further evaluation of brain degeneration in healthy aging and early diagnosis of mild cognitive impairment (MCI).
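A small sketch of the prediction step with synthetic network properties and invented CASI-like scores: linear regression and a Gaussian process regressor are compared by mean absolute error, mirroring the two models highlighted above without reproducing the study's features.

```python
# Predict a cognitive score from network properties; compare two regressors by MAE.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 96
X = rng.normal(size=(n, 4))                                     # e.g. clustering, path length, efficiency, degree
casi = 90 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 2, n)     # toy cognitive scores

models = {
    "linear regression": LinearRegression(),
    "Gaussian process": GaussianProcessRegressor(kernel=RBF() + WhiteKernel()),
}
for name, m in models.items():
    mae = -cross_val_score(m, X, casi, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE {mae:.2f}")
```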
Healing relationships and the existential philosophy of Martin Buber
Scott, John G; Scott, Rebecca G; Miller, William L; Stange, Kurt C; Crabtree, Benjamin F
2009-01-01
The dominant unspoken philosophical basis of medical care in the United States is a form of Cartesian reductionism that views the body as a machine and medical professionals as technicians whose job is to repair that machine. The purpose of this paper is to advocate for an alternative philosophy of medicine based on the concept of healing relationships between clinicians and patients. This is accomplished first by exploring the ethical and philosophical work of Pellegrino and Thomasma and then by connecting Martin Buber's philosophical work on the nature of relationships to an empirically derived model of the medical healing relationship. The Healing Relationship Model was developed by the authors through qualitative analysis of interviews of physicians and patients. Clinician-patient healing relationships are a special form of what Buber calls I-Thou relationships, characterized by dialog and mutuality, but a mutuality limited by the inherent asymmetry of the clinician-patient relationship. The Healing Relationship Model identifies three processes necessary for such relationships to develop and be sustained: Valuing, Appreciating Power and Abiding. We explore in detail how these processes, as well as other components of the model resonate with Buber's concepts of I-Thou and I-It relationships. The resulting combined conceptual model illuminates the wholeness underlying the dual roles of clinicians as healers and providers of technical biomedicine. On the basis of our analysis, we argue that health care should be focused on healing, with I-Thou relationships at its core. PMID:19678950
Zeno, Helios A; Buitrago, Renan L; Sternberger, Sidney S; Patt, Marisa E; Tovar, Nick; Coelho, Paulo; Kurtz, Kenneth S; Tuminelli, Frank J
2016-04-01
To compare the removal torque values of machined implant-abutment connections (internal and external) with and without soft tissue entrapment using an in vitro model. Thirty external- and 30 internal-connection implants were embedded in urethane dimethacrylate. Porcine tissue was prepared and measured to thicknesses of 0.5 and 1.0 mm. Six groups (n = 10) were studied: external- and internal-connection implants with no tissue (control), 0.5 mm, or 1.0 mm of tissue entrapped at the implant/abutment interface. Abutments were inserted to 20 Ncm for all six groups. Insertion torque values were recorded using a digital torque gauge. All groups were then immersed in 1 M NaOH for 48 hours to dissolve tissue. Subsequent reverse torque measurements were recorded. Mean and standard deviation were determined for each group, and one-way ANOVA and Bonferroni test were used for statistical analysis. All 60 specimens achieved a 20-Ncm insertion torque, despite tissue entrapment. Reverse torque measurements for external connection displayed a statistically significant difference (p < 0.05) between all groups with mean reverse torque values for the control (13.71 ± 1.4 Ncm), 0.5 mm (7.83 ± 2.4 Ncm), and 1.0 mm tissue entrapment (2.29 ± 1.4 Ncm) groups. Some statistically significant differences (p < 0.05) were found between internal-connection groups. In all specimens, tissue did not completely dissolve after 48 hours. External-connection implants were significantly affected by tissue entrapment; the thicker the tissue, the lower the reverse torque values noted. Internal-connection implants were less affected by tissue entrapment. © 2015 by the American College of Prosthodontists.
Schwarz, Frank; Hegewald, Andrea; Becker, Jürgen
2014-01-01
Objectives To address the following focused question: What is the impact of implant–abutment configuration and the positioning of the machined collar/microgap on crestal bone level changes? Material and methods Electronic databases of the PubMed and the Web of Knowledge were searched for animal and human studies reporting on histological/radiological crestal bone level changes (CBL) at nonsubmerged one-/two-piece implants (placed in healed ridges) exhibiting different abutment configurations, positioning of the machined collar/microgap (between 1992 and November 2012: n = 318 titles). Quality assessment of selected full-text articles was performed according to the ARRIVE and CONSORT statement guidelines. Results A total of 13 publications (risk of bias: high) were eligible for the review. The weighted mean difference (WMD) (95% CI) between machined collars placed either above or below the bone crest amounted to 0.835 mm favoring an epicrestal positioning of the rough/smooth border (P < 0.001) (P-value for heterogeneity: 0.885, I2: 0.000% = no heterogeneity). WMD (95% CI) between microgaps placed either at or below the bone crest amounted to −0.479 mm favoring a subcrestal position of the implant neck (P < 0.001) (P-value for heterogeneity: 0.333, I2: 12.404% = low heterogeneity). Only two studies compared different implant–abutment configurations. Due to a high heterogeneity, a meta-analysis was not feasible. Conclusions While the positioning of the machined neck and microgap may limit crestal bone level changes at nonsubmerged implants, the impact of the implant–abutment connection lacks documentation. PMID:23782338
Snack food as a modulator of human resting-state functional connectivity.
Mendez-Torrijos, Andrea; Kreitz, Silke; Ivan, Claudiu; Konerth, Laura; Rösch, Julie; Pischetsrieder, Monika; Moll, Gunther; Kratz, Oliver; Dörfler, Arnd; Horndasch, Stefanie; Hess, Andreas
2018-04-04
To elucidate the mechanisms of how snack foods may induce non-homeostatic food intake, we used resting state functional magnetic resonance imaging (fMRI), as resting state networks can individually adapt to experience after short time exposures. In addition, we used graph theoretical analysis together with machine learning techniques (support vector machine) to identify biomarkers that can distinguish high-caloric (potato chips) from low-caloric (zucchini) food stimulation. Seventeen healthy human subjects with body mass index (BMI) 19 to 27 underwent 2 different fMRI sessions where an initial resting state scan was acquired, followed by visual presentation of different images of potato chips and zucchini. There was then a 5-minute pause to ingest food (day 1=potato chips, day 3=zucchini), followed by a second resting state scan. fMRI data were further analyzed using graph theory analysis and support vector machine techniques. Potato chips vs. zucchini stimulation led to significant connectivity changes. The support vector machine was able to categorize the 2 types of food stimuli with 100% accuracy. Visual, auditory, and somatosensory structures, as well as thalamus, insula, and basal ganglia were found to be important for food classification. After potato chip consumption, the BMI was associated with the path length and degree in nucleus accumbens, middle temporal gyrus, and thalamus. The results suggest that high vs. low caloric food stimulation in healthy individuals can induce significant changes in resting state networks. These changes can be detected using graph theory measures in conjunction with a support vector machine. Additionally, we found that the BMI affects the response of the nucleus accumbens when high caloric food is consumed.
Neural-Network Quantum States, String-Bond States, and Chiral Topological States
NASA Astrophysics Data System (ADS)
Glasser, Ivan; Pancotti, Nicola; August, Moritz; Rodriguez, Ivan D.; Cirac, J. Ignacio
2018-01-01
Neural-network quantum states have recently been introduced as an Ansatz for describing the wave function of quantum many-body systems. We show that there are strong connections between neural-network quantum states in the form of restricted Boltzmann machines and some classes of tensor-network states in arbitrary dimensions. In particular, we demonstrate that short-range restricted Boltzmann machines are entangled plaquette states, while fully connected restricted Boltzmann machines are string-bond states with a nonlocal geometry and low bond dimension. These results shed light on the underlying architecture of restricted Boltzmann machines and their efficiency at representing many-body quantum states. String-bond states also provide a generic way of enhancing the power of neural-network quantum states and a natural generalization to systems with larger local Hilbert space. We compare the advantages and drawbacks of these different classes of states and present a method to combine them together. This allows us to benefit from both the entanglement structure of tensor networks and the efficiency of neural-network quantum states into a single Ansatz capable of targeting the wave function of strongly correlated systems. While it remains a challenge to describe states with chiral topological order using traditional tensor networks, we show that, because of their nonlocal geometry, neural-network quantum states and their string-bond-state extension can describe a lattice fractional quantum Hall state exactly. In addition, we provide numerical evidence that neural-network quantum states can approximate a chiral spin liquid with better accuracy than entangled plaquette states and local string-bond states. Our results demonstrate the efficiency of neural networks to describe complex quantum wave functions and pave the way towards the use of string-bond states as a tool in more traditional machine-learning applications.
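To make the restricted-Boltzmann-machine Ansatz mentioned above concrete, the sketch below evaluates the unnormalized RBM amplitude psi(s) = exp(sum_i a_i s_i) * prod_j 2 cosh(b_j + sum_i W_ij s_i) for spins s_i = ±1. The parameters are random placeholders; a real calculation would optimize them variationally (and would typically use complex parameters).

```python
# Unnormalized RBM wave-function amplitude for a spin configuration.
import numpy as np

rng = np.random.default_rng(7)
n_visible, n_hidden = 8, 16
a = rng.normal(scale=0.1, size=n_visible)               # visible biases
b = rng.normal(scale=0.1, size=n_hidden)                # hidden biases
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))   # couplings

def rbm_amplitude(s):
    """Unnormalized RBM amplitude for a spin configuration s in {-1, +1}^n."""
    theta = b + s @ W
    return np.exp(a @ s) * np.prod(2 * np.cosh(theta))

s = rng.choice([-1.0, 1.0], size=n_visible)
print("psi(s) =", rbm_amplitude(s))
```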
Large-scale automated histology in the pursuit of connectomes.
Kleinfeld, David; Bharioke, Arjun; Blinder, Pablo; Bock, Davi D; Briggman, Kevin L; Chklovskii, Dmitri B; Denk, Winfried; Helmstaedter, Moritz; Kaufhold, John P; Lee, Wei-Chung Allen; Meyer, Hanno S; Micheva, Kristina D; Oberlaender, Marcel; Prohaska, Steffen; Reid, R Clay; Smith, Stephen J; Takemura, Shinya; Tsai, Philbert S; Sakmann, Bert
2011-11-09
How does the brain compute? Answering this question necessitates neuronal connectomes, annotated graphs of all synaptic connections within defined brain areas. Further, understanding the energetics of the brain's computations requires vascular graphs. The assembly of a connectome requires sensitive hardware tools to measure neuronal and neurovascular features in all three dimensions, as well as software and machine learning for data analysis and visualization. We present the state of the art on the reconstruction of circuits and vasculature that link brain anatomy and function. Analysis at the scale of tens of nanometers yields connections between identified neurons, while analysis at the micrometer scale yields probabilistic rules of connection between neurons and exact vascular connectivity.
Large-Scale Automated Histology in the Pursuit of Connectomes
Bharioke, Arjun; Blinder, Pablo; Bock, Davi D.; Briggman, Kevin L.; Chklovskii, Dmitri B.; Denk, Winfried; Helmstaedter, Moritz; Kaufhold, John P.; Lee, Wei-Chung Allen; Meyer, Hanno S.; Micheva, Kristina D.; Oberlaender, Marcel; Prohaska, Steffen; Reid, R. Clay; Smith, Stephen J.; Takemura, Shinya; Tsai, Philbert S.; Sakmann, Bert
2011-01-01
How does the brain compute? Answering this question necessitates neuronal connectomes, annotated graphs of all synaptic connections within defined brain areas. Further, understanding the energetics of the brain's computations requires vascular graphs. The assembly of a connectome requires sensitive hardware tools to measure neuronal and neurovascular features in all three dimensions, as well as software and machine learning for data analysis and visualization. We present the state of the art on the reconstruction of circuits and vasculature that link brain anatomy and function. Analysis at the scale of tens of nanometers yields connections between identified neurons, while analysis at the micrometer scale yields probabilistic rules of connection between neurons and exact vascular connectivity. PMID:22072665
30 CFR 77.412 - Compressed air systems.
Code of Federal Regulations, 2013 CFR
2013-07-01
... attempted until the pressure has been relieved from that part of the system to be repaired. (c) At no time... used at connections to machines of high-pressure hose lines of 1-inch inside diameter or larger, and...
30 CFR 77.412 - Compressed air systems.
Code of Federal Regulations, 2011 CFR
2011-07-01
... attempted until the pressure has been relieved from that part of the system to be repaired. (c) At no time... used at connections to machines of high-pressure hose lines of 1-inch inside diameter or larger, and...
Data parallel sorting for particle simulation
NASA Technical Reports Server (NTRS)
Dagum, Leonardo
1992-01-01
Sorting on a parallel architecture is a communications intensive event which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O(N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimum performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
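As a reminder of why integer keys admit the O(N) sequential bound mentioned above, here is a counting-sort sketch that bins particles by cell index. This is the sequential baseline only, not the data-parallel merging algorithm of the paper.

```python
# Stable O(N + n_cells) counting sort of particles by integer cell index.
def counting_sort_by_cell(particles, n_cells, cell_of):
    counts = [0] * n_cells
    for p in particles:
        counts[cell_of(p)] += 1
    # prefix sums give the starting offset of each cell in the output
    offsets = [0] * n_cells
    for c in range(1, n_cells):
        offsets[c] = offsets[c - 1] + counts[c - 1]
    out = [None] * len(particles)
    for p in particles:
        c = cell_of(p)
        out[offsets[c]] = p
        offsets[c] += 1
    return out

# usage: particles are (cell, payload) pairs
parts = [(3, "a"), (0, "b"), (3, "c"), (1, "d")]
print(counting_sort_by_cell(parts, n_cells=4, cell_of=lambda p: p[0]))
```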
Mutual Authentication Scheme in Secure Internet of Things Technology for Comfortable Lifestyle.
Park, Namje; Kang, Namhi
2015-12-24
The Internet of Things (IoT), which can be regarded as an enhanced version of machine-to-machine communication technology, was proposed to realize intelligent thing-to-thing communications by utilizing the Internet connectivity. In the IoT, "things" are generally heterogeneous and resource constrained. In addition, such things are connected to each other over low-power and lossy networks. In this paper, we propose an inter-device authentication and session-key distribution system for devices with only encryption modules. In the proposed system, unlike existing sensor-network environments where the key distribution center distributes the key, each sensor node is involved with the generation of session keys. In addition, in the proposed scheme, the performance is improved so that the authenticated device can calculate the session key in advance. The proposed mutual authentication and session-key distribution system can withstand replay attacks, man-in-the-middle attacks, and wiretapped secret-key attacks.
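The sketch below shows the general flavor of nonce-based mutual authentication with session-key derivation between two devices sharing a long-term secret, using only Python's standard hmac/hashlib modules; it is a generic illustration, not the specific protocol proposed in the paper.

```python
# Generic challenge-response with HMAC and shared-secret session-key derivation.
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)          # long-term secret provisioned on both devices

def tag(key, *parts):
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

# Device A -> B: identity and nonce_a; B -> A: nonce_b and a proof over both nonces
nonce_a, nonce_b = os.urandom(16), os.urandom(16)
proof_b = tag(SHARED_KEY, b"B", nonce_a, nonce_b)

# A verifies B's proof, then both sides derive the same session key
assert hmac.compare_digest(proof_b, tag(SHARED_KEY, b"B", nonce_a, nonce_b))
session_key_a = tag(SHARED_KEY, b"session", nonce_a, nonce_b)
session_key_b = tag(SHARED_KEY, b"session", nonce_a, nonce_b)
print(session_key_a == session_key_b)   # True: both ends hold the same key
```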
Use of laser drilling in the manufacture of organic inverter circuits.
Iba, Shingo; Kato, Yusaku; Sekitani, Tsuyoshi; Kawaguchi, Hiroshi; Sakurai, Takayasu; Someya, Takao
2006-01-01
Inverter circuits have been made by connecting two high-quality pentacene field-effect transistors. A uniform and pinhole-free 900 nm thick polyimide gate-insulating layer was formed on a flexible polyimide film with gold gate electrodes and partially removed by using a CO2 laser drilling machine to make via holes and contact holes. Subsequent evaporation of the gold layer results in good electrical connection with a gold gate layer underneath the gate-insulating layer. By optimization of the settings of the CO2 laser drilling machine, contact resistance can be reduced to as low as 3 ohms for 180 µm square electrodes. No degradation of the transport properties of the organic transistors was observed after the laser-drilling process. This study demonstrates the feasibility of using the laser drilling process for implementation of organic transistors in integrated circuits on flexible polymer films.
Modeling stochastic kinetics of molecular machines at multiple levels: from molecules to modules.
Chowdhury, Debashish
2013-06-04
A molecular machine is either a single macromolecule or a macromolecular complex. In spite of the striking superficial similarities between these natural nanomachines and their man-made macroscopic counterparts, there are crucial differences. Molecular machines in a living cell operate stochastically in an isothermal environment far from thermodynamic equilibrium. In this mini-review we present a catalog of the molecular machines and an inventory of the essential toolbox for theoretically modeling these machines. The tool kits include 1), nonequilibrium statistical-physics techniques for modeling machines and machine-driven processes; and 2), statistical-inference methods for reverse engineering a functional machine from the empirical data. The cell is often likened to a microfactory in which the machineries are organized in modular fashion; each module consists of strongly coupled multiple machines, but different modules interact weakly with each other. This microfactory has its own automated supply chain and delivery system. Buoyed by the success achieved in modeling individual molecular machines, we advocate integration of these models in the near future to develop models of functional modules. A system-level description of the cell from the perspective of molecular machinery (the mechanome) is likely to emerge from further integrations that we envisage here. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Machine Learning Toolkit for Extreme Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-03-31
Support Vector Machines (SVM) is a popular machine learning technique, which has been applied to a wide range of domains such as science, finance, and social networks for supervised learning. MaTEx undertakes the challenge of designing a scalable parallel SVM training algorithm for large scale systems, which include commodity multi-core machines, tightly connected supercomputers and cloud computing systems. Several techniques are proposed for improved speed and memory space usage, including adaptive and aggressive elimination of samples for faster convergence, and sparse format representation of data samples. Several heuristics, ranging from earliest-possible to lazy elimination of non-contributing samples, are considered in MaTEx. In many cases, where an early sample elimination might result in a false positive, low overhead mechanisms for reconstruction of key data structures are proposed. The proposed algorithm and heuristics are implemented and evaluated on various publicly available datasets.
Control system for, and a method of, heating an operator station of a work machine
Baker, Thomas M.; Hoff, Brian D.; Akasam, Sivaprasad
2005-04-05
There are situations in which an operator remains in an operator station of a work machine when an engine of the work machine is inactive. The present invention includes a control system for, and a method of, heating the operator station when the engine is inactive. A heating system of the work machine includes an electrically-powered coolant pump, a power source, and at least one piece of warmed machinery. An operator heat controller is moveable between a first and a second position, and is operable to connect the electrically-powered coolant pump to the power source when the engine is inactive and the operator heat controller is in the first position. Thus, by deactivating the engine and then moving the operator heat controller to the first position, the operator may supply electrical energy to the electrically-powered coolant pump, which is operably coupled to heat the operator station.
NASA Astrophysics Data System (ADS)
Rückwardt, M.; Göpfert, A.; Schnellhorn, M.; Correns, M.; Rosenberger, M.; Linß, G.
2010-07-01
Precise measurement of spectacle frames is an important field of quality assurance for opticians and their customers. Different suppliers and a number of measuring methods are available, but all of them are tactile. In this paper the possible use of optical coordinate measuring machines for detecting the groove of a spectacle frame is discussed. The ambient conditions, such as deviation and measuring time, are as multifaceted as the quality characteristics and the measuring objects themselves, and have to be tested. The main challenge for an optical coordinate measuring machine, however, is the blocked optical path, because the device under test is located behind an undercut. In this case it is necessary to deflect the beam of the machine, for example with a rotating plane mirror. In the next step the difficulties of applying machine vision to the spectacle frame are explained. Finally, first results are given.
On the Conditioning of Machine-Learning-Assisted Turbulence Modeling
NASA Astrophysics Data System (ADS)
Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng
2017-11-01
Recently, several researchers have demonstrated that machine learning techniques can be used to improve the RANS modeled Reynolds stress by training on available databases of high-fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework to model Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated to demonstrate the predictive capability of the proposed framework. The satisfactory prediction of the mean velocity field for both flows demonstrates the predictive capability of the proposed framework for machine-learning-assisted turbulence modeling. By demonstrating improved prediction of the mean flow field, the proposed stability-oriented machine learning framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the demand for predictive capability of turbulence models in real applications.
Huang, Jen-Ching; Weng, Yung-Jin
2014-01-01
This study focused on the nanomachining properties and cutting model of single-crystal sapphire during nanomachining. A coated diamond probe is used as the tool, and atomic force microscopy (AFM) serves as the experimental platform for nanomachining. To understand the effect of normal force on single-crystal sapphire machining, this study tested nano-line machining and nano-rectangular pattern machining at different normal forces. In the nano-line machining test, the experimental results showed that as the normal force increased, the groove depth from nano-line machining also increased, and the trend is logarithmic. In the nano-rectangular pattern machining test, it was found that as the normal force increased, the groove depth also increased, along with the accumulation of small chips. This paper combined blowing with an air blower, cleaning with an ultrasonic cleaning machine, and scanning the surface topology with a contact-mode probe after nanomachining, and proposed the "criterion of nanomachining cutting model" in order to determine whether the cutting mode of single-crystal sapphire during nanomachining is the ductile-regime or the brittle-regime cutting model. After analysis, when the single-crystal sapphire substrate is machined with a small normal force during nano-line machining, its cutting mode is the ductile-regime cutting model. In nano-rectangular pattern machining, due to the overlap of machined zones, the cutting mode is converted into the brittle-regime cutting model. © 2014 Wiley Periodicals, Inc.
Mete, Mutlu; Sakoglu, Unal; Spence, Jeffrey S; Devous, Michael D; Harris, Thomas S; Adinoff, Bryon
2016-10-06
Neuroimaging studies have yielded significant advances in the understanding of neural processes relevant to the development and persistence of addiction. However, these advances have not been explored extensively for diagnostic accuracy in human subjects. The aim of this study was to develop a statistical approach, using a machine learning framework, to correctly classify brain images of cocaine-dependent participants and healthy controls. In this study, a framework suitable for educing potential brain regions that differed between the two groups was developed and implemented. Single Photon Emission Computerized Tomography (SPECT) images obtained during rest or a saline infusion in three cohorts of 2-4 week abstinent cocaine-dependent participants (n = 93) and healthy controls (n = 69) were used to develop a classification model. An information theoretic-based feature selection algorithm was first conducted to reduce the number of voxels. A density-based clustering algorithm was then used to form spatially connected voxel clouds in three-dimensional space. A statistical classifier, Support Vector Machine (SVM), was then used for participant classification. Statistically insignificant voxels of spatially connected brain regions were removed iteratively and classification accuracy was reported through the iterations. The voxel-based analysis identified 1,500 spatially connected voxels in 30 distinct clusters after a grid search in SVM parameters. Participants were successfully classified with 0.88 and 0.89 F-measure accuracies in 10-fold cross validation (10xCV) and leave-one-out (LOO) approaches, respectively. Sensitivity and specificity were 0.90 and 0.89 for LOO; 0.83 and 0.83 for 10xCV. Many of the 30 selected clusters are highly relevant to the addictive process, including regions relevant to cognitive control, default mode network related self-referential thought, behavioral inhibition, and contextual memories. Relative hyperactivity and hypoactivity of regional cerebral blood flow in brain regions in cocaine-dependent participants are presented with corresponding levels of significance. The SVM-based approach successfully classified cocaine-dependent and healthy control participants using voxels selected with information theoretic-based and statistical methods from participants' SPECT data. The regions found in this study align with brain regions reported in the literature. These findings support the future use of brain imaging and SVM-based classifiers in the diagnosis of substance use disorders and furthering an understanding of their underlying pathology.
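A condensed sketch of the framework's flavor, with random data in place of SPECT images: information-theoretic voxel (feature) selection followed by SVM classification under cross-validation. The spatial density-based clustering step is omitted, and all sizes are illustrative assumptions.

```python
# Mutual-information feature selection + linear SVM, cross-validated (toy data).
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_subjects, n_voxels = 160, 1000
y = rng.integers(0, 2, n_subjects)                      # 0 = control, 1 = patient
X = rng.normal(size=(n_subjects, n_voxels))
X[:, :30] += 0.6 * y[:, None]                           # 30 "informative" voxels

pipe = make_pipeline(
    SelectKBest(score_func=mutual_info_classif, k=200),  # keep top-scoring voxels
    SVC(kernel="linear"),
)
print("10-fold CV accuracy:", cross_val_score(pipe, X, y, cv=10).mean())
```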
PANDA: A distributed multiprocessor operating system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chubb, P.
1989-01-01
PANDA is a design for a distributed multiprocessor and an operating system. PANDA is designed to allow easy expansion of both hardware and software. As such, the PANDA kernel provides only message passing and memory and process management. The other features needed for the system (device drivers, secondary storage management, etc.) are provided as replaceable user tasks. The thesis presents PANDA's design and implementation, both hardware and software. PANDA uses multiple 68010 processors sharing memory on a VME bus, each such node potentially connected to others via a high speed network. The machine is completely homogeneous: there are no differences between processors that are detectable by programs running on the machine. A single two-processor node has been constructed. Each processor contains memory management circuits designed to allow processors to share page tables safely. PANDA presents a programmers' model similar to the hardware model: a job is divided into multiple tasks, each having its own address space. Within each task, multiple processes share code and data. Tasks can send messages to each other, and set up virtual circuits between themselves. Peripheral devices such as disc drives are represented within PANDA by tasks. PANDA divides secondary storage into volumes, each volume being accessed by a volume access task, or VAT. All knowledge about the way that data is stored on a disc is kept in its volume's VAT. The design is such that PANDA should provide a useful testbed for file systems and device drivers, as these can be installed without recompiling PANDA itself, and without rebooting the machine.
Streamlining machine learning in mobile devices for remote sensing
NASA Astrophysics Data System (ADS)
Coronel, Andrei D.; Estuar, Ma. Regina E.; Garcia, Kyle Kristopher P.; Dela Cruz, Bon Lemuel T.; Torrijos, Jose Emmanuel; Lim, Hadrian Paulo M.; Abu, Patricia Angela R.; Victorino, John Noel C.
2017-09-01
Mobile devices have been at the forefront of Intelligent Farming because of their ubiquitous nature. Applications for precision farming have been developed on smartphones to allow small farms to monitor environmental parameters surrounding crops. Mobile devices are used for most of these applications, collecting data to be sent to the cloud for storage, analysis, modeling and visualization. However, with the issue of weak and intermittent connectivity in geographically challenged areas of the Philippines, the solution is to provide analysis on the phone itself. Given this, the farmer gets a real time response after data submission. Though Machine Learning is promising, hardware constraints in mobile devices limit the computational capabilities, making model development on the phone restricted and challenging. This study discusses the development of a Machine Learning based mobile application using OpenCV libraries. The objective is to enable the detection of Fusarium oxysporum cubense (Foc) in juvenile and asymptomatic bananas using images of plant parts and microscopic samples as input. Image datasets of attached, unattached, dorsal, and ventral views of leaves were acquired through sampling protocols. Images of raw and stained specimens from soil surrounding the plant, and sap from the plant, resulted in stained and unstained samples, respectively. Segmentation and feature extraction techniques were applied to all images. Initial findings show no significant differences among the different feature extraction techniques. For differentiating infected from non-infected leaves, KNN yields the highest average accuracy, as opposed to Naive Bayes and SVM. For microscopic images using MSER feature extraction, KNN showed better accuracy than SVM or Naive Bayes.
Zou, Meng; Liu, Zhaoqi; Zhang, Xiang-Sun; Wang, Yong
2015-10-15
In prognosis and survival studies, an important goal is to identify multi-biomarker panels with predictive power using molecular characteristics or clinical observations. Such analysis is often challenged by censored, small-sample-size, but high-dimensional genomic profiles or clinical data. Therefore, sophisticated models and algorithms are urgently needed. In this study, we propose a novel Area Under Curve (AUC) optimization method for multi-biomarker panel identification named Nearest Centroid Classifier for AUC optimization (NCC-AUC). Our method is motivated by the connection between the AUC score for classification accuracy evaluation and Harrell's concordance index in survival analysis. This connection allows us to convert the survival time regression problem to a binary classification problem. Then an optimization model is formulated to directly maximize AUC and meanwhile minimize the number of selected features to construct a predictor in the nearest centroid classifier framework. NCC-AUC shows strong performance in validation on both genomic data of breast cancer and clinical data of stage IB Non-Small-Cell Lung Cancer (NSCLC). For the genomic data, NCC-AUC outperforms Support Vector Machine (SVM) and Support Vector Machine-based Recursive Feature Elimination (SVM-RFE) in classification accuracy. It tends to select a multi-biomarker panel with low average redundancy and enriched biological meanings. NCC-AUC is also more significant in separating low- and high-risk cohorts than the widely used Cox model (Cox proportional-hazards regression model) and the L1-Cox model (L1-penalized Cox model). These performance gains of NCC-AUC are quite robust across 5 subtypes of breast cancer. Further, in an independent clinical data set, NCC-AUC outperforms SVM and SVM-RFE in predictive accuracy and is consistently better than the Cox model and L1-Cox model in grouping patients into high and low risk categories. In summary, NCC-AUC provides a rigorous optimization framework to systematically reveal multi-biomarker panels from genomic and clinical data. It can serve as a useful tool to identify prognostic biomarkers for survival analysis. NCC-AUC is available at http://doc.aporc.org/wiki/NCC-AUC. ywang@amss.ac.cn Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
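A simplified sketch of the nearest-centroid idea scored by AUC, on synthetic two-group data: the actual NCC-AUC jointly optimizes AUC and feature sparsity, whereas this only evaluates a plain nearest-centroid rule with the AUC metric.

```python
# Nearest-centroid scoring evaluated by AUC on synthetic high/low-risk groups.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n, d = 200, 40
y = rng.integers(0, 2, n)                                              # 1 = high-risk, 0 = low-risk (toy labels)
X = rng.normal(size=(n, d)) + 0.5 * y[:, None] * (np.arange(d) < 5)    # 5 informative features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)
c0 = X_tr[y_tr == 0].mean(axis=0)             # centroid of the low-risk group
c1 = X_tr[y_tr == 1].mean(axis=0)             # centroid of the high-risk group

# score: closer to the high-risk centroid => larger score
score = np.linalg.norm(X_te - c0, axis=1) - np.linalg.norm(X_te - c1, axis=1)
print("test AUC:", roc_auc_score(y_te, score))
```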
Deep learning based state recognition of substation switches
NASA Astrophysics Data System (ADS)
Wang, Jin
2018-06-01
Unlike traditional methods, which recognize the state of substation switches based on the operating rules of the electrical power system, this work proposes a novel convolutional neural network-based state recognition approach for substation switches. Inspired by transfer learning, we first establish a convolutional neural network model trained on the large-scale image set ILSVRC2012; the fully connected layer of the network is then replaced by a restricted Boltzmann machine and trained on our small image dataset of 110 kV substation switches to obtain a stronger model. Experiments conducted on our image dataset of 110 kV substation switches show that the proposed approach is applicable to substations, reducing running costs and enabling truly unattended operation.
Goswami, R; Dufort, P; Tartaglia, M C; Green, R E; Crawley, A; Tator, C H; Wennberg, R; Mikulis, D J; Keightley, M; Davis, Karen D
2016-05-01
The frontotemporal cortical network is associated with behaviours such as impulsivity and aggression. The health of the uncinate fasciculus (UF) that connects the orbitofrontal cortex (OFC) with the anterior temporal lobe (ATL) may be a crucial determinant of behavioural regulation. Behavioural changes can emerge after repeated concussion and thus we used MRI to examine the UF and connected gray matter as it relates to impulsivity and aggression in retired professional football players who had sustained multiple concussions. Behaviourally, athletes had faster reaction times and an increased error rate on a go/no-go task, and increased aggression and mania compared to controls. MRI revealed that the athletes had (1) cortical thinning of the ATL, (2) negative correlations of OFC thickness with aggression and task errors, indicative of impulsivity, (3) negative correlations of UF axial diffusivity with error rates and aggression, and (4) elevated resting-state functional connectivity between the ATL and OFC. Using machine learning, we found that UF diffusion imaging differentiates athletes from healthy controls with significant classifiers based on UF mean and radial diffusivity showing 79-84 % sensitivity and specificity, and 0.8 areas under the ROC curves. The spatial pattern of classifier weights revealed hot spots at the orbitofrontal and temporal ends of the UF. These data implicate the UF system in the pathological outcomes of repeated concussion as they relate to impulsive behaviour. Furthermore, a support vector machine has potential utility in the general assessment and diagnosis of brain abnormalities following concussion.
MLBCD: a machine learning tool for big clinical data.
Luo, Gang
2015-01-01
Predictive modeling is fundamental for extracting value from large clinical data sets, or "big clinical data," advancing clinical research, and improving healthcare. Machine learning is a powerful approach to predictive modeling. Two factors make machine learning challenging for healthcare researchers. First, before training a machine learning model, the values of one or more model parameters called hyper-parameters must typically be specified. Due to their inexperience with machine learning, it is hard for healthcare researchers to choose an appropriate algorithm and hyper-parameter values. Second, many clinical data are stored in a special format. These data must be iteratively transformed into the relational table format before conducting predictive modeling. This transformation is time-consuming and requires computing expertise. This paper presents our vision for and design of MLBCD (Machine Learning for Big Clinical Data), a new software system aiming to address these challenges and facilitate building machine learning predictive models using big clinical data. The paper describes MLBCD's design in detail. By making machine learning accessible to healthcare researchers, MLBCD will open the use of big clinical data and increase the ability to foster biomedical discovery and improve care.
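A small sketch of the automated hyper-parameter selection that MLBCD is meant to hide from the user, using scikit-learn's grid search on a synthetic "clinical" table; the variable names and grid values are illustrative assumptions, not MLBCD's implementation.

```python
# Cross-validated grid search over SVM hyper-parameters (illustrative).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(10)
X = rng.normal(size=(300, 12))                       # rows = patients, columns = clinical variables
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 300) > 0).astype(int)

pipe = make_pipeline(StandardScaler(), SVC())
grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.1]}
search = GridSearchCV(pipe, grid, cv=5)
search.fit(X, y)
print("best hyper-parameters:", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```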
Study of a non-equilibrium plasma pinch with application for microwave generation
NASA Astrophysics Data System (ADS)
Al Agry, Ahmad Farouk
The Non-Equilibrium Plasma Pinch (NEPP), also known as the Dense Plasma Focus (DPF), is well known as a source of energetic ions, relativistic electrons and neutrons, as well as electromagnetic radiation extending from the infrared to X-ray. In this dissertation, the operation of a 15 kJ, Mather-type NEPP machine is studied in detail. A large number of experiments are carried out to tune the machine parameters for best performance using helium and hydrogen as filling gases. The NEPP machine is modified to extract the copious number of electrons generated at the pinch. A hollow anode with a small hole at the flat end and a mock magnetron without a biasing magnetic field are built. The electrons generated at the pinch are very difficult to capture; therefore a novel device is built to capture and transport the electrons from the pinch to the magnetron. This novel cup-rod-needle device successfully captures and transports electrons and allows the pinch current to be monitored. Further, the device has the potential to field-emit charges from its needle end, acting as a pulsed electron source for other devices such as the magnetron. Diagnostic tools are designed, modeled, built, calibrated, and implemented in the machine to measure the pinch dynamics. Novel, UNLV-patented electromagnetic dot sensors are successfully calibrated and implemented in the machine. A new calibration technique is developed, and test stands are designed and built to measure the dot's ability to track the impetus signal over its dynamic range, starting and ending in the noise region. The patented EM-dot sensor shows superior performance over traditional electromagnetic sensors, such as Rogowski coils. On the other hand, the cup-rod structure, when grounded on the rod side, serves as a diagnostic tool to monitor the pinch current by sampling the actual current, a quantity that has always been very challenging to measure without perturbing the pinch. To the best of our knowledge, this method of measuring the pinch current is unique and has never been done before. Agreement with other models is shown. The operation of the NEPP machine with the hole in the center of the anode and the magnetron connected, including the cup-rod structure, is examined against the NEPP machine signature with a solid anode. Both cases show excellent agreement. This suggests that the existence of the hole and the diagnostic tool inside the anode has negligible effects on the pinch.
Aerodynamic seal assemblies for turbo-machinery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bidkar, Rahul Anil; Wolfe, Christopher; Fang, Biao
2015-09-29
The present application provides an aerodynamic seal assembly for use with a turbo-machine. The aerodynamic seal assembly may include a number of springs, a shoe connected to the springs, and a secondary seal positioned about the springs and the shoe.
30 CFR 77.1304 - Blasting agents; special provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... connection with pneumatic loading machines shall be of the semiconductive type, having a total resistance low... electric currents to a safe level. Wire-countered hose shall not be used because of the potential hazard from stray electric currents. ...
30 CFR 77.1304 - Blasting agents; special provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... connection with pneumatic loading machines shall be of the semiconductive type, having a total resistance low... electric currents to a safe level. Wire-countered hose shall not be used because of the potential hazard from stray electric currents. ...
Modelling parallel programs and multiprocessor architectures with AXE
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Fineman, Charles E.
1991-01-01
AXE, An Experimental Environment for Parallel Systems, was designed to model and simulate parallel systems at the process level. It provides an integrated environment for specifying computation models, multiprocessor architectures, data collection, and performance visualization. AXE is being used at NASA-Ames for developing resource management strategies, parallel problem formulation, multiprocessor architectures, and operating system issues related to the High Performance Computing and Communications Program. AXE's simple, structured user-interface enables the user to model parallel programs and machines precisely and efficiently. Its quick turn-around time keeps the user interested and productive. AXE models multicomputers. The user may easily modify various architectural parameters including the number of sites, connection topologies, and overhead for operating system activities. Parallel computations in AXE are represented as collections of autonomous computing objects known as players. Their use and behavior are described. Performance data of the multiprocessor model can be observed on a color screen. These include CPU and message routing bottlenecks, and the dynamic status of the software.
Arbitrary norm support vector machines.
Huang, Kaizhu; Zheng, Danian; King, Irwin; Lyu, Michael R
2009-02-01
Support vector machines (SVM) are state-of-the-art classifiers. Typically L2-norm or L1-norm is adopted as a regularization term in SVMs, while other norm-based SVMs, for example, the L0-norm SVM or even the L(infinity)-norm SVM, are rarely seen in the literature. The major reason is that L0-norm describes a discontinuous and nonconvex term, leading to a combinatorially NP-hard optimization problem. In this letter, motivated by Bayesian learning, we propose a novel framework that can implement arbitrary norm-based SVMs in polynomial time. One significant feature of this framework is that only a sequence of sequential minimal optimization problems needs to be solved, thus making it practical in many real applications. The proposed framework is important in the sense that Bayesian priors can be efficiently plugged into most learning methods without knowing the explicit form. Hence, this builds a connection between Bayesian learning and the kernel machines. We derive the theoretical framework, demonstrate how our approach works on the L0-norm SVM as a typical example, and perform a series of experiments to validate its advantages. Experimental results on nine benchmark data sets are very encouraging. The implemented L0-norm is competitive with or even better than the standard L2-norm SVM in terms of accuracy but with a reduced number of support vectors, 9.46% fewer on average. When compared with another sparse model, the relevance vector machine, our proposed algorithm also demonstrates better sparse properties with a training speed over seven times faster.
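A small illustration of the accuracy-versus-sparsity trade-off the letter discusses, using off-the-shelf L1- and L2-regularized linear SVMs as a baseline contrast; this is not the proposed Bayesian arbitrary-norm framework, and the dataset is synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=400, n_features=50, n_informative=5, random_state=0)
for penalty in ("l2", "l1"):
    clf = LinearSVC(penalty=penalty, dual=False, C=1.0, max_iter=5000)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    clf.fit(X, y)
    nonzero = np.sum(np.abs(clf.coef_) > 1e-6)   # sparsity proxy: surviving weights
    print(f"{penalty}: accuracy={acc:.3f}, nonzero weights={nonzero}")
```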
Investigation of approximate models of experimental temperature characteristics of machines
NASA Astrophysics Data System (ADS)
Parfenov, I. V.; Polyakov, A. N.
2018-05-01
This work is devoted to the investigation of various approaches to the approximation of experimental data and the creation of simulation mathematical models of thermal processes in machines, with the aim of finding ways to shorten their field tests and to reduce the thermally induced machining error. The main research methods used in this work are: full-scale thermal testing of machines; approximation of the experimental temperature characteristics of machine tools by polynomial models using several approaches; and analysis and evaluation of the modelling results (model quality) for the temperature characteristics of machines and their time derivatives up to the third order. As a result of the performed research, rational methods, types, parameters and complexity of simulation mathematical models of thermal processes in machine tools are proposed.
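A sketch of one of the approximation approaches mentioned above: fitting a polynomial model to an experimentally measured temperature characteristic and evaluating its time derivatives up to third order. The data here are synthetic stand-ins for a real thermal test record.

```python
import numpy as np

t = np.linspace(0, 240, 49)                       # minutes of thermal testing
temp = 20 + 15 * (1 - np.exp(-t / 60)) + np.random.default_rng(0).normal(0, 0.1, t.size)

coeffs = np.polyfit(t, temp, deg=4)               # polynomial model of the heating curve
model = np.poly1d(coeffs)
residual = temp - model(t)
print("RMS fit error:", np.sqrt(np.mean(residual**2)))
for order in (1, 2, 3):                           # derivatives used to assess model quality
    print(f"d^{order}T/dt^{order} at t=120 min:", np.polyder(model, order)(120.0))
```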
NASA Astrophysics Data System (ADS)
Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna
2018-03-01
The purpose of this study was to improve the accuracy of three-axis vertical CNC milling machines through a general approach based on mathematical modeling of machine tool geometric errors. The inaccuracy of CNC machines can be caused by geometric errors, which are an important factor during the manufacturing process and the assembly phase and must be controlled in order to build high-accuracy machines. The accuracy of the three-axis vertical milling machine is improved by identifying the geometric errors and the error position parameters of the machine tool and arranging them in a mathematical model. The geometric error of the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters and three squareness error parameters. The mathematical model of the geometric error accounts for the alignment and angular errors in the components supporting the machine motion, namely the linear guideways and linear motion elements. The purpose of this mathematical modeling approach is the identification of geometric errors, which serves as a reference during the design, assembly and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling the geometric errors of CNC machine tools illustrates the relationship between the alignment error, position and angle on the linear guideways of three-axis vertical milling machines.
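A hedged sketch of the kind of geometric-error model described above: each axis contributes a homogeneous transformation built from three linear and three (small) angular error parameters, and the tool-tip error follows from composing the axis transforms. The parameter values and composition order are illustrative only, not the paper's calibration data.

```python
import numpy as np

def error_htm(dx, dy, dz, ea, eb, ec):
    """Small-angle homogeneous transform for one axis's six error parameters."""
    return np.array([[1.0, -ec,  eb, dx],
                     [ ec, 1.0, -ea, dy],
                     [-eb,  ea, 1.0, dz],
                     [0.0, 0.0, 0.0, 1.0]])

# one illustrative set of errors per axis (mm and rad)
Ex = error_htm(0.002, 0.001, 0.000, 5e-6, 8e-6, 2e-6)
Ey = error_htm(0.001, 0.003, 0.001, 3e-6, 4e-6, 6e-6)
Ez = error_htm(0.000, 0.001, 0.002, 7e-6, 1e-6, 3e-6)

nominal_tool = np.array([0.0, 0.0, -150.0, 1.0])   # tool tip in the spindle frame (mm)
actual_tool = Ex @ Ey @ Ez @ nominal_tool          # composed axis error transforms
print("volumetric error vector (mm):", actual_tool[:3] - nominal_tool[:3])
```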
2016-01-01
Background As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. Objective To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. Methods A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. Results The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. Conclusions A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. PMID:27986644
Phacoemulsification tip vacuum pressure: Comparison of 4 devices.
Payne, Marielle; Georgescu, Dan; Waite, Aaron N; Olson, Randall J
2006-08-01
To determine the vacuum pressure generated by 4 phacoemulsification devices, measured at the phacoemulsification tip. University ophthalmology department. The effective vacuum pressures generated by the Sovereign (AMO), Millennium (Bausch & Lomb), Legacy AdvanTec (Alcon Laboratories), and Infiniti (Alcon Laboratories) phacoemulsification machines were measured with a device that isolated the phacoemulsification tip in a chamber connected to a pressure gauge. The 4 machines were tested at multiple vacuum limit settings, and the values were recorded after the foot pedal was fully depressed and the pressure had stabilized. The AdvanTec and Infiniti machines were tested with and without occlusion of the Aspiration Bypass System (ABS) side port (Alcon Laboratories). The Millennium machine was tested using venturi and peristaltic pumps. The machines generated pressures close to those expected at maximum vacuum settings between 100 mm Hg and 500 mm Hg, with few intermachine variations. There was no significant difference between pressures generated using 19- or 20-gauge tips (Millennium and Sovereign). The addition of an ABS side port decreased vacuum by a mean of 12.1% (P < .0001). Although there were some variations in vacuum pressures among phacoemulsification machines, particularly when an aspiration bypass tip was used, these discrepancies are probably not clinically significant.
University of Maryland walking robot: A design project for undergraduate students
NASA Technical Reports Server (NTRS)
Olsen, Bob; Bielec, Jim; Hartsig, Dave; Oliva, Mani; Grotheer, Phil; Hekmat, Morad; Russell, David; Tavakoli, Hossein; Young, Gary; Nave, Tom
1990-01-01
The design and construction required that the walking robot machine be capable of completing a number of tasks including walking in a straight line, turning to change direction, and maneuvering over an obstacle such as a set of stairs. The machine consists of two sets of four telescoping legs that alternately support the entire structure. A gear-box and crank-arm assembly is connected to the leg sets to provide the power required for the translational motion of the machine. By retracting all eight legs, the robot comes to rest on a central Bigfoot support. Turning is accomplished by rotating the machine about this support. The machine can be controlled by using either a user operated remote tether or the on-board computer for the execution of control commands. Absolute encoders are attached to all motors (leg, main drive, and Bigfoot) to provide the control computer with information regarding the status of the motors (up-down motion, forward or reverse rotation). Long and short range infrared sensors provide the computer with feedback information regarding the machine's relative position to a series of stripes and reflectors. These infrared sensors simulate how the robot might sense and gain information about the environment of Mars.
Extreme ultraviolet lithography machine
Tichenor, Daniel A.; Kubiak, Glenn D.; Haney, Steven J.; Sweeney, Donald W.
2000-01-01
An extreme ultraviolet lithography (EUVL) machine or system for producing integrated circuit (IC) components, such as transistors, formed on a substrate. The EUVL machine utilizes a laser plasma point source directed via an optical arrangement onto a mask or reticle, which is reflected by a multiple mirror system onto the substrate or target. The EUVL machine operates with soft x-ray photons in the 10-14 nm wavelength range. Basically, the EUVL machine includes an evacuated source chamber and an evacuated main or projection chamber interconnected by a transport tube arrangement. A laser beam is directed into a plasma generator which produces an illumination beam; this beam is directed by optics from the source chamber, through the connecting tube, into the projection chamber, and onto the reticle or mask, from which a patterned beam is reflected by optics in a projection optics (PO) box mounted in the main or projection chamber onto the substrate. In one embodiment of an EUVL machine, nine optical components are utilized, with four of the optical components located in the PO box. The main or projection chamber includes vibration isolators for the PO box and a vibration-isolating mounting for the substrate, with the main or projection chamber being mounted on a support structure and being isolated.
A novel multi-model neuro-fuzzy-based MPPT for three-phase grid-connected photovoltaic system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaouachi, Aymen; Kamel, Rashad M.; Nagasaka, Ken
This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three multi-layered feed-forward Artificial Neural Networks (ANN). The inputs of the network (irradiance and temperature) are classified before they are fed into the appropriate ANN for either the training or the estimation process, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single neural network-based approach, is its distinct generalization ability with respect to the nonlinear and dynamic behavior of a PV generator. In fact, the neuro-fuzzy network is a neural network-based multi-model machine learning scheme that defines a set of local models emulating the complex and nonlinear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations show that the proposed MPPT method achieves the highest efficiency compared to a conventional single neural network and the Perturb and Observe (P and O) algorithm.
NASA Astrophysics Data System (ADS)
Mohammed, K.; Islam, A. S.; Khan, M. J. U.; Das, M. K.
2017-12-01
With the large number of hydrologic models presently available along with the global weather and geographic datasets, streamflows of almost any river in the world can be easily modeled. And if a reasonable amount of observed data from that river is available, then simulations of high accuracy can sometimes be performed after calibrating the model parameters against those observed data through inverse modeling. Although such calibrated models can succeed in simulating the general trend or mean of the observed flows very well, more often than not they fail to adequately simulate the extreme flows. This causes difficulty in tasks such as generating reliable projections of future changes in extreme flows due to climate change, which is obviously an important task due to floods and droughts being closely connected to people's lives and livelihoods. We propose an approach where the outputs of a physically-based hydrologic model are used as an input to a machine learning model to try and better simulate the extreme flows. To demonstrate this offline-coupling approach, the Soil and Water Assessment Tool (SWAT) was selected as the physically-based hydrologic model, the Artificial Neural Network (ANN) as the machine learning model and the Ganges-Brahmaputra-Meghna (GBM) river system as the study area. The GBM river system, located in South Asia, is the third largest in the world in terms of freshwater generated and forms the largest delta in the world. The flows of the GBM rivers were simulated separately in order to test the performance of this proposed approach in accurately simulating the extreme flows generated by different basins that vary in size, climate, hydrology and anthropogenic intervention on stream networks. Results show that by post-processing the simulated flows of the SWAT models with ANN models, simulations of extreme flows can be significantly improved. The mean absolute errors in simulating annual maximum/minimum daily flows were minimized from 4967 cusecs to 1294 cusecs for Ganges, from 5695 cusecs to 2115 cusecs for Brahmaputra and from 689 cusecs to 321 cusecs for Meghna. Using this approach, simulations of hydrologic variables other than streamflow can also be improved given that a decent amount of observed data for that variable is available.
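A minimal sketch of the offline coupling described above: a neural network is trained to map physically based model output (plus a simple seasonal feature) to observed flow, and is then used to correct the simulated extremes. The arrays stand in for SWAT output and gauge observations, which are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
swat_flow = rng.gamma(2.0, 5000.0, 2000)               # simulated daily flow (cusecs)
day_of_year = rng.integers(1, 366, 2000)
observed = swat_flow * 1.2 + 2000 * np.sin(2 * np.pi * day_of_year / 365) + rng.normal(0, 500, 2000)

X = np.column_stack([swat_flow, day_of_year])
X_tr, X_te, y_tr, y_te = train_test_split(X, observed, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
corrected = ann.predict(X_te)
print("MAE before correction:", np.mean(np.abs(X_te[:, 0] - y_te)))
print("MAE after correction: ", np.mean(np.abs(corrected - y_te)))
```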
Computational Nanotechnology of Materials, Devices, and Machines: Carbon Nanotubes
NASA Technical Reports Server (NTRS)
Srivastava, Deepak; Kwak, Dolhan (Technical Monitor)
2000-01-01
The mechanics and chemistry of carbon nanotubes have relevance for their numerous electronic applications. Mechanical deformations such as bending and twisting affect the nanotube's conductive properties, and at the same time they possess high strength and elasticity. Two principal techniques were utilized including the analysis of large scale classical molecular dynamics on a shared memory architecture machine and a quantum molecular dynamics methodology. In carbon based electronics, nanotubes are used as molecular wires with topological defects which are mediated through various means. Nanotubes can be connected to form junctions.
Performance of quantum cloning and deleting machines over coherence
NASA Astrophysics Data System (ADS)
Karmakar, Sumana; Sen, Ajoy; Sarkar, Debasis
2017-10-01
Coherence, being at the heart of interference phenomena, is found to be an useful resource in quantum information theory. Here we want to understand quantum coherence under the combination of two fundamentally dual processes, viz., cloning and deleting. We found the role of quantum cloning and deletion machines with the consumption and generation of quantum coherence. We establish cloning as a cohering process and deletion as a decohering process. Fidelity of the process will be shown to have connection with coherence generation and consumption of the processes.
Shuck, A.B.
1958-04-01
A device is described that is specifically designed to cast uraniumn fuel rods in a vacuunn, in order to obtain flawless, nonoxidized castings which subsequently require a maximum of machining or wastage of the expensive processed material. A chamber surrounded with heating elements is connected to the molds, and the entire apparatus is housed in an airtight container. A charge of uranium is placed in the chamber, heated, then is allowed to flow into the molds While being rotated. Water circulating through passages in the molds chills the casting to form a fine grained fuel rod in nearly finished form.
Lee, Giljae; Matsunaga, Andréa; Dura-Bernal, Salvador; Zhang, Wenjie; Lytton, William W; Francis, Joseph T; Fortes, José Ab
2014-11-01
Development of more sophisticated implantable brain-machine interfaces (BMIs) will require both interpretation of the neurophysiological data being measured and subsequent determination of signals to be delivered back to the brain. Computational models are at the heart of BMIs and therefore an essential tool in both of these processes. One approach is to utilize brain biomimetic models (BMMs) to develop and instantiate these algorithms. These then must be connected as hybrid systems in order to interface the BMM with in vivo data acquisition devices and prosthetic devices. The combined system then provides a test bed for neuroprosthetic rehabilitative solutions and medical devices for the repair and enhancement of the damaged brain. We propose here a computer network-based design for this purpose, detailing its internal modules and data flows. We describe a prototype implementation of the design, enabling interaction between the Plexon Multichannel Acquisition Processor (MAP) server, a commercial tool to collect signals from microelectrodes implanted in a live subject, and a BMM, a NEURON-based model of sensorimotor cortex capable of controlling a virtual arm. The prototype implementation supports an online mode for real-time simulations, as well as an offline mode for data analysis and simulations without real-time constraints, and provides binning operations to discretize continuous input to the BMM and filtering operations for dealing with noise. Evaluation demonstrated that the implementation successfully delivered monkey spiking activity to the BMM through LAN environments, respecting real-time constraints.
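A sketch of the binning operation mentioned above: continuous spike-time input from the acquisition device is discretized into fixed-width bins before being passed to the brain biomimetic model. The spike times here are synthetic, and the 50 ms bin width is an assumption for illustration.

```python
import numpy as np

spike_times_s = np.sort(np.random.default_rng(0).uniform(0.0, 10.0, 500))  # one channel, 10 s
bin_width_s = 0.05                                                          # 50 ms bins

edges = np.arange(0.0, 10.0 + bin_width_s, bin_width_s)
counts, _ = np.histogram(spike_times_s, bins=edges)
rates_hz = counts / bin_width_s
print("number of bins:", counts.size, "peak rate (Hz):", rates_hz.max())
```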
Energy optimization for a wind DFIG with flywheel energy storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamzaoui, Ihssen, E-mail: hamzaoui-ihssen2000@yahoo.fr; Laboratory of Instrumentation, Faculty of Electronics and Computer, University of Khemis Miliana, Ain Defla; Bouchafaa, Farid, E-mail: fbouchafa@gmail.com
2016-07-25
The type of distributed generation unit that is the subject of this paper relates to renewable energy sources, especially wind power. The wind generator used is based on a doubly fed induction generator (DFIG). The stator of the DFIG is connected directly to the network and the rotor is connected to the network through a three-level power converter. The objective of this work is to study the association of a Flywheel Energy Storage System (FESS) with the wind generator. This system is used to improve the quality of the electricity provided by the wind generator. It is composed of a flywheel, an induction machine (IM) and a power electronic converter. A maximum power tracking technique, Maximum Power Point Tracking (MPPT), and a strategy for controlling the pitch angle are presented. The model of the complete system is developed in the Matlab/Simulink environment to analyze, through simulation, the integration of the wind chain into the network.
Sardiu, Mihaela E; Gilmore, Joshua M; Carrozza, Michael J; Li, Bing; Workman, Jerry L; Florens, Laurence; Washburn, Michael P
2009-10-06
Protein complexes are key molecular machines executing a variety of essential cellular processes. Despite the availability of genome-wide protein-protein interaction studies, determining the connectivity between proteins within a complex remains a major challenge. Here we demonstrate a method that is able to predict the relationship of proteins within a stable protein complex. We employed a combination of computational approaches and a systematic collection of quantitative proteomics data from wild-type and deletion strain purifications to build a quantitative deletion-interaction network map and subsequently convert the resulting data into an interdependency-interaction model of a complex. We applied this approach to a data set generated from components of the Saccharomyces cerevisiae Rpd3 histone deacetylase complexes, which consists of two distinct small and large complexes that are held together by a module consisting of Rpd3, Sin3 and Ume1. The resulting representation reveals new protein-protein interactions and new submodule relationships, providing novel information for mapping the functional organization of a complex.
Disruptions of network connectivity predict impairment in multiple behavioral domains after stroke
Ramsey, Lenny E.; Metcalf, Nicholas V.; Chacko, Ravi V.; Weinberger, Kilian; Baldassarre, Antonello; Hacker, Carl D.; Shulman, Gordon L.; Corbetta, Maurizio
2016-01-01
Deficits following stroke are classically attributed to focal damage, but recent evidence suggests a key role of distributed brain network disruption. We measured resting functional connectivity (FC), lesion topography, and behavior in multiple domains (attention, visual memory, verbal memory, language, motor, and visual) in a cohort of 132 stroke patients, and used machine-learning models to predict neurological impairment in individual subjects. We found that visual memory and verbal memory were better predicted by FC, whereas visual and motor impairments were better predicted by lesion topography. Attention and language deficits were well predicted by both. Next, we identified a general pattern of physiological network dysfunction consisting of decrease of interhemispheric integration and intrahemispheric segregation, which strongly related to behavioral impairment in multiple domains. Network-specific patterns of dysfunction predicted specific behavioral deficits, and loss of interhemispheric communication across a set of regions was associated with impairment across multiple behavioral domains. These results link key organizational features of brain networks to brain–behavior relationships in stroke. PMID:27402738
Simulation of an array-based neural net model
NASA Technical Reports Server (NTRS)
Barnden, John A.
1987-01-01
Research in cognitive science suggests that much of cognition involves the rapid manipulation of complex data structures. However, it is very unclear how this could be realized in neural networks or connectionist systems. A core question is: how could the interconnectivity of items in an abstract-level data structure be neurally encoded? The answer appeals mainly to positional relationships between activity patterns within neural arrays, rather than directly to neural connections in the traditional way. The new method was initially devised to account for abstract symbolic data structures, but it also supports cognitively useful spatial analogue, image-like representations. As the neural model is based on massive, uniform, parallel computations over 2D arrays, the massively parallel processor is a convenient tool for simulation work, although there are complications in using the machine to the fullest advantage. An MPP Pascal simulation program for a small pilot version of the model is running.
A compositional reservoir simulator on distributed memory parallel computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rame, M.; Delshad, M.
1995-12-31
This paper presents the application of distributed memory parallel computers to field scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general purpose, highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/960 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes porting to new parallel platforms straightforward. Results of the distributed memory computing performance of the parallel simulator are presented for field scale applications such as tracer flood and polymer flood. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented.
... heavily for at least 30 minutes before the test. Do not wear tight clothing that makes it difficult for you ... be blowing into a tube connected to a machine (spirometer). To get the "best" test result, the test is repeated three times. You ...
NASA Astrophysics Data System (ADS)
Bailly, J. S.; Delenne, C.; Chahinian, N.; Bringay, S.; Commandré, B.; Chaumont, M.; Derras, M.; Deruelle, L.; Roche, M.; Rodriguez, F.; Subsol, G.; Teisseire, M.
2017-12-01
In France, local government institutions must establish a detailed description of wastewater networks. The information should be available, but it remains fragmented (different formats held by different stakeholders) and incomplete. In the "Cart'Eaux" project, a multidisciplinary team, including an industrial partner, develops a global methodology using Machine Learning and Data Mining approaches applied to various types of large data to recover information, with the aim of mapping urban sewage systems for hydraulic modelling. Deep learning is first applied using a Convolutional Neural Network to localize manhole covers on 5 cm resolution aerial RGB images. The detected manhole covers are then automatically connected using a tree-shaped graph constrained by industry rules. Based on a Delaunay triangulation, connections are chosen to minimize a cost function depending on pipe length, slope and possible intersection with roads or buildings. A stochastic version of this algorithm is currently being developed to account for positional uncertainty and detection errors, and to generate sets of probable networks. As more information is required for hydraulic modeling (slopes, diameters, materials, etc.), text data mining is used to extract network characteristics from data posted on the Web or available through governmental or specific databases. Using an appropriate list of keywords, the web is scoured for documents, which are saved in text format. The thematic entities are identified and linked to the surrounding spatial and temporal entities. The methodology is developed and tested on two towns in southern France. The primary results are encouraging: 54% of manhole covers are detected with few false detections, enabling the reconstruction of probable networks. The data mining results are still being investigated. It is clear at this stage that getting numerical values on specific pipes will be challenging. Thus, when no information is found, decision rules will be used to assign admissible numerical values to enable the final hydraulic modelling. Consequently, sensitivity analysis of the hydraulic model will be performed to take into account the uncertainty associated with each piece of information. Project funded by the European Regional Development Fund and the Occitanie Region.
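A hedged sketch of the connection step described above: detected manhole covers become graph nodes, candidate pipes are the Delaunay edges, and a tree-shaped network is extracted by minimizing a length-based cost. The real cost function also penalizes slope and crossings with roads or buildings, which is omitted here; the coordinates are made up.

```python
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

covers = np.random.default_rng(0).uniform(0, 500, (30, 2))   # detected cover positions (m)
tri = Delaunay(covers)

graph = nx.Graph()
for simplex in tri.simplices:                  # collect Delaunay edges as candidate pipes
    for i, j in ((0, 1), (1, 2), (0, 2)):
        a, b = int(simplex[i]), int(simplex[j])
        cost = np.linalg.norm(covers[a] - covers[b])   # stand-in for the full cost function
        graph.add_edge(a, b, weight=cost)

network = nx.minimum_spanning_tree(graph)      # tree-shaped sewer network hypothesis
total_length = sum(d["weight"] for _, _, d in network.edges(data=True))
print("pipes:", network.number_of_edges(), "total length (m):", round(total_length, 1))
```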
Large space structures fabrication experiment. [on-orbit fabrication of graphite/thermoplastic beams
NASA Technical Reports Server (NTRS)
1978-01-01
The fabrication machine used for the rolltrusion and on-orbit forming of graphite thermoplastic (CTP) strip material into structural sections is described. The basic process was analytically developed parallel with, and integrated into the conceptual design of, a flight experiment machine for producing a continuous triangular cross section truss. The machine and its associated ancillary equipment are mounted on a Space Lab pallet. Power, thermal control, and instrumentation connections are made during ground installation. Observation, monitoring, caution and warning, and control panels and displays are installed at the payload specialist station in the orbiter. The machine is primed before flight by initiation of beam forming, to include attachment of the first set of cross members and anchoring of the diagonal cords. Control of the experiment will be from the orbiter mission specialist station. Normal operation is by automatic processing control software. Machine operating data are displayed and recorded on the ground. Data is processed and formatted to show progress of the major experiment parameters including stable operation, physical symmetry, joint integrity, and structural properties.
Intelligence-Augmented Rat Cyborgs in Maze Solving.
Yu, Yipeng; Pan, Gang; Gong, Yongyue; Xu, Kedi; Zheng, Nenggan; Hua, Weidong; Zheng, Xiaoxiang; Wu, Zhaohui
2016-01-01
Cyborg intelligence is an emerging kind of intelligence paradigm. It aims to deeply integrate machine intelligence with biological intelligence by connecting machines and living beings via neural interfaces, enhancing strength by combining the biological cognition capability with the machine computational capability. Cyborg intelligence is considered to be a new way to augment living beings with machine intelligence. In this paper, we build rat cyborgs to demonstrate how they can expedite the maze escape task with integration of machine intelligence. We compare the performance of maze solving by computer, by individual rats, and by computer-aided rats (i.e. rat cyborgs). They were asked to find their way from a constant entrance to a constant exit in fourteen diverse mazes. Performance of maze solving was measured by steps, coverage rates, and time spent. The experimental results with six rats and their intelligence-augmented rat cyborgs show that rat cyborgs have the best performance in escaping from mazes. These results provide a proof-of-principle demonstration for cyborg intelligence. In addition, our novel cyborg intelligent system (rat cyborg) has great potential in various applications, such as search and rescue in complex terrains.
Intelligence-Augmented Rat Cyborgs in Maze Solving
Yu, Yipeng; Pan, Gang; Gong, Yongyue; Xu, Kedi; Zheng, Nenggan; Hua, Weidong; Zheng, Xiaoxiang; Wu, Zhaohui
2016-01-01
Cyborg intelligence is an emerging kind of intelligence paradigm. It aims to deeply integrate machine intelligence with biological intelligence by connecting machines and living beings via neural interfaces, enhancing strength by combining the biological cognition capability with the machine computational capability. Cyborg intelligence is considered to be a new way to augment living beings with machine intelligence. In this paper, we build rat cyborgs to demonstrate how they can expedite the maze escape task with integration of machine intelligence. We compare the performance of maze solving by computer, by individual rats, and by computer-aided rats (i.e. rat cyborgs). They were asked to find their way from a constant entrance to a constant exit in fourteen diverse mazes. Performance of maze solving was measured by steps, coverage rates, and time spent. The experimental results with six rats and their intelligence-augmented rat cyborgs show that rat cyborgs have the best performance in escaping from mazes. These results provide a proof-of-principle demonstration for cyborg intelligence. In addition, our novel cyborg intelligent system (rat cyborg) has great potential in various applications, such as search and rescue in complex terrains. PMID:26859299
Supercomputers ready for use as discovery machines for neuroscience.
Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus
2012-01-01
NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10(8) neurons and 10(12) synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience.
Equivalence of restricted Boltzmann machines and tensor network states
NASA Astrophysics Data System (ADS)
Chen, Jing; Cheng, Song; Xie, Haidong; Wang, Lei; Xiang, Tao
2018-02-01
The restricted Boltzmann machine (RBM) is one of the fundamental building blocks of deep learning. RBM finds wide applications in dimensional reduction, feature extraction, and recommender systems via modeling the probability distributions of a variety of input data including natural images, speech signals, and customer ratings, etc. We build a bridge between RBM and tensor network states (TNS) widely used in quantum many-body physics research. We devise efficient algorithms to translate an RBM into the commonly used TNS. Conversely, we give sufficient and necessary conditions to determine whether a TNS can be transformed into an RBM of given architectures. Revealing these general and constructive connections can cross fertilize both deep learning and quantum many-body physics. Notably, by exploiting the entanglement entropy bound of TNS, we can rigorously quantify the expressive power of RBM on complex data sets. Insights into TNS and its entanglement capacity can guide the design of more powerful deep learning architectures. On the other hand, RBM can represent quantum many-body states with fewer parameters compared to TNS, which may allow more efficient classical simulations.
Supercomputers Ready for Use as Discovery Machines for Neuroscience
Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus
2012-01-01
NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10(8) neurons and 10(12) synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience. PMID:23129998
Precise positioning method for multi-process connecting based on binocular vision
NASA Astrophysics Data System (ADS)
Liu, Wei; Ding, Lichao; Zhao, Kai; Li, Xiao; Wang, Ling; Jia, Zhenyuan
2016-01-01
With the rapid development of aviation and aerospace, the demand for metal-coated parts such as antenna reflectors, eddy-current sensors and signal transmitters is increasingly urgent. Such parts, with varied feature dimensions, complex three-dimensional structures, and high geometric accuracy, are generally fabricated by a combination of different manufacturing technologies. However, it is difficult to ensure the machining precision because of the connection error between different processing methods. Therefore, a precise positioning method based on binocular micro stereo vision is proposed in this paper. Firstly, a novel and efficient camera calibration method for the stereoscopic microscope is presented to address its narrow field of view, small depth of focus and strong nonlinear distortions. Secondly, extraction algorithms for regular and free-form curves are given, and the spatial position relationship between the micro vision system and the machining system is determined accurately. Thirdly, a precise positioning system based on micro stereovision is set up and then embedded in a CNC machining experiment platform. Finally, a verification experiment of the positioning accuracy is conducted, and the experimental results indicate that the average errors of the proposed method in the X and Y directions are 2.250 μm and 1.777 μm, respectively.
Resting-State Functional Connectivity Underlying Costly Punishment: A Machine-Learning Approach.
Feng, Chunliang; Zhu, Zhiyuan; Gu, Ruolei; Wu, Xia; Luo, Yue-Jia; Krueger, Frank
2018-06-08
A large number of studies have demonstrated costly punishment to unfair events across human societies. However, individuals exhibit a large heterogeneity in costly punishment decisions, whereas the neuropsychological substrates underlying the heterogeneity remain poorly understood. Here, we addressed this issue by applying a multivariate machine-learning approach to compare topological properties of resting-state brain networks as a potential neuromarker between individuals exhibiting different punishment propensities. A linear support vector machine classifier obtained an accuracy of 74.19% employing the features derived from resting-state brain networks to distinguish two groups of individuals with different punishment tendencies. Importantly, the most discriminative features that contributed to the classification were those regions frequently implicated in costly punishment decisions, including dorsal anterior cingulate cortex (dACC) and putamen (salience network), dorsomedial prefrontal cortex (dmPFC) and temporoparietal junction (mentalizing network), and lateral prefrontal cortex (central-executive network). These networks are previously implicated in encoding norm violation and intentions of others and integrating this information for punishment decisions. Our findings thus demonstrated that resting-state functional connectivity (RSFC) provides a promising neuromarker of social preferences, and bolster the assertion that human costly punishment behaviors emerge from interactions among multiple neural systems. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
A design of the u-health monitoring system using a Nintendo DS game machine.
Lee, Sangjoon; Kim, Jinkwon; Kim, Jungkuk; Lee, Myoungho
2009-01-01
In this paper, we used a handheld Nintendo DS game machine to build a u-health monitoring system. The system consists of four parts: a biosignal acquisition device, a wireless sensor network device, a wireless base station for connecting to the internet, and the display units, namely a personal computer and a Nintendo DS game machine. The biosignal measurement device can acquire 7 channels of data comprising a 3-channel ECG (electrocardiogram), 3-axis accelerometer data and tilt sensor data. The acquired data reach the internet through the wireless sensor network and the base station. In the experiment, we concurrently displayed the biosignals on a personal computer monitor and on the LCD of a Nintendo DS using the wireless internet protocol, with the monitoring devices placed on one side of an office building. The results of the experiment show that the proposed system can effectively transmit a patient's biosignal data over long periods and long distances. The proposed u-health monitoring system is intended to operate in ambulances, general hospitals and geriatric institutions as a u-health monitoring device.
Decomposition of the compound Atwood machine
NASA Astrophysics Data System (ADS)
Lopes Coelho, R.
2017-11-01
Non-standard solving strategies for the compound Atwood machine problem have been proposed. The present strategy is based on a very simple idea. Taking an Atwood machine and replacing one of its bodies by another Atwood machine, we have a compound machine. As this operation can be repeated, we can construct any compound Atwood machine. This rule of construction is transferred to a mathematical model, whereby the equations of motion are obtained. The only difference between the machine and its model is that instead of pulleys and bodies, we have reference frames that move solidarily with these objects. This model provides us with the accelerations in the non-inertial frames of the bodies, which we will use to obtain the equations of motion. This approach to the problem will be justified by the Lagrange method and exemplified by machines with six and eight bodies.
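A sketch of the recursive construction described above, under the standard idealizations (massless pulleys and inextensible strings). A commonly used reduction replaces a sub-machine hanging from a pulley by an equivalent mass 4*m_a*m_b/(m_a+m_b), which reproduces the acceleration of the level above; the accelerations of the inner bodies then follow by working back down through the accelerating (non-inertial) frames, in the spirit of the approach outlined in the abstract.

```python
G = 9.81  # m/s^2

def equivalent_mass(node):
    """A node is either a plain mass (number) or a pair (left, right) of nodes."""
    if isinstance(node, (int, float)):
        return float(node)
    ma, mb = (equivalent_mass(child) for child in node)
    return 4.0 * ma * mb / (ma + mb)

def top_acceleration(machine):
    """Acceleration of the left branch of the topmost pulley (positive = downward)."""
    ma, mb = (equivalent_mass(child) for child in machine)
    return (ma - mb) * G / (ma + mb)

# Compound machine: a 3 kg body on one side, a sub-machine carrying 1 kg and 2 kg on the other.
machine = (3.0, (1.0, 2.0))
print("acceleration of the 3 kg body:", top_acceleration(machine), "m/s^2")  # about g/17
```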
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mou, J.I.; King, C.
The focus of this study is to develop a sensor-fused process modeling and control methodology to model, assess, and then enhance the performance of a hexapod machine for precision product realization. A deterministic modeling technique was used to derive models for machine performance assessment and enhancement. A sensor fusion methodology was adopted to identify the parameters of the derived models. Empirical models and computational algorithms were also derived and implemented to model, assess, and then enhance the machine performance. The developed sensor fusion algorithms can be implemented on a PC-based open architecture controller to receive information from various sensors, assess the status of the process, determine the proper action, and deliver the command to actuators for task execution. This will enhance a hexapod machine's capability to produce workpieces within the imposed dimensional tolerances.
Task-focused modeling in automated agriculture
NASA Astrophysics Data System (ADS)
Vriesenga, Mark R.; Peleg, K.; Sklansky, Jack
1993-01-01
Machine vision systems analyze image data to carry out automation tasks. Our interest is in machine vision systems that rely on models to achieve their designed task. When the model is interrogated from an a priori menu of questions, the model need not be complete. Instead, the machine vision system can use a partial model that contains a large amount of information in regions of interest and less information elsewhere. We propose an adaptive modeling scheme for machine vision, called task-focused modeling, which constructs a model having just sufficient detail to carry out the specified task. The model is detailed in regions of interest to the task and is less detailed elsewhere. This focusing effect saves time and reduces the computational effort expended by the machine vision system. We illustrate task-focused modeling by an example involving real-time micropropagation of plants in automated agriculture.
Ji, Xiaonan; Yen, Po-Yin
2015-08-31
Systematic reviews and their implementation in practice provide high quality evidence for clinical practice but are both time and labor intensive due to the large number of articles. Automatic text classification has proven to be instrumental in identifying relevant articles for systematic reviews. Existing approaches use machine learning model training to generate classification algorithms for the article screening process but have limitations. We applied a network approach to assist in the article screening process for systematic reviews using predetermined article relationships (similarity). The article similarity metric is calculated using the MEDLINE elements title (TI), abstract (AB), medical subject heading (MH), author (AU), and publication type (PT). We used an article network to illustrate the concept of article relationships. Using this concept, each article can be modeled as a node in the network and the relationship between 2 articles is modeled as an edge connecting them. The purpose of our study was to use the article relationship to facilitate an interactive article recommendation process. We used 15 completed systematic reviews produced by the Drug Effectiveness Review Project and demonstrated the use of article networks to assist article recommendation. We evaluated the predictive performance of MEDLINE elements and compared our approach with existing machine learning model training approaches. The performance was measured by work saved over sampling at 95% recall (WSS95) and the F-measure (F1). We also used repeated-measures analysis of variance and Hommel's multiple comparison adjustment to demonstrate statistical significance. We found that although there is no significant difference across elements (except AU), TI and AB have better predictive capability in general. Combining elements brings performance improvement in both F1 and WSS95. With our approach, a simple combination of TI+AB+PT could achieve a WSS95 performance of 37%, which is competitive with traditional machine learning model training approaches (23%-41% WSS95). We demonstrated a new approach to assist in labor intensive systematic reviews. The predictive ability of different elements (both single and composite) was explored. Without using model training approaches, we established a generalizable method that can achieve a competitive performance.
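A minimal sketch of the article-relationship idea above: a similarity metric computed from title and abstract text, which could serve as edge weights in an article network. The toy records below are placeholders, and the study's composite metric over TI, AB, MH, AU and PT is reduced here to a TF-IDF cosine similarity over TI+AB.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "Randomized trial of drug A versus placebo for hypertension.",
    "Systematic review of antihypertensive drug effectiveness.",
    "Machine learning approaches for protein structure prediction.",
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(articles)
similarity = cosine_similarity(tfidf)          # pairwise article similarity matrix
print(similarity.round(2))
```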
Quick-connect threaded attachment joint
NASA Technical Reports Server (NTRS)
Lucy, M. H.; Messick, W. R.; Vasquez, P.
1979-01-01
Joint is self-aligning and tightens with only sixty-five degrees of rotation for quick connects and disconnects. Made of injection-molded plastics or cast or machined aluminum, joint can carry wires, tubes, liquids, or gases. When two parts of joint are brought together, their shapes align them. Small projections on male section and slots on female section further aid alignment; slight rotation of male form engages projections in slots. At this point, threads engage and male section is rotated until joint is fully engaged.
Human factors model concerning the man-machine interface of mining crewstations
NASA Technical Reports Server (NTRS)
Rider, James P.; Unger, Richard L.
1989-01-01
The U.S. Bureau of Mines is developing a computer model to analyze the human factors aspects of mining machine operator compartments. The model will be used as a research tool and as a design aid. It will have the capability to perform the following: simulated anthropometric or reach assessment, visibility analysis, illumination analysis, structural analysis of the protective canopy, operator fatigue analysis, and computation of an ingress-egress rating. The model will make extensive use of graphics to simplify data input and output. Two dimensional orthographic projections of the machine and its operator compartment are digitized and the data rebuilt into a three dimensional representation of the mining machine. Anthropometric data from either an individual or any size population may be used. The model is intended for use by equipment manufacturers and mining companies during initial design work on new machines. In addition to its use in machine design, the model should prove helpful as an accident investigation tool and for determining the effects of machine modifications made in the field on the critical areas of visibility and control reachability.
Improving Machining Accuracy of CNC Machines with Innovative Design Methods
NASA Astrophysics Data System (ADS)
Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.
2018-03-01
The article considers how the machining accuracy of CNC machines can be achieved by applying innovative methods to the modelling and design of machining systems, drives and machining processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory with the efficiency of decomposition methods; it also has the visual clarity inherent in both topological models and structural matrices, as well as the robustness of linear algebra in the matrix-based analysis. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the design and exploitation stages. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which made it possible to considerably reduce the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0-6000 min-1 and improve machining accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Portmann, Greg; /LBL, Berkeley; Safranek, James
The LOCO algorithm has been used by many accelerators around the world. Although the uses for LOCO vary, the most common use has been to find calibration errors and correct the optics functions. The light source community in particular has made extensive use of the LOCO algorithms to tightly control the beta function and coupling. Maintaining high quality beam parameters requires constant attention, so a relatively large effort was put into software development for the LOCO application. The LOCO code was originally written in FORTRAN. This code worked fine but it was somewhat awkward to use. For instance, the FORTRAN code itself did not calculate the model response matrix. It required a separate modeling code such as MAD to calculate the model matrix; the data then had to be loaded manually into the LOCO code. As the number of people interested in LOCO grew, it became necessary to make it easier to use. The decision to port LOCO to Matlab was relatively easy: it is best to use a matrix programming language with good graphics capability; Matlab was also being used for high level machine control; and the accelerator modeling code AT, [5], was already developed for Matlab. Since LOCO requires collecting and processing a relatively large amount of data, it is very helpful to have the LOCO code compatible with the high level machine control, [3]. A number of new features were added while porting the code from FORTRAN and new methods continue to evolve, [7][9]. Although Matlab LOCO was written with AT as the underlying tracking code, a mechanism to connect to other modeling codes has been provided.
Mysql Data-Base Applications for Dst-Like Physics Analysis
NASA Astrophysics Data System (ADS)
Tsenov, Roumen
2004-07-01
The data and analysis model developed and being used in the HARP experiment for studying hadron production at the CERN Proton Synchrotron is discussed. Emphasis is put on the use of database (DB) back-ends for persistently storing and retrieving "alive" C++ objects encapsulating raw and reconstructed data. The concepts of a "Data Summary Tape" (DST) as a logical collection of DB-persistent data of different types, and of an "intermediate DST" (iDST) as a physical "tag" of the DST, are introduced. The iDST level of persistency allows powerful, DST-level analysis to be performed by applications running on an isolated machine (even a laptop) with no connection to the experiment's main data storage. The implementation of these concepts is considered.
Coercivity of domain wall motion in thin films of amorphous rare earth-transition metal alloys
NASA Technical Reports Server (NTRS)
Mansuripur, M.; Giles, R. C.; Patterson, G.
1991-01-01
Computer simulations of a two dimensional lattice of magnetic dipoles are performed on the Connection Machine. The lattice is a discrete model for thin films of amorphous rare-earth transition metal alloys, which have application as the storage media in erasable optical data storage systems. In these simulations, the dipoles follow the dynamic Landau-Lifshitz-Gilbert equation under the influence of an effective field arising from local anisotropy, near-neighbor exchange, classical dipole-dipole interactions, and an externally applied field. Various sources of coercivity, such as defects and/or inhomogeneities in the lattice, are introduced and the subsequent motion of domain walls in response to external fields is investigated.
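As a rough illustration of the dipole-lattice dynamics described above, the following minimal sketch integrates the Landau-Lifshitz-Gilbert equation on a small two-dimensional lattice of unit dipoles in reduced units, with nearest-neighbour exchange, a uniaxial easy axis and an applied field. Classical dipole-dipole interactions and the defects studied in the paper are omitted, and all parameter values are illustrative placeholders rather than values from the simulations.

```python
# Minimal sketch (not the authors' code): explicit-Euler integration of the
# Landau-Lifshitz-Gilbert equation on a small 2D lattice of unit dipoles,
# in reduced units. Exchange, anisotropy, damping and field values are
# illustrative placeholders; dipole-dipole interactions are omitted.
import numpy as np

N = 32                                   # lattice is N x N
J, K, alpha, dt = 1.0, 0.5, 0.1, 0.01    # exchange, anisotropy, damping, time step
H_ext = np.array([0.0, 0.0, -0.2])       # external field (reduced units)

rng = np.random.default_rng(0)
m = rng.normal(size=(N, N, 3))
m /= np.linalg.norm(m, axis=-1, keepdims=True)   # unit dipoles

def effective_field(m):
    # nearest-neighbour exchange with periodic boundaries
    nn = (np.roll(m, 1, 0) + np.roll(m, -1, 0) +
          np.roll(m, 1, 1) + np.roll(m, -1, 1))
    H = J * nn + H_ext
    H[..., 2] += 2.0 * K * m[..., 2]              # uniaxial easy axis along z
    return H

def llg_step(m):
    H = effective_field(m)
    mxH = np.cross(m, H)
    dm = -(mxH + alpha * np.cross(m, mxH)) / (1.0 + alpha**2)
    m = m + dt * dm
    return m / np.linalg.norm(m, axis=-1, keepdims=True)

for _ in range(1000):
    m = llg_step(m)
print("mean z-magnetization:", m[..., 2].mean())
```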
Generative Modeling for Machine Learning on the D-Wave
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thulasidasan, Sunil
These are slides on Generative Modeling for Machine Learning on the D-Wave. The following topics are detailed: generative models; Boltzmann machines: a generative model; restricted Boltzmann machines; learning parameters: RBM training; practical ways to train RBM; D-Wave as a Boltzmann sampler; mapping RBM onto the D-Wave; Chimera restricted RBM; mapping binary RBM to Ising model; experiments; data; D-Wave effective temperature, parameters noise, etc.; experiments: contrastive divergence (CD) 1 step; after 50 steps of CD; after 100 steps of CD; D-Wave (experiments 1, 2, 3); D-Wave observations.
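The slides list restricted Boltzmann machine (RBM) training by contrastive divergence among their topics; the sketch below shows one conventional CD-1 training loop for a binary RBM in plain numpy. Layer sizes, learning rate and the random training batch are assumptions for illustration, and nothing here involves the D-Wave sampling step itself.

```python
# Minimal sketch of CD-1 training for a binary restricted Boltzmann machine,
# using plain numpy. Layer sizes, learning rate and the toy training batch
# are illustrative placeholders, not values from the slides.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 16, 8, 0.05
W = 0.01 * rng.normal(size=(n_visible, n_hidden))
a = np.zeros(n_visible)          # visible biases
b = np.zeros(n_hidden)           # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, a, b):
    # positive phase
    ph0 = sigmoid(v0 @ W + b)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # negative phase: one block-Gibbs step
    pv1 = sigmoid(h0 @ W.T + a)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b)
    # gradient estimates from the data and reconstruction statistics
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
    a += lr * (v0 - v1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)
    return W, a, b

batch = (rng.random((64, n_visible)) < 0.3).astype(float)  # toy binary data
for _ in range(200):
    W, a, b = cd1_step(batch, W, a, b)
```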
Identifying patients with Alzheimer's disease using resting-state fMRI and graph theory.
Khazaee, Ali; Ebrahimzadeh, Ata; Babajani-Feremi, Abbas
2015-11-01
Study of the brain network on the basis of resting-state functional magnetic resonance imaging (fMRI) has provided promising results for investigating changes in connectivity among different brain regions caused by disease. Graph theory can efficiently characterize different aspects of the brain network by calculating measures of integration and segregation. In this study, we combine graph theoretical approaches with advanced machine learning methods to study functional brain network alteration in patients with Alzheimer's disease (AD). A support vector machine (SVM) was used to explore the ability of graph measures in the diagnosis of AD. We applied our method to the resting-state fMRI data of twenty patients with AD and twenty age- and gender-matched healthy subjects. The data were preprocessed and each subject's graph was constructed by parcellation of the whole brain into 90 distinct regions using the automated anatomical labeling (AAL) atlas. The graph measures were then calculated and used as the discriminating features. Extracted network-based features were fed to different feature selection algorithms to choose the most significant features. In addition to the machine learning approach, statistical analysis was performed on connectivity matrices to find altered connectivity patterns in patients with AD. Using the selected features, we were able to classify patients with AD from healthy subjects with an accuracy of 100%. Results of this study show that pattern recognition applied to graph measures of the brain network, derived from resting-state fMRI data, can efficiently assist in the diagnosis of AD. Classification based on resting-state fMRI can therefore serve as a non-invasive and automatic tool for the diagnosis of Alzheimer's disease. Copyright © 2015 International Federation of Clinical Neurophysiology. All rights reserved.
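A minimal sketch of the pipeline described above, graph measures fed through feature selection into a linear SVM, is shown below; synthetic connectivity matrices and toy labels stand in for the 90-region AAL fMRI data, and the chosen graph measures and threshold are illustrative assumptions.

```python
# Minimal sketch of the graph-measure + SVM pipeline, run on synthetic
# connectivity matrices instead of the 90-region AAL fMRI data.
# Threshold, measures, labels and sample sizes are illustrative assumptions.
import numpy as np
import networkx as nx
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_regions = 40, 90

def graph_features(conn, threshold=0.7):
    """A few integration/segregation measures from a thresholded connectivity matrix."""
    adj = (np.abs(conn) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    G = nx.from_numpy_array(adj)
    return [nx.density(G), nx.average_clustering(G),
            nx.transitivity(G), nx.global_efficiency(G)]

X, y = [], []
for i in range(n_subjects):
    conn = rng.uniform(-1, 1, size=(n_regions, n_regions))
    conn = (conn + conn.T) / 2                 # symmetric toy "connectivity"
    X.append(graph_features(conn))
    y.append(i % 2)                            # 0 = control, 1 = patient (toy labels)
X, y = np.array(X), np.array(y)

clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=3), SVC(kernel="linear"))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```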
NASA Astrophysics Data System (ADS)
Parsons, M. A.; Yarmey, L.; Dillo, I.
2017-12-01
Data are the foundation of a robust, efficient, and reproducible scientific enterprise. The Research Data Alliance (RDA) is a community-driven, action-oriented, virtual organization committed to enabling open sharing and reuse of data by building social and technical bridges. The international RDA community includes almost 6000 members bringing diverse perspectives, domain knowledge, and expertise to a common table for identification of common challenges and holistic solutions. RDA members work together to identify common interests and form exploratory Interest Groups and outcome-oriented Working Groups. Participants exchange knowledge, share discoveries, discuss barriers and potential solutions, articulate policies, and align standards to enhance and facilitate global data sharing within and across domains and communities. With activities defined and led by members, RDA groups have organically been addressing issues across the full research cycle with community-ratified Recommendations and other outputs that begin to create the components of a global, data-sharing infrastructure. This paper examines how multiple RDA Recommendations can be implemented together to improve data and information discoverability, accessibility, and interconnection by both people and machines. For instance, the Persistent Identifier Types can support moving data across platforms through the Data Description Registry Interoperability framework following the RDA/WDS Publishing Data Workflows model. The Scholix initiative, which connects scholarly literature and data across numerous stakeholders, can draw on the Practical Policy best practices for machine-actionable data policies. Where appropriate, we use a case study approach built around several flagship data sets from the Deep Carbon Observatory to examine how multiple RDA Recommendations can be implemented in actual practice.
NASA Astrophysics Data System (ADS)
Wang, X.; Xu, L.
2018-04-01
One of the most important applications of remote sensing classification is water extraction. The water index (WI) based on Landsat images is one of the most common ways to distinguish water bodies from other land surface features. But conventional WI methods take into account spectral information only from a limited number of bands, and therefore the accuracy of those WI methods may be constrained in some areas which are covered with snow/ice, clouds, etc. An accurate and robust water extraction method is thus a key requirement. A support vector machine (SVM) using spectral information from all bands can reduce these classification errors to some extent. Nevertheless, the SVM, which barely considers spatial information, is relatively sensitive to noise in local regions. A conditional random field (CRF), which considers both spatial and spectral information, has proven able to compensate for these limitations. Hence, in this paper, we develop a systematic water extraction method by taking advantage of the complementarity between the SVM and a water index-guided stochastic fully-connected conditional random field (SVM-WIGSFCRF) to address the above issues. In addition, we comprehensively evaluate the reliability and accuracy of the proposed method using Landsat-8 operational land imager (OLI) images of one test site. We assess the method's performance by calculating the following accuracy metrics: Omission Errors (OE) and Commission Errors (CE); Kappa coefficient (KP) and Total Error (TE). Experimental results show that the new method can improve target detection accuracy under complex and changeable environments.
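A minimal sketch of the pixel-wise SVM stage is given below, classifying water versus non-water from all spectral bands of synthetic pixels; the water-index-guided fully-connected CRF refinement is not sketched, and the band values, labels and SVM settings are placeholders.

```python
# Minimal sketch of the pixel-wise SVM stage: classify water vs. non-water
# from all spectral bands. The synthetic "Landsat-8" band values and labels
# are placeholders; the CRF refinement stage is not sketched here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n_pixels, n_bands = 2000, 7                    # e.g. seven OLI reflective bands
X = rng.normal(size=(n_pixels, n_bands))
y = (X[:, 4] - X[:, 5] > 0).astype(int)        # toy "water" rule on two bands

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
y_hat = clf.predict(X_te)

print("kappa coefficient:", cohen_kappa_score(y_te, y_hat))
print("omission error:", 1 - (y_hat[y_te == 1] == 1).mean())     # missed water pixels
print("commission error:", 1 - (y_te[y_hat == 1] == 1).mean())   # false water pixels
```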
A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses
Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria
2013-01-01
Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is therefore an excellent tool for multi-scale simulations. PMID:23894367
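The surrogate idea can be illustrated with the following minimal sketch: a regressor is trained on a corpus of (here synthetic) Monte Carlo runs to predict the fraction of open receptors as a function of time and synapse parameters. The toy generating function, parameter ranges and choice of a random forest are assumptions for illustration, not the method's actual corpus or model.

```python
# Minimal sketch of the surrogate idea: learn, from a corpus of (synthetic)
# Monte Carlo runs, a regressor that predicts the fraction of open receptors
# as a function of time and synapse parameters. Everything here is a toy
# stand-in for the real Monte Carlo corpus.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)

def toy_mc_run(cleft_width, n_receptors, t):
    """Stand-in for one expensive Monte Carlo simulation of a synapse."""
    tau = 0.5 + 2.0 * cleft_width
    peak = n_receptors / (n_receptors + 50.0)
    return peak * (np.exp(-t / (3 * tau)) - np.exp(-t / tau)) + 0.01 * rng.normal(size=t.size)

# stage 1: data sampling over synapse parameters and time
t = np.linspace(0.0, 10.0, 50)
X, y = [], []
for _ in range(100):
    w, n = rng.uniform(0.1, 1.0), rng.integers(20, 200)
    frac = toy_mc_run(w, n, t)
    X.extend([[w, n, ti] for ti in t])
    y.extend(frac)
X, y = np.array(X), np.array(y)

# stages 2-4: fold creation, machine learning, validation
model = RandomForestRegressor(n_estimators=100, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
print("cross-validated R^2:", cross_val_score(model, X, y, cv=cv, scoring="r2").mean())
# stage 5 (curve fitting of the predicted time courses) is omitted in this sketch.
```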
Flexible software architecture for user-interface and machine control in laboratory automation.
Arutunian, E B; Meldrum, D R; Friedman, N A; Moody, S E
1998-10-01
We describe a modular, layered software architecture for automated laboratory instruments. The design consists of a sophisticated user interface, a machine controller and multiple individual hardware subsystems, each interacting through a client-server architecture built entirely on top of open Internet standards. In our implementation, the user-interface components are built as Java applets that are downloaded from a server integrated into the machine controller. The user-interface client can thereby provide laboratory personnel with a familiar environment for experiment design through a standard World Wide Web browser. Data management and security are seamlessly integrated at the machine-controller layer using QNX, a real-time operating system. This layer also controls hardware subsystems through a second client-server interface. This architecture has proven flexible and relatively easy to implement and allows users to operate laboratory automation instruments remotely through an Internet connection. The software architecture was implemented and demonstrated on the Acapella, an automated fluid-sample-processing system that is under development at the University of Washington.
Hardware Acceleration of Adaptive Neural Algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, Conrad D.
As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.
Homopolar machine for reversible energy storage and transfer systems
Stillwagon, Roy E.
1978-01-01
A homopolar machine designed to operate as a generator and motor in reversibly storing and transferring energy between the machine and a magnetic load coil for a thermo-nuclear reactor. The machine rotor comprises hollow thin-walled cylinders or sleeves which form the basis of the system by utilizing substantially all of the rotor mass as a conductor thus making it possible to transfer substantially all the rotor kinetic energy electrically to the load coil in a highly economical and efficient manner. The rotor is divided into multiple separate cylinders or sleeves of modular design, connected in series and arranged to rotate in opposite directions but maintain the supply of current in a single direction to the machine terminals. A stator concentrically disposed around the sleeves consists of a hollow cylinder having a number of excitation coils each located radially outward from the ends of adjacent sleeves. Current collected at an end of each sleeve by sleeve slip rings and brushes is transferred through terminals to the magnetic load coil. Thereafter, electrical energy returned from the coil then flows through the machine which causes the sleeves to motor up to the desired speed in preparation for repetition of the cycle. To eliminate drag on the rotor between current pulses, the brush rigging is designed to lift brushes from all slip rings in the machine.
Homopolar machine for reversible energy storage and transfer systems
Stillwagon, Roy E.
1981-01-01
A homopolar machine designed to operate as a generator and motor in reversibly storing and transferring energy between the machine and a magnetic load coil for a thermo-nuclear reactor. The machine rotor comprises hollow thin-walled cylinders or sleeves which form the basis of the system by utilizing substantially all of the rotor mass as a conductor thus making it possible to transfer substantially all the rotor kinetic energy electrically to the load coil in a highly economical and efficient manner. The rotor is divided into multiple separate cylinders or sleeves of modular design, connected in series and arranged to rotate in opposite directions but maintain the supply of current in a single direction to the machine terminals. A stator concentrically disposed around the sleeves consists of a hollow cylinder having a number of excitation coils each located radially outward from the ends of adjacent sleeves. Current collected at an end of each sleeve by sleeve slip rings and brushes is transferred through terminals to the magnetic load coil. Thereafter, electrical energy returned from the coil then flows through the machine which causes the sleeves to motor up to the desired speed in preparation for repetition of the cycle. To eliminate drag on the rotor between current pulses, the brush rigging is designed to lift brushes from all slip rings in the machine.
Luo, Wei; Phung, Dinh; Tran, Truyen; Gupta, Sunil; Rana, Santu; Karmakar, Chandan; Shilton, Alistair; Yearwood, John; Dimitrova, Nevenka; Ho, Tu Bao; Venkatesh, Svetha; Berk, Michael
2016-12-16
As more and more researchers are turning to big data for new opportunities for biomedical discovery, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. The objective was to attain a set of guidelines on the use of machine learning predictive models within clinical settings, to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians was interviewed, using an iterative process in accordance with the Delphi method. The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. ©Wei Luo, Dinh Phung, Truyen Tran, Sunil Gupta, Santu Rana, Chandan Karmakar, Alistair Shilton, John Yearwood, Nevenka Dimitrova, Tu Bao Ho, Svetha Venkatesh, Michael Berk. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 16.12.2016.
Synthesis of actual knowledge on machine-tool monitoring methods and equipment
NASA Astrophysics Data System (ADS)
Tanguy, J. C.
1988-06-01
Problems connected with the automatic supervision of production were studied. Many different automatic control devices are now able to identify defects in the tools, but the solutions proposed to detect optimal limits in the utilization of a tool are not satisfactory.
Sacchet, Matthew D; Prasad, Gautam; Foland-Ross, Lara C; Thompson, Paul M; Gotlib, Ian H
2014-04-01
Graph theory is increasingly used in the field of neuroscience to understand the large-scale network structure of the human brain. There is also considerable interest in applying machine learning techniques in clinical settings, for example, to make diagnoses or predict treatment outcomes. Here we used support-vector machines (SVMs), in conjunction with whole-brain tractography, to identify graph metrics that best differentiate individuals with Major Depressive Disorder (MDD) from nondepressed controls. To do this, we applied a novel feature-scoring procedure that incorporates iterative classifier performance to assess feature robustness. We found that small-worldness, a measure of the balance between global integration and local specialization, most reliably differentiated MDD from nondepressed individuals. Post-hoc regional analyses suggested that heightened connectivity of the subcallosal cingulate gyrus (SCG) in MDDs contributes to these differences. The current study provides a novel way to assess the robustness of classification features and reveals anomalies in large-scale neural networks in MDD.
Mutual Authentication Scheme in Secure Internet of Things Technology for Comfortable Lifestyle
Park, Namje; Kang, Namhi
2015-01-01
The Internet of Things (IoT), which can be regarded as an enhanced version of machine-to-machine communication technology, was proposed to realize intelligent thing-to-thing communications by utilizing the Internet connectivity. In the IoT, “things” are generally heterogeneous and resource constrained. In addition, such things are connected to each other over low-power and lossy networks. In this paper, we propose an inter-device authentication and session-key distribution system for devices with only encryption modules. In the proposed system, unlike existing sensor-network environments where the key distribution center distributes the key, each sensor node is involved with the generation of session keys. In addition, in the proposed scheme, the performance is improved so that the authenticated device can calculate the session key in advance. The proposed mutual authentication and session-key distribution system can withstand replay attacks, man-in-the-middle attacks, and wiretapped secret-key attacks. PMID:26712759
Dissecting psychiatric spectrum disorders by generative embedding☆☆☆
Brodersen, Kay H.; Deserno, Lorenz; Schlagenhauf, Florian; Lin, Zhihao; Penny, Will D.; Buhmann, Joachim M.; Stephan, Klaas E.
2013-01-01
This proof-of-concept study examines the feasibility of defining subgroups in psychiatric spectrum disorders by generative embedding, using dynamical system models which infer neuronal circuit mechanisms from neuroimaging data. To this end, we re-analysed an fMRI dataset of 41 patients diagnosed with schizophrenia and 42 healthy controls performing a numerical n-back working-memory task. In our generative-embedding approach, we used parameter estimates from a dynamic causal model (DCM) of a visual–parietal–prefrontal network to define a model-based feature space for the subsequent application of supervised and unsupervised learning techniques. First, using a linear support vector machine for classification, we were able to predict individual diagnostic labels significantly more accurately (78%) from DCM-based effective connectivity estimates than from functional connectivity between (62%) or local activity within the same regions (55%). Second, an unsupervised approach based on variational Bayesian Gaussian mixture modelling provided evidence for two clusters which mapped onto patients and controls with nearly the same accuracy (71%) as the supervised approach. Finally, when restricting the analysis only to the patients, Gaussian mixture modelling suggested the existence of three patient subgroups, each of which was characterised by a different architecture of the visual–parietal–prefrontal working-memory network. Critically, even though this analysis did not have access to information about the patients' clinical symptoms, the three neurophysiologically defined subgroups mapped onto three clinically distinct subgroups, distinguished by significant differences in negative symptom severity, as assessed on the Positive and Negative Syndrome Scale (PANSS). In summary, this study provides a concrete example of how psychiatric spectrum diseases may be split into subgroups that are defined in terms of neurophysiological mechanisms specified by a generative model of network dynamics such as DCM. The results corroborate our previous findings in stroke patients that generative embedding, compared to analyses of more conventional measures such as functional connectivity or regional activity, can significantly enhance both the interpretability and performance of computational approaches to clinical classification. PMID:24363992
[On machines and instruments (II): the world in the eye of the work of E. T. A. Hoffmann].
Montiel, L
2008-01-01
Continuing with the subject of the previous work, this article considers the whole series of problems connected to the question of vision provoked by the mere existence of the body of the automaton. The eyes of the android and, above all, the reactions aroused by looking at these human-shaped machines are the object of Hoffmann's reflections, from a viewpoint apparently firmly set within the Goethean concept of the
Rubber hose surface defect detection system based on machine vision
NASA Astrophysics Data System (ADS)
Meng, Fanwu; Ren, Jingrui; Wang, Qi; Zhang, Teng
2018-01-01
As an important part connecting the engine, air filter, cooling system and automobile air-conditioning system, automotive hose is widely used in automobiles. The determination of the surface quality of the hose is therefore particularly important. This research is based on machine vision technology, using HALCON algorithms to process hose images and identify surface defects on the hose. In order to improve the detection accuracy of the vision system, this paper proposes a method to classify the defects and thereby reduce misjudgment. The experimental results show that the method can detect surface defects accurately.
French wind generator systems. [as auxiliary power sources for electrical networks
NASA Technical Reports Server (NTRS)
Noel, J. M.
1973-01-01
The experimental design of a wind driven generator with a rated power of 800 kilovolt amperes and capable of being connected to the main electrical network is reported. The rotor is a three bladed propeller; each blade is twisted but the fixed pitch is adjustable. The asynchronous 800-kilovolt ampere generator is driven by the propeller through a gearbox. A dissipating resistor regulates the machine under no-load conditions. The first propeller on the machine lasted 18 months; replacement of the rigid propeller with a flexible structure resulted in breakdown due to flutter effects.
PMLB: a large benchmark suite for machine learning evaluation and comparison.
Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H
2017-01-01
The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
SSL - THE SIMPLE SOCKETS LIBRARY
NASA Technical Reports Server (NTRS)
Campbell, C. E.
1994-01-01
The Simple Sockets Library (SSL) allows C programmers to develop systems of cooperating programs using Berkeley streaming Sockets running under the TCP/IP protocol over Ethernet. The SSL provides a simple way to move information between programs running on the same or different machines and does so with little overhead. The SSL can create three types of Sockets: namely a server, a client, and an accept Socket. The SSL's Sockets are designed to be used in a fashion reminiscent of the use of FILE pointers so that a C programmer who is familiar with reading and writing files will immediately feel comfortable with reading and writing with Sockets. The SSL consists of three parts: the library, PortMaster, and utilities. The user of the SSL accesses it by linking programs to the SSL library. The PortMaster initializes connections between clients and servers. The PortMaster also supports a "firewall" facility to keep out socket requests from unapproved machines. The "firewall" is a file which contains Internet addresses for all approved machines. There are three utilities provided with the SSL. SKTDBG can be used to debug programs that make use of the SSL. SPMTABLE lists the servers and port numbers on requested machine(s). SRMSRVR tells the PortMaster to forcibly remove a server name from its list. The package also includes two example programs: multiskt.c, which makes multiple accepts on one server, and sktpoll.c, which repeatedly attempts to connect a client to some server at one second intervals. SSL is a machine independent library written in the C-language for computers connected via Ethernet using the TCP/IP protocol. It has been successfully compiled and implemented on a variety of platforms, including Sun series computers running SunOS, DEC VAX series computers running VMS, SGI computers running IRIX, DECstations running ULTRIX, DEC alpha AXPs running OSF/1, IBM RS/6000 computers running AIX, IBM PC and compatibles running BSD/386 UNIX and HP Apollo 3000/4000/9000/400T computers running HP-UX. SSL requires 45K of RAM to run under SunOS and 80K of RAM to run under VMS. For use on IBM PC series computers and compatibles running DOS, SSL requires Microsoft C 6.0 and the Wollongong TCP/IP package. Source code for sample programs and debugging tools are provided. The documentation is available on the distribution medium in TeX and PostScript formats. The standard distribution medium for SSL is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format and a 5.25 inch 360K MS-DOS format diskette. The SSL was developed in 1992 and was updated in 1993.
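The client/server pattern that the SSL wraps can be illustrated with the short sketch below, which uses Python's standard socket module rather than the library's own C API (whose function names are not given in this summary); the host, port and echo behaviour are illustrative assumptions.

```python
# Minimal sketch of the client/server pattern the SSL wraps, using Python's
# standard socket module rather than the library's own C API (whose calls are
# not documented in this summary). Host and port are illustrative.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5599

def echo_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()          # analogue of an "accept Socket"
        with conn:
            data = conn.recv(1024)
            conn.sendall(data)          # echo the message back

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)                         # give the server a moment to start

# client side: connect, write and read in a file-like style
with socket.create_connection((HOST, PORT)) as client:
    client.sendall(b"hello over TCP/IP")
    print(client.recv(1024).decode())
```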
AHaH computing-from metastable switches to attractors to machine learning.
Nugent, Michael Alexander; Molter, Timothy Wesley
2014-01-01
Modern computing architecture based on the separation of memory and processing leads to a well known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures-all key capabilities of biological nervous systems and modern machine learning algorithms with real world application.
AHaH Computing–From Metastable Switches to Attractors to Machine Learning
Nugent, Michael Alexander; Molter, Timothy Wesley
2014-01-01
Modern computing architecture based on the separation of memory and processing leads to a well known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures–all key capabilities of biological nervous systems and modern machine learning algorithms with real world application. PMID:24520315
The Efficacy of Machine Learning Programs for Navy Manpower Analysis
1993-03-01
This thesis investigated the efficacy of two machine learning programs for Navy manpower analysis. Two machine learning programs, AIM and IXL, were...to generate models from the two commercial machine learning programs. Using a held-out sub-set of the data, the capabilities of the three models were...partial effects. The author recommended further investigation of AIM's capabilities, and testing in an operational environment.... Keywords: machine learning, AIM, IXL.
A comparison of machine learning and Bayesian modelling for molecular serotyping.
Newton, Richard; Wernisch, Lorenz
2017-08-11
Streptococcus pneumoniae is a human pathogen that is a major cause of infant mortality. Identifying the pneumococcal serotype is an important step in monitoring the impact of vaccines used to protect against disease. Genomic microarrays provide an effective method for molecular serotyping. Previously we developed an empirical Bayesian model for the classification of serotypes from a molecular serotyping array. With only a few samples available, a model-driven approach was the only option. In the meantime, several thousand samples have been made available to us, providing an opportunity to investigate serotype classification by machine learning methods, which could complement the Bayesian model. We compare the performance of the original Bayesian model with two machine learning algorithms: Gradient Boosting Machines and Random Forests. We present our results as an example of a generic strategy whereby a preliminary probabilistic model is complemented or replaced by a machine learning classifier once enough data are available. Despite the availability of thousands of serotyping arrays, a problem encountered when applying machine learning methods is the lack of training data containing mixtures of serotypes, owing to the large number of possible combinations. Most of the available training data comprises samples with only a single serotype. To overcome the lack of training data we implemented an iterative analysis, creating artificial training data of serotype mixtures by combining raw data from single serotype arrays. With the enhanced training set, the machine learning algorithms outperform the original Bayesian model. However, for serotypes currently lacking sufficient training data the best performing implementation was a combination of the results of the Bayesian model and the Gradient Boosting Machine. As well as being an effective method for classifying biological data, machine learning can also be used as an efficient method for revealing subtle biological insights, which we illustrate with an example.
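A minimal sketch of the comparison and of the artificial-mixture trick is shown below using scikit-learn; the probe intensities, serotype prototypes and mixture labelling are synthetic placeholders rather than real array data.

```python
# Minimal sketch: compare Gradient Boosting and Random Forest classifiers on
# array data, with extra "mixture" training samples built by combining
# single-serotype arrays. All intensities, probe counts and labels are
# synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_probes, n_serotypes, per_class = 50, 4, 30

# single-serotype arrays: each serotype lights up its own probe block
prototypes = np.zeros((n_serotypes, n_probes))
for s in range(n_serotypes):
    prototypes[s, s * 10:(s + 1) * 10] = 1.0

X = np.vstack([prototypes[s] + 0.2 * rng.normal(size=n_probes)
               for s in range(n_serotypes) for _ in range(per_class)])
y = np.repeat(np.arange(n_serotypes), per_class)

# artificial two-serotype mixtures from raw single-serotype arrays
# (labelled here by their dominant serotype for simplicity)
mix_X, mix_y = [], []
for _ in range(60):
    a, b = rng.choice(n_serotypes, size=2, replace=False)
    mix_X.append(0.7 * prototypes[a] + 0.3 * prototypes[b] + 0.2 * rng.normal(size=n_probes))
    mix_y.append(a)
X = np.vstack([X, np.array(mix_X)])
y = np.concatenate([y, np.array(mix_y)])

for clf in (GradientBoostingClassifier(random_state=0),
            RandomForestClassifier(n_estimators=200, random_state=0)):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, "accuracy:", round(acc, 3))
```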
Exploring cluster Monte Carlo updates with Boltzmann machines
NASA Astrophysics Data System (ADS)
Wang, Lei
2017-11-01
Boltzmann machines are physics informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applying the Boltzmann machines back to physics, they are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
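The core idea, using the generative sampling of a Boltzmann machine as a Metropolis-Hastings proposal for a physical model, can be sketched as follows for a small 2D Ising model. The RBM weights here are random placeholders (in practice they would be trained on Ising configurations so that proposals resemble cluster updates); the acceptance rule uses the RBM free energy so that detailed balance with respect to the Ising distribution is preserved.

```python
# Minimal sketch: a restricted Boltzmann machine as the proposal distribution
# in a Metropolis-Hastings update of a small 2D Ising model. The RBM weights
# are random placeholders standing in for a trained model; the acceptance
# ratio corrects for the proposal via the RBM free energy.
import numpy as np

rng = np.random.default_rng(0)
L, beta = 4, 0.4
n_vis, n_hid = L * L, 8
W = 0.1 * rng.normal(size=(n_vis, n_hid))
a = np.zeros(n_vis)
b = np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ising_energy(v):
    s = (2 * v - 1).reshape(L, L)               # spins in {-1, +1}, periodic lattice
    return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

def free_energy(v):
    # F(v) = -a.v - sum_j log(1 + exp(b_j + (v W)_j)); p_RBM(v) ~ exp(-F(v))
    return -a @ v - np.sum(np.logaddexp(0.0, b + v @ W))

def rbm_proposal(v):
    h = (rng.random(n_hid) < sigmoid(b + v @ W)).astype(float)
    return (rng.random(n_vis) < sigmoid(a + h @ W.T)).astype(float)

v = (rng.random(n_vis) < 0.5).astype(float)
accepted = 0
for _ in range(5000):
    v_new = rbm_proposal(v)
    # block Gibbs satisfies detailed balance w.r.t. the RBM marginal, so the
    # proposal ratio reduces to the free-energy difference
    log_acc = (-beta * (ising_energy(v_new) - ising_energy(v))
               + free_energy(v_new) - free_energy(v))
    if np.log(rng.random()) < log_acc:
        v, accepted = v_new, accepted + 1
print("acceptance rate:", accepted / 5000)
```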
Using machine learning for sequence-level automated MRI protocol selection in neuroradiology.
Brown, Andrew D; Marotta, Thomas R
2018-05-01
Incorrect imaging protocol selection can lead to important clinical findings being missed, contributing to both wasted health care resources and patient harm. We present a machine learning method for analyzing the unstructured text of clinical indications and patient demographics from magnetic resonance imaging (MRI) orders to automatically protocol MRI procedures at the sequence level. We compared 3 machine learning models - support vector machine, gradient boosting machine, and random forest - to a baseline model that predicted the most common protocol for all observations in our test set. The gradient boosting machine model significantly outperformed the baseline and demonstrated the best performance of the 3 models in terms of accuracy (95%), precision (86%), recall (80%), and Hamming loss (0.0487). This demonstrates the feasibility of automating sequence selection by applying machine learning to MRI orders. Automated sequence selection has important safety, quality, and financial implications and may facilitate improvements in the quality and safety of medical imaging service delivery.
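A minimal sketch of sequence-level protocol prediction from free-text orders is shown below, using TF-IDF features with a one-vs-rest gradient boosting classifier and the Hamming loss; the order texts, sequence labels and tiny data set are invented placeholders, not the study's data.

```python
# Minimal sketch of sequence-level protocol prediction from free-text MRI
# orders, using TF-IDF features and a one-vs-rest gradient boosting classifier.
# Order texts, sequence labels and the tiny data set are invented placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import hamming_loss
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

orders = ["headache rule out mass", "seizure new onset", "ms follow up",
          "stroke like symptoms", "pituitary adenoma follow up",
          "headache chronic", "seizure breakthrough", "ms new lesions"]
# multi-label targets: columns = [T1, T2/FLAIR, DWI, contrast] (toy labels)
Y = np.array([[1, 1, 1, 1], [1, 1, 0, 1], [0, 1, 0, 1], [1, 1, 1, 0],
              [1, 0, 0, 1], [0, 1, 1, 0], [1, 1, 0, 1], [1, 1, 0, 0]])

clf = make_pipeline(TfidfVectorizer(),
                    OneVsRestClassifier(GradientBoostingClassifier(random_state=0)))
clf.fit(orders, Y)
print("predicted sequences:", clf.predict(["new onset seizure in adult"]))
print("training Hamming loss:", hamming_loss(Y, clf.predict(orders)))
```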
Learning About Climate and Atmospheric Models Through Machine Learning
NASA Astrophysics Data System (ADS)
Lucas, D. D.
2017-12-01
From the analysis of ensemble variability to improving simulation performance, machine learning algorithms can play a powerful role in understanding the behavior of atmospheric and climate models. To learn about model behavior, we create training and testing data sets through ensemble techniques that sample different model configurations and values of input parameters, and then use supervised machine learning to map the relationships between the inputs and outputs. Following this procedure, we have used support vector machines, random forests, gradient boosting and other methods to investigate a variety of atmospheric and climate model phenomena. We have used machine learning to predict simulation crashes, estimate the probability density function of climate sensitivity, optimize simulations of the Madden Julian oscillation, assess the impacts of weather and emissions uncertainty on atmospheric dispersion, and quantify the effects of model resolution changes on precipitation. This presentation highlights recent examples of our applications of machine learning to improve the understanding of climate and atmospheric models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
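The ensemble-plus-supervised-learning workflow can be illustrated with the minimal sketch below: sample model input parameters, run (here fake) simulations, and learn the input-output map, for example to predict simulation crashes. The parameters, crash rule and choice of a random forest are illustrative assumptions.

```python
# Minimal sketch of the workflow: sample model input parameters, run (fake)
# simulations, and learn the map from inputs to outcomes. The parameters,
# "crash" rule and model choice are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_runs = 500
params = rng.random((n_runs, 3))         # three tunable parameters, scaled to [0, 1]

# stand-in for running the climate model: runs "crash" in an awkward corner
# of parameter space, plus a little noise
crashed = ((params[:, 0] > 0.8) & (params[:, 1] < 0.2)) | (rng.random(n_runs) < 0.05)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("crash-prediction accuracy:", cross_val_score(clf, params, crashed, cv=5).mean())
clf.fit(params, crashed)
print("parameter importances:", clf.feature_importances_)
```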
NASA Astrophysics Data System (ADS)
Wu, Huaying; Wang, Li Zhong; Wang, Yantao; Yuan, Xiaolei
2018-05-01
The blade or surface-grinding blade of a hypervelocity grinding wheel may be damaged when the spindle of the machine rotates at too high a rate, and may then fly out. Travelling as a projectile, it may severely endanger personnel in the field. A critical-thickness model for the protective plate of the high-speed machine is studied in this paper. For ease of analysis, the shapes of the possible impact objects flying from the high-speed machine are simplified into a sharp-nose model, a ball-nose model and a flat-nose model, whose front-end shapes represent point, line and surface contact, respectively. Impact analysis based on the Johnson-Cook (J-C) model is performed for low-carbon steel plates of different thicknesses. A computational model for the critical thickness of the protective plate of a high-speed machine is established from the damage characteristics of the thin plate, relating plate thickness to the mass, shape, size and impact speed of the impacting object. An air cannon is used for the impact tests, and the accuracy of the model is validated. This model can guide the selection of the thickness of the single-layer outer protective plate of a high-speed machine.
NASA Astrophysics Data System (ADS)
van der Linden, Joost H.; Narsilio, Guillermo A.; Tordesillas, Antoinette
2016-08-01
We present a data-driven framework to study the relationship between fluid flow at the macroscale and the internal pore structure, across the micro- and mesoscales, in porous, granular media. Sphere packings with varying particle size distribution and confining pressure are generated using the discrete element method. For each sample, a finite element analysis of the fluid flow is performed to compute the permeability. We construct a pore network and a particle contact network to quantify the connectivity of the pores and particles across the mesoscopic spatial scales. Machine learning techniques for feature selection are employed to identify sets of microstructural properties and multiscale complex network features that optimally characterize permeability. We find a linear correlation (in log-log scale) between permeability and the average closeness centrality of the weighted pore network. With the pore network links weighted by the local conductance, the average closeness centrality represents a multiscale measure of efficiency of flow through the pore network in terms of the mean geodesic distance (or shortest path) between all pore bodies in the pore network. Specifically, this study objectively quantifies a hypothesized link between high permeability and efficient shortest paths that thread through relatively large pore bodies connected to each other by high conductance pore throats, embodying connectivity and pore structure.
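The network measure highlighted above can be computed as in the following minimal sketch; because networkx derives closeness centrality from shortest-path distances, each throat is given a length equal to the reciprocal of its conductance. The toy geometric graph and lognormal conductances are placeholders for the DEM-generated packings and finite element flow solutions in the study.

```python
# Minimal sketch of the highlighted measure: average closeness centrality of a
# pore network whose links are weighted by local conductance. networkx computes
# closeness from shortest-path *distances*, so each throat is assigned a length
# of 1/conductance. The toy network and conductances are placeholders.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
G = nx.random_geometric_graph(60, radius=0.25, seed=0)   # toy pore bodies and throats
for u, v in G.edges():
    conductance = rng.lognormal(mean=0.0, sigma=0.5)
    G[u][v]["resistance"] = 1.0 / conductance            # distance-like edge weight

closeness = nx.closeness_centrality(G, distance="resistance")
avg_closeness = np.mean(list(closeness.values()))
print("average weighted closeness centrality:", avg_closeness)
# In the study, log(permeability) from the finite element flow solution is then
# related to this average closeness across many sphere packings.
```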
Sato, João Ricardo; Biazoli, Claudinei Eduardo; Salum, Giovanni Abrahão; Gadelha, Ary; Crossley, Nicolas; Vieira, Gilson; Zugman, André; Picon, Felipe Almeida; Pan, Pedro Mario; Hoexter, Marcelo Queiroz; Amaro, Edson; Anés, Mauricio; Moura, Luciana Monteiro; Del'Aquilla, Marco Antonio Gomes; Mcguire, Philip; Rohde, Luis Augusto; Miguel, Euripedes Constantino; Jackowski, Andrea Parolin; Bressan, Rodrigo Affonseca
2018-03-01
One of the major challenges facing psychiatry is how to incorporate biological measures in the classification of mental health disorders. Many of these disorders affect brain development and its connectivity. In this study, we propose a novel method for assessing brain networks based on the combination of a graph theory measure (eigenvector centrality) and a one-class support vector machine (OC-SVM). We applied this approach to resting-state fMRI data from 622 children and adolescents. Eigenvector centrality (EVC) of nodes from positive- and negative-task networks were extracted from each subject and used as input to an OC-SVM to label individual brain networks as typical or atypical. We hypothesised that classification of these subjects regarding the pattern of brain connectivity would predict the level of psychopathology. Subjects with atypical brain network organisation had higher levels of psychopathology (p < 0.001). There was a greater EVC in the typical group at the bilateral posterior cingulate and bilateral posterior temporal cortices; and significant decreases in EVC at left temporal pole. The combination of graph theory methods and an OC-SVM is a promising method to characterise neurodevelopment, and may be useful to understand the deviations leading to mental disorders.
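A minimal sketch of the proposed combination, eigenvector centrality per node as the feature vector and a one-class SVM to flag atypical network organisation, is given below; the connectivity matrices, network sizes and the nu parameter are synthetic placeholders for the resting-state fMRI networks.

```python
# Minimal sketch: eigenvector centrality per node as the feature vector, then a
# one-class SVM fitted on the cohort to flag atypical network organisation.
# Connectivity matrices and sizes are synthetic placeholders.
import numpy as np
import networkx as nx
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
n_subjects, n_nodes = 100, 30

X = []
for _ in range(n_subjects):
    conn = np.abs(rng.normal(size=(n_nodes, n_nodes)))
    conn = (conn + conn.T) / 2
    np.fill_diagonal(conn, 0.0)
    G = nx.from_numpy_array(conn)
    evc = nx.eigenvector_centrality_numpy(G, weight="weight")
    X.append([evc[i] for i in range(n_nodes)])
X = np.array(X)

oc_svm = OneClassSVM(nu=0.1, kernel="rbf", gamma="scale").fit(X)
labels = oc_svm.predict(X)          # +1 = typical, -1 = atypical
print("flagged as atypical:", int((labels == -1).sum()), "of", n_subjects)
# In the study, psychopathology scores are then compared between the two groups.
```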
Impact of wind generator infed on dynamic performance of a power system
NASA Astrophysics Data System (ADS)
Alam, Md. Ahsanul
Wind energy is one of the most prominent sources of electrical energy in the years to come. A tendency to increase the amount of electricity generation from wind turbines can be observed in many countries. One of the major concerns related to the high penetration level of wind energy into the existing power grid is its influence on power system dynamic performance. In this thesis, the impact of the wind generation system on power system dynamic performance is investigated through detailed dynamic modeling of the entire wind generator system considering all the relevant components. Nonlinear and linear models of a single machine as well as multimachine wind-AC systems have been derived. For the dynamic model of the integrated wind-AC system, a general transformation matrix is determined for the transformation of machine and network quantities to a common reference frame. Both time-domain and frequency-domain analyses on single machine and multimachine systems have been carried out. The multimachine systems considered are a 4-machine 12-bus system and the 10-machine 39-bus New England system. Through eigenvalue analysis, the impact of the asynchronous wind system on overall network damping has been quantified and the modes responsible for instability have been identified. Through a number of simulation studies it is observed that, for an induction generator based wind generation system, the fixed capacitor located at the generator terminal cannot normally cater for the reactive power demand during transient disturbances such as wind gusts and faults on the system. For a weak network connection, system instability may be initiated by collapse of the induction generator terminal voltage under certain disturbance conditions. Incorporation of a dynamic reactive power compensation scheme, through either variable susceptance control or a static compensator (STATCOM), is found to improve the dynamic performance significantly. Further improvement in the transient profile is achieved by supporting the STATCOM with bulk energy storage devices. Two types of energy storage system (ESS) have been considered: a battery energy storage system and a supercapacitor based energy storage system. A decoupled P -- Q control strategy has been implemented on the STATCOM/ESS. It is observed that wind generators, when supported by the STATCOM/ESS, can achieve significant withstand capability in the presence of a grid fault of reasonable duration. The generator experiences almost negligible rotor speed variation, maintains constant terminal voltage, and resumes delivery of smoothed (almost transient free) power to the grid immediately after the fault is cleared. Keywords: Wind energy, induction generator, dynamic performance of wind generators, energy storage system, decoupled P -- Q control, multimachine system.
Cosmic logic: a computational model
NASA Astrophysics Data System (ADS)
Vanchurin, Vitaly
2016-02-01
We initiate a formal study of logical inferences in context of the measure problem in cosmology or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape and CM machines take CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO's Turing number as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal that halts in finite time and immortal that runs forever. In context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.
Mapping the convergent temporal epileptic network in left and right temporal lobe epilepsy.
Fang, Peng; An, Jie; Zeng, Ling-Li; Shen, Hui; Qiu, Shijun; Hu, Dewen
2017-02-03
Left and right mesial temporal lobe epilepsy (mTLE) with hippocampal sclerosis (HS) exhibits similar functional and clinical dysfunctions, such as depressive mood and emotional dysregulation, implying that the left and right mTLE may share a common network substrate. However, the convergent anatomical network disruption between the left and right HS remains largely uncharacterized. This study aimed to investigate whether the left and right mTLE share a similar anatomical network. We examined 43 (22 left, 21 right) mTLE patients with HS and 39 healthy controls using diffusion tensor imaging. Machine learning approaches were applied to extract the abnormal anatomical connectivity patterns in both the left and right mTLE. The left and right mTLE showed that 28 discriminating connections were exactly the same when compared to the controls. The same 28 connections showed high discriminating power in comparisons of the left mTLE versus controls (91.7%) and the right mTLE versus controls (90.0%); however, these connections failed to discriminate the left from the right mTLE. These discriminating connections, which were diminished both in the left and right mTLE, were primarily located in the limbic-frontal network, partially agreeing with the limbic-frontal dysregulation model of depression. These findings suggest that left and right mTLE share a convergent circuit, which may account for the mood and emotional deficits in mTLE and may suggest the neuropathological mechanisms underlying the comorbidity of depression and mTLE. Copyright © 2016. Published by Elsevier B.V.
Cyber-workstation for computational neuroscience.
Digiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C; Fortes, Jose; Sanchez, Justin C
2010-01-01
A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper describes briefly the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavior task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface.
Cyber-Workstation for Computational Neuroscience
DiGiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C.; Fortes, Jose; Sanchez, Justin C.
2009-01-01
A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper describes briefly the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavior task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface. PMID:20126436
Automation of energy demand forecasting
NASA Astrophysics Data System (ADS)
Siddique, Sanzad
Automation of energy demand forecasting saves time and effort by searching automatically for an appropriate model in a candidate model space without manual intervention. This thesis introduces a search-based approach that improves the performance of the model searching process for econometrics models. Further improvements in the accuracy of the energy demand forecasting are achieved by integrating nonlinear transformations within the models. This thesis introduces machine learning techniques that are capable of modeling such nonlinearity. Algorithms for learning domain knowledge from time series data using the machine learning methods are also presented. The novel search based approach and the machine learning models are tested with synthetic data as well as with natural gas and electricity demand signals. Experimental results show that the model searching technique is capable of finding an appropriate forecasting model. Further experimental results demonstrate an improved forecasting accuracy achieved by using the novel machine learning techniques introduced in this thesis. This thesis presents an analysis of how the machine learning techniques learn domain knowledge. The learned domain knowledge is used to improve the forecast accuracy.
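Automated model search of the kind described can be illustrated with the following minimal sketch, in which several candidate forecasting models, including nonlinear machine-learning ones, are scored by time-series cross-validation on lagged demand features and the best is retained; the synthetic demand series and the candidate set are illustrative assumptions, not the thesis's search space.

```python
# Minimal sketch of automated model search: several candidate forecasting
# models are scored by time-series cross-validation on lagged demand features
# and the best one is kept. The synthetic demand series and candidate set are
# illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import TimeSeriesSplit, cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(400)
demand = 100 + 0.05 * t + 10 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 2, t.size)

lags = 4                                   # simple autoregressive features
X = np.column_stack([demand[i:-(lags - i)] for i in range(lags)])
y = demand[lags:]

candidates = {
    "linear AR": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "neural net": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
cv = TimeSeriesSplit(n_splits=5)
scores = {name: cross_val_score(m, X, y, cv=cv, scoring="neg_mean_absolute_error").mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```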
Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models
2015-09-12
AFRL-AFOSR-VA-TR-2015-0278. Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models. Katya Scheinberg. Grant number: FA9550-11-1-0239. Subject terms: optimization, derivative-free optimization, statistical machine learning.
"Tactic": Traffic Aware Cloud for Tiered Infrastructure Consolidation
ERIC Educational Resources Information Center
Sangpetch, Akkarit
2013-01-01
Large-scale enterprise applications are deployed as distributed applications. These applications consist of many inter-connected components with heterogeneous roles and complex dependencies. Each component typically consumes 5-15% of the server capacity. Deploying each component as a separate virtual machine (VM) allows us to consolidate the…
ERIC Educational Resources Information Center
Nickerson, Gord
1991-01-01
Describes the use and applications of the communications program Telnet for remote log-in, a basic interactive resource sharing service that enables users to connect to any machine on the Internet and conduct a session. The Virtual Terminal--the central component of Telnet--is also described, as well as problems with terminals, services…
NASA Astrophysics Data System (ADS)
Yu, Jianbo
2015-12-01
Prognostics is highly effective for achieving zero-downtime performance, maximum productivity and proactive maintenance of machines. Prognostics intends to assess and predict the time evolution of machine health degradation so that machine failures can be predicted and prevented. A novel prognostics system is developed based on the data-model-fusion scheme using the Bayesian inference-based self-organizing map (SOM) and an integration of logistic regression (LR) and high-order particle filtering (HOPF). In this prognostics system, a baseline SOM is constructed to model the data distribution space of a healthy machine under the assumption that predictable fault patterns are not available. A Bayesian inference-based probability (BIP) derived from the baseline SOM is developed as a quantitative indication of machine health degradation. BIP is capable of offering a failure probability for the monitored machine, which has an intuitive interpretation related to the health degradation state. Based on these historic BIPs, the constructed LR and its modeling noise constitute a high-order Markov process (HOMP) to describe machine health propagation. HOPF is used to solve the HOMP estimation and predict the evolution of machine health in the form of a probability density function (PDF). An on-line model update scheme is developed to quickly adapt the Markov process to changes in machine health dynamics. The experimental results on a bearing test-bed illustrate the potential applications of the proposed system as an effective and simple tool for machine health prognostics.
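A much simplified stand-in for the baseline-plus-health-indicator idea is sketched below: a KMeans codebook replaces the SOM baseline, and the quantization error of new samples is squashed into a rough degradation score. The feature values and scaling constants are invented for illustration; the paper's BIP and high-order particle filter are not reproduced here.

```python
# Simplified sketch of a data-driven health indicator: a KMeans codebook stands
# in for the baseline SOM, and the distance of new samples to the codebook
# (quantization error) is squashed into a [0, 1] degradation score.
# Feature values and scaling constants are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(500, 4))    # baseline vibration features
monitored = rng.normal(0.8, 1.2, size=(50, 4))   # later samples, drifting away

baseline = KMeans(n_clusters=16, n_init=10, random_state=0).fit(healthy)

def degradation_score(x, model, scale=1.0):
    """Distance to the nearest baseline prototype mapped to (0, 1)."""
    d = np.min(np.linalg.norm(model.cluster_centers_ - x, axis=1))
    return 1.0 / (1.0 + np.exp(-(d - scale)))    # logistic squashing

scores = [degradation_score(x, baseline) for x in monitored]
print(f"mean degradation score of monitored window: {np.mean(scores):.2f}")
```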
Modeling and simulation of five-axis virtual machine based on NX
NASA Astrophysics Data System (ADS)
Li, Xiaoda; Zhan, Xianghui
2018-04-01
Virtual technology plays a growing role in the machinery manufacturing industry. In this paper, the Siemens NX software is used to model a virtual CNC machine tool, and the parameters of the virtual machine are defined according to the actual parameters of the machine tool so that the simulation can be carried out without loss of accuracy. The use of the machine builder in the CAM module to define the kinematic chain and the machine components is described. The simulation of the virtual machine can warn users of tool collision and over-cutting during the process, and can evaluate and forecast the rationality of the technological process.
NASA Astrophysics Data System (ADS)
Pervaiz, S.; Anwar, S.; Kannan, S.; Almarfadi, A.
2018-04-01
Ti6Al4V is known as a difficult-to-cut material due to its inherent properties such as high hot hardness, low thermal conductivity and high chemical reactivity. Nevertheless, Ti6Al4V is widely utilized in industrial sectors such as aeronautics, energy generation, petrochemicals and bio-medical devices. For the metal cutting community, competent and cost-effective machining of Ti6Al4V is a challenging task. To optimize cost and machining performance, finite element based cutting simulation can be a very useful tool. The aim of this paper is to develop a finite element machining model for the simulation of the Ti6Al4V machining process. The study incorporates two material constitutive models, namely the Power Law (PL) and Johnson-Cook (JC) models, to mimic the mechanical behaviour of Ti6Al4V. The study investigates cutting temperatures, cutting forces, stresses, and plastic strains with respect to different PL and JC material models with associated parameters. In addition, the numerical study also integrates different cutting tool rake angles in the machining simulations. The simulated results will be beneficial for drawing conclusions to improve the overall machining performance of Ti6Al4V.
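For reference, the Johnson-Cook flow stress commonly used in such cutting simulations takes the general form below; the material constants A, B, C, n and m for Ti6Al4V must come from calibration data and are not quoted from this paper.

```latex
% Johnson-Cook flow stress (general form; constants not taken from this paper)
\sigma = \left( A + B\,\varepsilon^{\,n} \right)
         \left( 1 + C \ln \frac{\dot{\varepsilon}}{\dot{\varepsilon}_0} \right)
         \left[ 1 - \left( \frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}} \right)^{m} \right]
```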
NASA Astrophysics Data System (ADS)
Hobson, Michael; Graff, Philip; Feroz, Farhan; Lasenby, Anthony
2014-05-01
Machine-learning methods may be used to perform many tasks required in the analysis of astronomical data, including: data description and interpretation, pattern recognition, prediction, classification, compression, inference and many more. An intuitive and well-established approach to machine learning is the use of artificial neural networks (NNs), which consist of a group of interconnected nodes, each of which processes information that it receives and then passes this product on to other nodes via weighted connections. In particular, I discuss the first public release of the generic neural network training algorithm, called SkyNet, and demonstrate its application to astronomical problems focusing on its use in the BAMBI package for accelerated Bayesian inference in cosmology, and the identification of gamma-ray bursters. The SkyNet and BAMBI packages, which are fully parallelised using MPI, are available at http://www.mrao.cam.ac.uk/software/.
Utilization of rotor kinetic energy storage for hybrid vehicles
Hsu, John S [Oak Ridge, TN
2011-05-03
A power system for a motor vehicle having an internal combustion engine, the power system comprises an electric machine (12) further comprising a first excitation source (47), a permanent magnet rotor (28) and a magnetic coupling rotor (26) spaced from the permanent magnet rotor and at least one second excitation source (43), the magnetic coupling rotor (26) also including a flywheel having an inertial mass to store kinetic energy during an initial acceleration to an operating speed; and wherein the first excitation source is electrically connected to the second excitation source for power cycling such that the flywheel rotor (26) exerts torque on the permanent magnet rotor (28) to assist braking and acceleration of the permanent magnet rotor (28) and consequently, the vehicle. An axial gap machine and a radial gap machine are disclosed and methods of the invention are also disclosed.
Active balance system and vibration balanced machine
NASA Technical Reports Server (NTRS)
White, Maurice A. (Inventor); Qiu, Songgang (Inventor); Augenblick, John E. (Inventor); Peterson, Allen A. (Inventor)
2005-01-01
An active balance system is provided for counterbalancing vibrations of an axially reciprocating machine. The balance system includes a support member, a flexure assembly, a counterbalance mass, and a linear motor or an actuator. The support member is configured for attachment to the machine. The flexure assembly includes at least one flat spring having connections along a central portion and an outer peripheral portion. One of the central portion and the outer peripheral portion is fixedly mounted to the support member. The counterbalance mass is fixedly carried by the flexure assembly along another of the central portion and the outer peripheral portion. The linear motor has one of a stator and a mover fixedly mounted to the support member and another of the stator and the mover fixedly mounted to the counterbalance mass. The linear motor is operative to axially reciprocate the counterbalance mass.
Brier, Søren
2015-12-01
Central to the attempt to develop a biosemiotics has been the discussion of what it means to be scientific. In Marcello Barbieri's latest argument for leaving Peircean biosemiotics and creating an alternative code-biology, the definition of what it means to be scientific plays a major role. For Barbieri "scientific knowledge is obtained by building machine-like models of what we observe in nature". Barbieri interestingly claims that - in combination with the empirical and experimental basis - mechanism is virtually equivalent to the scientific method. The consequence of this statement seems to be that the optimal type of knowledge science can produce about living systems is to model them as machines. But the explicit goal of a Peircean semiotically based biosemiotics is (also) to model living systems as cognitive and communicative systems working on the basis of meaning and signification. These two concepts are not part of the mechanistic models of natural science today, not even of cognitive science. Barbieri tries to solve this problem by introducing a new concept of biological meaning that is separate from Peircean biosemiotics and then adding Peirce's semiotics on top. This article argues why this view is inconsistent, on the grounds that Peirce's semiotic paradigm only gives meaning in its pragmaticist conception of a fallibilist view of science, which is in turn intrinsically connected to its non-mechanistic metaphysics of Tychism, Synechism and Agapism. The core of the biosemiotic enterprise is to establish another type of trans- and interdisciplinary wissenschaft than the received view of "science". Copyright © 2015. Published by Elsevier Ltd.
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-09-21
In order to exploit the distributed nature of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a parameter communication optimization strategy that balances the training overhead against the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and avoids situations in which model training is disturbed by tasks unrelated to the sensors.
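As a rough illustration of the idea behind such a strategy (not the paper's DSP algorithm), the sketch below adjusts each worker's allowed staleness bound from a monitored iteration time, so that slower sensor nodes synchronize with the parameter server less often. All names, values and thresholds are hypothetical.

```python
# Rough sketch of dynamically adjusting per-worker staleness bounds from
# monitored iteration times (illustrative only; not the paper's DSP algorithm).
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    avg_iter_time: float   # seconds per local training iteration (monitored)
    staleness: int = 1     # allowed lag, in iterations, behind the parameter server

def adjust_staleness(workers, base_time, max_staleness=8):
    """Give slower workers a larger staleness bound so they sync less often."""
    for w in workers:
        ratio = w.avg_iter_time / base_time
        w.staleness = min(max_staleness, max(1, round(ratio)))
    return workers

workers = [Worker("sensor-A", 0.10), Worker("sensor-B", 0.35), Worker("sensor-C", 0.80)]
fastest = min(w.avg_iter_time for w in workers)
for w in adjust_staleness(workers, fastest):
    print(f"{w.name}: staleness bound = {w.staleness}")
```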
Fuzzy support vector machine: an efficient rule-based classification technique for microarrays.
Hajiloo, Mohsen; Rabiee, Hamid R; Anooshahpour, Mahdi
2013-01-01
The abundance of gene expression microarray data has led to the development of machine learning algorithms applicable for tackling disease diagnosis, disease prognosis, and treatment selection problems. However, these algorithms often produce classifiers with weaknesses in terms of accuracy, robustness, and interpretability. This paper introduces fuzzy support vector machine which is a learning algorithm based on combination of fuzzy classifiers and kernel machines for microarray classification. Experimental results on public leukemia, prostate, and colon cancer datasets show that fuzzy support vector machine applied in combination with filter or wrapper feature selection methods develops a robust model with higher accuracy than the conventional microarray classification models such as support vector machine, artificial neural network, decision trees, k nearest neighbors, and diagonal linear discriminant analysis. Furthermore, the interpretable rule-base inferred from fuzzy support vector machine helps extracting biological knowledge from microarray data. Fuzzy support vector machine as a new classification model with high generalization power, robustness, and good interpretability seems to be a promising tool for gene expression microarray classification.
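One simple way to approximate the idea of weighting training points by fuzzy membership is to pass per-sample weights to an ordinary kernel SVM, with memberships derived from each point's distance to its class centroid. This is a hedged stand-in for the paper's fuzzy support vector machine; the membership rule and parameters are assumptions.

```python
# Hedged stand-in for a fuzzy SVM: per-sample fuzzy memberships (derived here
# from distance to the class centroid) are passed as sample weights to an
# ordinary kernel SVM. The membership rule is an illustrative assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=50, n_informative=10,
                           random_state=0)

memberships = np.empty(len(y))
for c in np.unique(y):
    idx = np.where(y == c)[0]
    centroid = X[idx].mean(axis=0)
    d = np.linalg.norm(X[idx] - centroid, axis=1)
    memberships[idx] = 1.0 - d / (d.max() + 1e-6)   # closer to centroid -> weight near 1

clf = SVC(kernel="rbf", C=1.0).fit(X, y, sample_weight=memberships)
print(f"training accuracy: {clf.score(X, y):.3f}")
```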
Vargas, Mara Ambrosina de O; Meyer, Dagmar Estermann
2005-06-01
This study discusses the human being-machine relationship in the process called "cyborgzation" of the nurse who works in intensive care, based on post-structuralist Cultural Studies and highlighting Haraway's concept of cyborg. In it, manuals used by nurses in Intensive Care Units have been examined as cultural texts. This cultural analysis tries to decode the various senses of "human" and "machine", with the aim of recognizing processes that turn nurses into cyborgs. The argument is that intensive care nurses fall into a process of "technology embodiment" that turns the body-professional into a hybrid, making it possible to disqualify, at the same time, notions such as machine and body "proper", since it is the hybridization between one and the other that counts there. Like cyborgs, intensive care nurses learn to "be with" the machine, and this connection limits the specificity of their actions. It is suggested that processes of "cyborgzation" such as this are useful for questioning - and dealing with in different ways - the senses of "human" and "humanity" that support a major part of knowledge/action in health.
Intelligent hearing aids: the next revolution.
Tao Zhang; Mustiere, Fred; Micheyl, Christophe
2016-08-01
The first revolution in hearing aids came from nonlinear amplification, which allows better compensation for both soft and loud sounds. The second revolution stemmed from the introduction of digital signal processing, which allows better programmability and more sophisticated algorithms. The third revolution in hearing aids is wireless connectivity, which allows seamless connections between a pair of hearing aids and with more and more external devices. Each revolution has fundamentally transformed hearing aids and pushed the entire industry forward significantly. Machine learning has received significant attention in recent years and has been applied in many other industries, e.g., robotics, speech recognition, genetics, and crowdsourcing. We argue that the next revolution in hearing aids is machine intelligence. In fact, this revolution is already quietly happening. We review developments in at least three major areas: applications of machine learning in speech enhancement; applications of machine learning in individualization and customization of signal processing algorithms; and applications of machine learning in improving the efficiency and effectiveness of clinical tests. With the advent of the internet of things, the above developments will accelerate. This revolution will bring patient satisfaction to a level never seen before.
Perspectives on Machine Learning for Classification of Schizotypy Using fMRI Data.
Madsen, Kristoffer H; Krohne, Laerke G; Cai, Xin-Lu; Wang, Yi; Chan, Raymond C K
2018-03-15
Functional magnetic resonance imaging is capable of estimating functional activation and connectivity in the human brain, and lately there has been increased interest in the use of these functional modalities combined with machine learning for identification of psychiatric traits. While these methods bear great potential for early diagnosis and better understanding of disease processes, there is a wide range of processing choices and pitfalls that may severely hamper interpretation and generalization performance unless carefully considered. In this perspective article, we aim to motivate the use of machine learning in schizotypy research. To this end, we describe common data processing steps while commenting on best practices and procedures. First, we introduce the important role of schizotypy to motivate the need for reliable classification, and summarize existing machine learning literature on schizotypy. Then, we describe procedures for extraction of features based on fMRI data, including statistical parametric mapping, parcellation, complex network analysis, and decomposition methods, as well as classification with a special focus on support vector classification and deep learning. We provide more detailed descriptions and software as supplementary material. Finally, we present current challenges in machine learning for classification of schizotypy and comment on future trends and perspectives.
Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro
2018-05-09
Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.
NASA Astrophysics Data System (ADS)
Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad
2017-11-01
Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. Modern technology can be applied through CNC machining, and turning is one machining process that can be carried out on a CNC machine. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to choose machining parameters that minimize both the processing time and the environmental impact. This research developed a multi-objective optimization model that minimizes the processing time and environmental impact of a CNC turning process and yields optimal values of the decision variables, cutting speed and feed rate. Environmental impact is converted from environmental burden through the use of eco-indicator 99. The model was solved using the OptQuest optimization software from Oracle Crystal Ball.
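A minimal sketch of this kind of model follows: the machining time for one turning pass is taken as t = πDL/(1000·v·f), the environmental impact is represented by an assumed proxy that grows with cutting speed and feed, and the two objectives are combined in a weighted sum minimized over cutting speed and feed rate. All constants, bounds, weights and the impact proxy are illustrative assumptions, not the paper's data or eco-indicator values.

```python
# Weighted-sum sketch of a two-objective turning model: minimize machining time
# and an assumed environmental-impact proxy over cutting speed v (m/min) and
# feed rate f (mm/rev). All constants, bounds and the proxy are assumptions.
import numpy as np
from scipy.optimize import minimize

D, L = 40.0, 120.0            # workpiece diameter and length in mm (assumed)
w_time, w_env = 0.7, 0.3      # assumed weights for the two objectives
k_env = 0.002                 # assumed scaling of the impact proxy

def machining_time(v, f):
    """Minutes for one turning pass: pi*D*L / (1000*v*f)."""
    return (np.pi * D * L) / (1000.0 * v * f)

def impact_proxy(v, f):
    """Assumed eco-indicator proxy that grows with speed and feed."""
    return k_env * v**1.5 * f**0.6

def objective(x):
    v, f = x
    return w_time * machining_time(v, f) + w_env * impact_proxy(v, f)

res = minimize(objective, x0=[100.0, 0.2], bounds=[(60.0, 250.0), (0.05, 0.4)])
v_opt, f_opt = res.x
print(f"cutting speed = {v_opt:.1f} m/min, feed rate = {f_opt:.3f} mm/rev")
```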
State of the art in nuclear telerobotics: focus on the man/machine connection
NASA Astrophysics Data System (ADS)
Greaves, Amna E.
1995-12-01
The interface between the human controller and remotely operated device is a crux of telerobotic investigation today. This human-to-machine connection is the means by which we communicate our commands to the device, as well as the medium for decision-critical feedback to the operator. The amount of information transferred through the user interface is growing. This can be seen as a direct result of our need to support added complexities, as well as a rapidly expanding domain of applications. A user interface, or UI, is therefore subject to increasing demands to present information in a meaningful manner to the user. Virtual reality, and multi degree-of-freedom input devices lend us the ability to augment the man/machine interface, and handle burgeoning amounts of data in a more intuitive and anthropomorphically correct manner. Along with the aid of 3-D input and output devices, there are several visual tools that can be employed as part of a graphical UI that enhance and accelerate our comprehension of the data being presented. Thus an advanced UI that features these improvements would reduce the amount of fatigue on the teleoperator, increase his level of safety, facilitate learning, augment his control, and potentially reduce task time. This paper investigates the cutting edge concepts and enhancements that lead to the next generation of telerobotic interface systems.
Alternative Models of Service, Centralized Machine Operations. Phase II Report. Volume II.
ERIC Educational Resources Information Center
Technology Management Corp., Alexandria, VA.
A study was conducted to determine if the centralization of playback machine operations for the national free library program would be feasible, economical, and desirable. An alternative model of playback machine services was constructed and compared with existing network operations considering both cost and service. The alternative model was…
Single phase four pole/six pole motor
Kirschbaum, Herbert S.
1984-01-01
A single phase alternating current electric motor is provided with a main stator winding having two coil groups each including the series connection of three coils. These coil groups can be connected in series for six pole operation and in parallel for four pole operation. The coils are approximately equally spaced around the periphery of the machine but are not of equal numbers of turns. The two coil groups are identically wound and spaced 180 mechanical degrees apart. One coil of each group has more turns and a greater span than the other two coils.
Confabulation Based Sentence Completion for Machine Reading
2010-11-01
Sentence completion is an indispensable component of machine reading. Cogent confabulation is a bio-inspired computational model that mimics the…
Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness
NASA Astrophysics Data System (ADS)
Kusuma, K. K.; Maruf, A.
2016-02-01
Scheduling problems for non-identical machines with low utilization and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is formulated as an integer linear programming model and solved with a branch and bound algorithm. Fixed delivery times are used as the main constraint, and different processing times are used to process the jobs. The results of the proposed model show that the utilization of the production machines can be increased with minimal tardiness when fixed delivery times are used as a constraint.
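The flavour of such a formulation can be sketched with a much smaller integer program: jobs are assigned to non-identical parallel machines, each job's completion time is approximated by its machine's total load, and tardiness against a fixed delivery (due) time is linearized and minimized. This is a deliberately reduced illustration, not the paper's full job-shop model; all processing times and due dates are invented.

```python
# Deliberately reduced sketch (not the paper's full job-shop model): jobs are
# assigned to non-identical parallel machines, completion time is approximated
# by the assigned machine's total load, and tardiness against fixed delivery
# times is linearized and minimized. All data are invented.
import pulp

jobs = ["J1", "J2", "J3", "J4"]
machines = ["M1", "M2"]
proc = {("J1", "M1"): 4, ("J1", "M2"): 6,      # non-identical processing times
        ("J2", "M1"): 3, ("J2", "M2"): 2,
        ("J3", "M1"): 5, ("J3", "M2"): 7,
        ("J4", "M1"): 6, ("J4", "M2"): 4}
due = {"J1": 8, "J2": 5, "J3": 9, "J4": 7}     # fixed delivery times

prob = pulp.LpProblem("min_total_tardiness", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", (jobs, machines), cat="Binary")
T = pulp.LpVariable.dicts("tardiness", jobs, lowBound=0)
load = pulp.LpVariable.dicts("load", machines, lowBound=0)

prob += pulp.lpSum(T[j] for j in jobs)                   # objective: total tardiness
for j in jobs:
    prob += pulp.lpSum(x[j][m] for m in machines) == 1   # each job on one machine
for m in machines:
    prob += load[m] == pulp.lpSum(proc[j, m] * x[j][m] for j in jobs)
for j in jobs:
    for m in machines:
        # completion time approximated by the assigned machine's load (big-M)
        prob += T[j] >= load[m] - due[j] - 1000 * (1 - x[j][m])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for j in jobs:
    chosen = next(m for m in machines if pulp.value(x[j][m]) > 0.5)
    print(j, "->", chosen, "tardiness =", pulp.value(T[j]))
```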
Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things
NASA Astrophysics Data System (ADS)
Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik
2017-09-01
This paper proposes an association rule-based predictive model for machine failure in industrial Internet of things (IIoT), which can accurately predict the machine failure in real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, 3) visualization. The binarization step translates item values in a dataset into one or zero, then the rule creation step creates association rules as IF-THEN structures using the Lattice model and Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
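A small sketch of the binarization and rule-creation steps is given below, using the mlxtend implementation of Apriori rather than the authors' R-based workflow; the failure records and thresholds are invented for illustration.

```python
# Sketch of binarization + Apriori rule creation for machine-failure records.
# Uses mlxtend instead of the authors' R workflow; the records are invented.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

records = [
    ["high_vibration", "overheating", "bearing_failure"],
    ["high_vibration", "bearing_failure"],
    ["overload", "overheating", "motor_failure"],
    ["overload", "motor_failure"],
    ["high_vibration", "overheating", "bearing_failure"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit_transform(records), columns=te.columns_)  # binarization

frequent = apriori(onehot, min_support=0.4, use_colnames=True)         # rule creation
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```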
Numerical Simulation of Earth Pressure on Head Chamber of Shield Machine with FEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Shouju; Kang Chengang; Sun, Wei
2010-05-21
Model parameters of the conditioned soils in the head chamber of a shield machine are determined based on triaxial compression tests in the laboratory. The loads acting on the tunneling face are estimated according to the static earth pressure principle. Based on the Duncan-Chang nonlinear elastic constitutive model, the earth pressures on the head chamber of the shield machine are simulated for different aperture ratios of the rotating cutterhead. A relationship between the pressure transportation factor and the aperture ratio of the shield machine is proposed by using regression analysis.
On-Line Scheduling of Parallel Machines
1990-11-01
Fragments of the report contrast the preemptive model, in which a job can be resumed on another machine without losing any work, with the nonpreemptive model considered in the paper, and describe a 2-relaxed decision procedure: each job is put into the queue of the slowest machine Mk satisfying a stated condition, and a machine whose queue is empty takes jobs to process from the queue of the first machine that is slower than it.
Bishop, Christopher M
2013-02-13
Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.
Allyn, Jérôme; Allou, Nicolas; Augustin, Pascal; Philip, Ivan; Martinet, Olivier; Belghiti, Myriem; Provenchere, Sophie; Montravers, Philippe; Ferdynus, Cyril
2017-01-01
The benefits of cardiac surgery are sometimes difficult to predict and the decision to operate on a given individual is complex. Machine Learning and Decision Curve Analysis (DCA) are recent methods developed to create and evaluate prediction models. We conducted a retrospective cohort study using a prospectively collected database from December 2005 to December 2012, from a cardiac surgical center at University Hospital. The different models for predicting in-hospital mortality after elective cardiac surgery, including EuroSCORE II, a logistic regression model and a machine learning model, were compared by ROC and DCA. Of the 6,520 patients having elective cardiac surgery with cardiopulmonary bypass, 6.3% died. Mean age was 63.4 years old (standard deviation 14.4), and mean EuroSCORE II was 3.7 (4.8) %. The area under the ROC curve (95% CI) for the machine learning model (0.795 (0.755-0.834)) was significantly higher than for EuroSCORE II or the logistic regression model (respectively, 0.737 (0.691-0.783) and 0.742 (0.698-0.785), p < 0.0001). Decision Curve Analysis showed that the machine learning model, in this monocentric study, has a greater benefit whatever the probability threshold. According to ROC and DCA, the machine learning model is more accurate in predicting mortality after elective cardiac surgery than EuroSCORE II. These results confirm the use of machine learning methods in the field of medical prediction.
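Decision Curve Analysis compares models by their net benefit across decision thresholds, net benefit = TP/N − FP/N · pt/(1 − pt). The sketch below computes this quantity for a vector of predicted probabilities; the outcomes and predictions are synthetic, not the study's cohort.

```python
# Net-benefit calculation used in Decision Curve Analysis:
#   NB(pt) = TP/N - FP/N * pt / (1 - pt)
# Outcomes and predictions below are synthetic, not the study's cohort.
import numpy as np

rng = np.random.default_rng(42)
y = rng.binomial(1, 0.063, size=2000)                      # ~6.3% event rate
p_model = np.clip(0.05 + 0.6 * y + rng.normal(0, 0.15, y.size), 0.001, 0.999)

def net_benefit(y_true, p_pred, threshold):
    pred_pos = p_pred >= threshold
    tp = np.sum(pred_pos & (y_true == 1))
    fp = np.sum(pred_pos & (y_true == 0))
    n = len(y_true)
    return tp / n - fp / n * threshold / (1 - threshold)

for pt in (0.05, 0.10, 0.20):
    print(f"threshold {pt:.2f}: net benefit = {net_benefit(y, p_model, pt):.4f}")
```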
LIBRARY INFORMATION PROCESSING USING AN ON-LINE, REAL-TIME COMPUTER SYSTEM.
ERIC Educational Resources Information Center
HOLZBAUR, FREDERICK W.; FARRIS, EUGENE H.
DIRECT MAN-MACHINE COMMUNICATION IS NOW POSSIBLE THROUGH ON-LINE, REAL-TIME TYPEWRITER TERMINALS DIRECTLY CONNECTED TO COMPUTERS. THESE TERMINAL SYSTEMS PERMIT THE OPERATOR, WHETHER ORDER CLERK, CATALOGER, REFERENCE LIBRARIAN OR TYPIST, TO INTERACT WITH THE COMPUTER IN MANIPULATING DATA STORED WITHIN IT. THE IBM ADMINISTRATIVE TERMINAL SYSTEM…
Dependency Structures for Statistical Machine Translation
ERIC Educational Resources Information Center
Bach, Nguyen
2012-01-01
Dependency structures represent a sentence as a set of dependency relations. Normally the dependency relations form a tree that connects all the words in a sentence. One of the most defining characteristics of dependency structures is the ability to bring long-distance dependencies between words into local dependency structures. Another main attraction of…
ERIC Educational Resources Information Center
Tenopir, Carol
2004-01-01
With wireless connectivity and small laptop computers, people are no longer tied to the desktop for online searching. Handheld personal digital assistants (PDAs) offer even greater portability. So far, the most common uses of PDAs are as calendars and address books, or to interface with a laptop or desktop machine. More advanced PDAs, like…
30 CFR 18.48 - Circuit-interrupting devices.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., AND APPROVAL OF MINING PRODUCTS ELECTRIC MOTOR-DRIVEN MINE EQUIPMENT AND ACCESSORIES Construction and.... Such a switch shall be designed to prevent electrical connection to the machine frame when the cable is... motor in the event the belt is stopped, or abnormally slowed down. Note: Short transfer-type conveyors...
Computer Security Primer: Systems Architecture, Special Ontology and Cloud Virtual Machines
ERIC Educational Resources Information Center
Waguespack, Leslie J.
2014-01-01
With the increasing proliferation of multitasking and Internet-connected devices, security has reemerged as a fundamental design concern in information systems. The shift of IS curricula toward a largely organizational perspective of security leaves little room for focus on its foundation in systems architecture, the computational underpinnings of…
Soft Computing Methods for Disulfide Connectivity Prediction.
Márquez-Chamorro, Alfonso E; Aguilar-Ruiz, Jesús S
2015-01-01
The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods.
NASA Technical Reports Server (NTRS)
Corker, Kevin M.; Pisanich, Gregory M.; Lebacqz, Victor (Technical Monitor)
1996-01-01
The Man-Machine Interaction Design and Analysis System (MIDAS) has been under development for the past ten years through a joint US Army and NASA cooperative agreement. MIDAS represents multiple human operators and selected perceptual, cognitive, and physical functions of those operators as they interact with simulated systems. MIDAS has been used as an integrated predictive framework for the investigation of human/machine systems, particularly in situations with high demands on the operators. Specific examples include: nuclear power plant crew simulation, military helicopter flight crew response, and police force emergency dispatch. In recent applications to airborne systems development, MIDAS has demonstrated an ability to predict flight crew decision-making and procedural behavior when interacting with automated flight management systems and Air Traffic Control. In this paper we describe two enhancements to MIDAS. The first involves the addition of working memory in the form of an articulatory buffer for verbal communication protocols and a visuo-spatial buffer for communications via digital datalink. The second enhancement is a representation of multiple operators working as a team. This enhanced model was used to predict the performance of human flight crews and their level of compliance with commercial aviation communication procedures. We show how the data produced by MIDAS compares with flight crew performance data from full mission simulations. Finally, we discuss the use of these features to study communications issues connected with aircraft-based separation assurance.
Machine learning for the assessment of Alzheimer's disease through DTI
NASA Astrophysics Data System (ADS)
Lella, Eufemia; Amoroso, Nicola; Bellotti, Roberto; Diacono, Domenico; La Rocca, Marianna; Maggipinto, Tommaso; Monaco, Alfonso; Tangaro, Sabina
2017-09-01
Digital imaging techniques have found several medical applications in the development of computer aided detection systems, especially in neuroimaging. Recent advances in Diffusion Tensor Imaging (DTI) aim to discover biological markers for the early diagnosis of Alzheimer's disease (AD), one of the most widespread neurodegenerative disorders. We explore here how different supervised classification models provide a robust support to the diagnosis of AD patients. We use DTI measures, assessing the structural integrity of white matter (WM) fiber tracts, to reveal patterns of disrupted brain connectivity. In particular, we provide a voxel-wise measure of fractional anisotropy (FA) and mean diffusivity (MD), thus identifying the regions of the brain mostly affected by neurodegeneration, and then computing intensity features to feed supervised classification algorithms. In particular, we evaluate the accuracy of discrimination of AD patients from healthy controls (HC) with a dataset of 80 subjects (40 HC, 40 AD), from the Alzheimer's Disease Neurodegenerative Initiative (ADNI). In this study, we compare three state-of-the-art classification models: Random Forests, Naive Bayes and Support Vector Machines (SVMs). We use a repeated five-fold cross validation framework with nested feature selection to perform a fair comparison between these algorithms and evaluate the information content they provide. Results show that AD patterns are well localized within the brain, thus DTI features can support the AD diagnosis.
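The repeated five-fold cross-validation with nested feature selection described above can be sketched with scikit-learn as follows; the synthetic feature matrix stands in for the voxel-wise FA/MD measures, and the number of selected features is a placeholder, not a value from the ADNI analysis.

```python
# Repeated 5-fold cross-validation with feature selection nested inside each
# fold (via a Pipeline). Synthetic features stand in for the FA/MD measures.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=80, n_features=500, n_informative=20,
                           random_state=0)   # 40 HC vs 40 AD stand-in

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
models = {"RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
          "NaiveBayes": GaussianNB(),
          "SVM": SVC(kernel="linear", C=1.0)}

for name, clf in models.items():
    pipe = Pipeline([("select", SelectKBest(f_classif, k=50)), ("clf", clf)])
    scores = cross_val_score(pipe, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```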
Unsupervised heart-rate estimation in wearables with Liquid states and a probabilistic readout.
Das, Anup; Pradhapan, Paruthi; Groenendaal, Willemijn; Adiraju, Prathyusha; Rajan, Raj Thilak; Catthoor, Francky; Schaafsma, Siebren; Krichmar, Jeffrey L; Dutt, Nikil; Van Hoof, Chris
2018-03-01
Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine learning technique to estimate heart-rate from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike train and using this to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on Fuzzy c-Means clustering of spike responses from a subset of neurons (Liquid states), selected using particle swarm optimization. Our approach differs from existing works by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking-based neuromorphic systems, offering high accuracy, yet significantly low energy footprint, leading to an extended battery-life of wearable devices. We validated our approach with CARLsim, a GPU accelerated spiking neural network simulator modeling Izhikevich spiking neurons with Spike Timing Dependent Plasticity (STDP) and homeostatic scaling. A range of subjects is considered from in-house clinical trials and public ECG databases. Results show high accuracy and low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach to be integrated in future wearable devices. Copyright © 2018 Elsevier Ltd. All rights reserved.
Salgotra, Aprajita; Pan, Somnath
2018-05-01
This paper explores a two-level control strategy that blends a local controller with a centralized controller to damp low-frequency oscillations in a power system. The proposed scheme stabilizes local modes using the local controller and minimizes the effect of sub-system interconnections on performance through centralized control. For designing the local controllers in the form of a proportional-integral power system stabilizer (PI-PSS), a simple and straightforward frequency-domain direct synthesis method is considered that relies on a suitable reference model based on the desired requirements. Several examples on both one-machine infinite-bus and multi-machine systems taken from the literature illustrate the efficacy of the proposed PI-PSS. The effective damping of the systems is found to be increased remarkably, which is reflected in the time responses; even unstable operation is stabilized with improved damping after applying the proposed controller. The proposed controllers give a remarkable improvement in damping the oscillations in all the illustrations considered here; for example, the damping factor is increased from 0.0217 to 0.666 in Example 1. The simulation results obtained by the proposed control strategy compare favourably with some controllers prevalent in the literature. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
High efficiency machining technology and equipment for edge chamfer of KDP crystals
NASA Astrophysics Data System (ADS)
Chen, Dongsheng; Wang, Baorui; Chen, Jihong
2016-10-01
Potassium dihydrogen phosphate (KDP) is a type of nonlinear optical crystal material. To inhibit transverse stimulated Raman scattering of the laser beam and thereby enhance the optical performance of the optics, the edges of the large-sized KDP crystal need to be removed to form chamfered faces with high surface quality (RMS < 5 nm). However, as the depth of cut (DOC) in fly cutting is usually very small, its machining efficiency is too low to be acceptable for chamfering the KDP crystal, since the amount of material to be removed is on the order of millimeters. This paper proposes a novel hybrid machining method, which combines precision grinding with fly cutting, for crack-free and high-efficiency chamfering of the KDP crystal. A specialized machine tool, which adopts an aerostatic bearing linear slide and an aerostatic bearing spindle, was developed for chamfering the KDP crystal. The aerostatic bearing linear slide consists of an aerostatic bearing guide with linearity of 0.1 μm/100 mm and a linear motor to achieve linear feeding with high precision and high dynamic performance. The vertical spindle consists of an aerostatic bearing spindle with an axial rotation accuracy of 0.05 microns and a fork-type flexible-connection precision driving mechanism. Machining experiments on fly cutting and grinding were carried out, and optimized machining parameters were obtained through a series of experiments. A surface roughness of 2.4 nm has been obtained. The machining efficiency can be improved six-fold using the combined method while producing the same machined surface quality.
FPGA-based Upgrade to RITS-6 Control System, Designed with EMP Considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harold D. Anderson, John T. Williams
2009-07-01
The existing control system for the RITS-6, a 20-MA 3-MV pulsed-power accelerator located at Sandia National Laboratories, was built as a system of analog switches because the operators needed to be close enough to the machine to hear pulsed-power breakdowns, yet the electromagnetic pulse (EMP) emitted would disable any processor-based solutions. The resulting system requires operators to activate and deactivate a series of 110-V relays manually in a complex order. The machine is sensitive to both the order of operation and the time taken between steps. A mistake in either case would cause a misfire and possible machine damage. Based on these constraints, a field-programmable gate array (FPGA) was chosen as the core of a proposed upgrade to the control system. An FPGA is a series of logic elements connected during programming. Based on their connections, the elements can mimic primitive logic elements, a process called synthesis. The circuit is static; all paths exist simultaneously and do not depend on a processor. This should make it less sensitive to EMP. By shielding it and using good electromagnetic interference-reduction practices, it should continue to operate well in the electrically noisy environment. The FPGA has two advantages over the existing system. In manual operation mode, the synthesized logic gates keep the operators in sequence. In addition, a clock signal and synthesized countdown circuit provides an automated sequence, with adjustable delays, for quickly executing the time-critical portions of charging and firing. The FPGA is modeled as a set of states, each state being a unique set of values for the output signals. The state is determined by the input signals, and in the automated segment by the value of the synthesized countdown timer, with the default mode placing the system in a safe configuration. Unlike a processor-based system, any system stimulus that results in an abort situation immediately executes a shutdown, with only a tens-of-nanoseconds delay to propagate across the FPGA. This paper discusses the design, installation, and testing of the proposed system upgrade, including failure statistics and modifications to the original design.
Risk estimation using probability machines.
Dasgupta, Abhijit; Szymczak, Silke; Moore, Jason H; Bailey-Wilson, Joan E; Malley, James D
2014-03-01
Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a "risk machine", will share properties from the statistical machine that it is derived from.
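A minimal sketch of the "probability machine" idea follows: a random forest estimates conditional probabilities directly, and a counterfactual effect size for a binary predictor is read off as the average change in predicted probability when that predictor is toggled for everyone. The data and variable names are synthetic, not from the paper.

```python
# Random forest as a "probability machine": estimate conditional probabilities
# and a counterfactual effect size for a binary predictor by toggling it.
# Data and variable names are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n = 5000
exposure = rng.binomial(1, 0.4, n)               # binary predictor of interest
age = rng.normal(50, 10, n)
logit = -3 + 1.2 * exposure + 0.04 * (age - 50)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([exposure, age])
rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=25,
                            random_state=0).fit(X, y)

X1, X0 = X.copy(), X.copy()
X1[:, 0], X0[:, 0] = 1, 0                        # toggle the exposure for everyone
risk_diff = np.mean(rf.predict_proba(X1)[:, 1] - rf.predict_proba(X0)[:, 1])
print(f"estimated average risk difference for the exposure: {risk_diff:.3f}")
```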
Predictive Modeling and Optimization of Vibration-assisted AFM Tip-based Nanomachining
NASA Astrophysics Data System (ADS)
Kong, Xiangcheng
The tip-based vibration-assisted nanomachining process offers a low-cost, low-effort technique in fabricating nanometer scale 2D/3D structures in sub-100 nm regime. To understand its mechanism, as well as provide the guidelines for process planning and optimization, we have systematically studied this nanomachining technique in this work. To understand the mechanism of this nanomachining technique, we firstly analyzed the interaction between the AFM tip and the workpiece surface during the machining process. A 3D voxel-based numerical algorithm has been developed to calculate the material removal rate as well as the contact area between the AFM tip and the workpiece surface. As a critical factor to understand the mechanism of this nanomachining process, the cutting force has been analyzed and modeled. A semi-empirical model has been proposed by correlating the cutting force with the material removal rate, which was validated using experimental data from different machining conditions. With the understanding of its mechanism, we have developed guidelines for process planning of this nanomachining technique. To provide the guideline for parameter selection, the effect of machining parameters on the feature dimensions (depth and width) has been analyzed. Based on ANOVA test results, the feature width is only controlled by the XY vibration amplitude, while the feature depth is affected by several machining parameters such as setpoint force and feed rate. A semi-empirical model was first proposed to predict the machined feature depth under given machining condition. Then, to reduce the computation intensity, linear and nonlinear regression models were also proposed and validated using experimental data. Given the desired feature dimensions, feasible machining parameters could be provided using these predictive feature dimension models. As the tip wear is unavoidable during the machining process, the machining precision will gradually decrease. To maintain the machining quality, the guideline for when to change the tip should be provided. In this study, we have developed several metrics to detect tip wear, such as tip radius and the pull-off force. The effect of machining parameters on the tip wear rate has been studied using these metrics, and the machining distance before a tip must be changed has been modeled using these machining parameters. Finally, the optimization functions have been built for unit production time and unit production cost subject to realistic constraints, and the optimal machining parameters can be found by solving these functions.
A Review on High-Speed Machining of Titanium Alloys
NASA Astrophysics Data System (ADS)
Rahman, Mustafizur; Wang, Zhi-Gang; Wong, Yoke-San
Titanium alloys have been widely used in the aerospace, biomedical and automotive industries because of their good strength-to-weight ratio and superior corrosion resistance. However, it is very difficult to machine them due to their poor machinability. When machining titanium alloys with conventional tools, the tool wear rate progresses rapidly, and it is generally difficult to achieve a cutting speed of over 60m/min. Other types of tool materials, including ceramic, diamond, and cubic boron nitride (CBN), are highly reactive with titanium alloys at higher temperature. However, binder-less CBN (BCBN) tools, which do not have any binder, sintering agent or catalyst, have a remarkably longer tool life than conventional CBN inserts even at high cutting speeds. In order to get deeper understanding of high speed machining (HSM) of titanium alloys, the generation of mathematical models is essential. The models are also needed to predict the machining parameters for HSM. This paper aims to give an overview of recent developments in machining and HSM of titanium alloys, geometrical modeling of HSM, and cutting force models for HSM of titanium alloys.
Gas Classification Using Deep Convolutional Neural Networks.
Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin
2018-01-08
In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNN in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of: six convolutional blocks, each block consisting of six layers; a pooling layer; and a fully-connected layer. Together, these various layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) methods and Multiple Layer Perceptron (MLP).
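A heavily reduced Keras stand-in for this kind of architecture is sketched below, with far fewer blocks and layers than the 38-layer GasNet; the input shape and class count are placeholders for an electronic-nose dataset, not the authors' configuration.

```python
# Heavily reduced stand-in for a GasNet-style 1-D DCNN: a few convolutional
# blocks, pooling, and a fully-connected softmax head. Input shape and class
# count are placeholders for an electronic-nose dataset.
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(filters):
    return [layers.Conv1D(filters, kernel_size=3, padding="same", activation="relu"),
            layers.Conv1D(filters, kernel_size=3, padding="same", activation="relu"),
            layers.BatchNormalization()]

model = tf.keras.Sequential(
    [layers.Input(shape=(128, 16))]              # 128 time steps x 16 sensor channels
    + conv_block(32) + [layers.MaxPooling1D(2)]
    + conv_block(64) + [layers.MaxPooling1D(2)]
    + conv_block(128)
    + [layers.GlobalAveragePooling1D(),
       layers.Dense(6, activation="softmax")]    # 6 gas classes (placeholder)
)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```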
Schiffer, Johannes; Efimov, Denis; Ortega, Romeo; Barabanov, Nikita
2017-08-13
Conditions for almost global stability of an operating point of a realistic model of a synchronous generator with constant field current connected to an infinite bus are derived. The analysis is conducted by employing the recently proposed concept of input-to-state stability (ISS)-Leonov functions, which is an extension of the powerful cell structure principle developed by Leonov and Noldus to the ISS framework. Compared with the original ideas of Leonov and Noldus, the ISS-Leonov approach has the advantage of providing additional robustness guarantees. The efficiency of the derived sufficient conditions is illustrated via numerical experiments.This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
FY17 ISCR Scholar End-of-Assignment Report - Robbie Sadre
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadre, R.
2017-10-20
Throughout this internship assignment, I did various tasks that contributed towards the starting of the SASEDS (Safe Active Scanning for Energy Delivery Systems) and CES-21 (California Energy Systems for the 21st Century) projects in the SKYFALL laboratory. The goal of the SKYFALL laboratory is to perform modeling and simulation verification of transmission power system devices, while integrating with high-performance computing. The first thing I needed to do was acquire official Online LabVIEW training from National Instruments. Through these online tutorial modules, I learned the basics of LabVIEW, gaining experience in connecting to NI devices through the DAQmx API as well as LabVIEW basic programming techniques (structures, loops, state machines, front panel GUI design, etc.).
Automated Cough Assessment on a Mobile Platform
2014-01-01
The development of an Automated System for Asthma Monitoring (ADAM) is described. This consists of a consumer electronics mobile platform running a custom application. The application acquires an audio signal from an external user-worn microphone connected to the device analog-to-digital converter (microphone input). This signal is processed to determine the presence or absence of cough sounds. Symptom tallies and raw audio waveforms are recorded and made easily accessible for later review by a healthcare provider. The symptom detection algorithm is based upon standard speech recognition and machine learning paradigms and consists of an audio feature extraction step followed by a Hidden Markov Model based Viterbi decoder that has been trained on a large database of audio examples from a variety of subjects. Multiple Hidden Markov Model topologies and orders are studied. Performance of the recognizer is presented in terms of the sensitivity and the rate of false alarm as determined in a cross-validation test. PMID:25506590
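The recognizer's core idea, labeling audio segments according to which trained Hidden Markov Model scores them higher, can be sketched with hmmlearn as below; the synthetic feature sequences, the number of states, and the two-class setup are placeholder assumptions rather than the ADAM system's actual configuration.

```python
# Sketch of HMM-based symptom detection: train one Gaussian HMM per class
# (cough vs. non-cough) on feature sequences, then label a new segment by the
# higher log-likelihood. Features are synthetic stand-ins for audio features.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(3)

def make_sequences(mean, n_seq=30, length=40, dim=13):
    return [rng.normal(mean, 1.0, size=(length, dim)) for _ in range(n_seq)]

def fit_class_hmm(sequences, n_states=3):
    X = np.concatenate(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=0)
    return model.fit(X, lengths)

cough_model = fit_class_hmm(make_sequences(mean=1.0))
other_model = fit_class_hmm(make_sequences(mean=-1.0))

segment = rng.normal(1.0, 1.0, size=(40, 13))    # unseen segment to classify
label = "cough" if cough_model.score(segment) > other_model.score(segment) else "non-cough"
print("detected:", label)
```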
Patel, Meenal J; Andreescu, Carmen; Price, Julie C; Edelman, Kathryn L; Reynolds, Charles F; Aizenstein, Howard J
2015-10-01
Currently, depression diagnosis relies primarily on behavioral symptoms and signs, and treatment is guided by trial and error instead of evaluating associated underlying brain characteristics. Unlike past studies, we attempted to estimate accurate prediction models for late-life depression diagnosis and treatment response using multiple machine learning methods with inputs of multi-modal imaging and non-imaging whole brain and network-based features. Late-life depression patients (medicated post-recruitment) (n = 33) and older non-depressed individuals (n = 35) were recruited. Their demographics and cognitive ability scores were recorded, and brain characteristics were acquired using multi-modal magnetic resonance imaging pretreatment. Linear and nonlinear learning methods were tested for estimating accurate prediction models. A learning method called alternating decision trees estimated the most accurate prediction models for late-life depression diagnosis (87.27% accuracy) and treatment response (89.47% accuracy). The diagnosis model included measures of age, Mini-mental state examination score, and structural imaging (e.g. whole brain atrophy and global white matter hyperintensity burden). The treatment response model included measures of structural and functional connectivity. Combinations of multi-modal imaging and/or non-imaging measures may help better predict late-life depression diagnosis and treatment response. As a preliminary observation, we speculate that the results may also suggest that different underlying brain characteristics defined by multi-modal imaging measures-rather than region-based differences-are associated with depression versus depression recovery because to our knowledge this is the first depression study to accurately predict both using the same approach. These findings may help better understand late-life depression and identify preliminary steps toward personalized late-life depression treatment. Copyright © 2015 John Wiley & Sons, Ltd.
Parallel eigenanalysis of finite element models in a completely connected architecture
NASA Technical Reports Server (NTRS)
Akl, F. A.; Morel, M. R.
1989-01-01
A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis, (K)(phi) = (M)(phi)(omega), where (K) and (M) are of order N, and (omega) is of order q. The concurrent solution of the eigenproblem is based on the multifrontal/modified subspace method and is achieved in a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm was successfully implemented on a tightly coupled multiple-instruction multiple-data parallel processing machine, Cray X-MP. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macrotasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts and the dimension of the subspace on the performance of the algorithm are investigated. A parallel finite element dynamic analysis program, p-feda, is documented and the performance of its subroutines in parallel environment is analyzed.
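For readers who want a concrete reference point, the sketch below solves the same kind of generalized eigenproblem K·phi = M·phi·omega serially for a small spring-mass model using SciPy; it only illustrates the problem being solved, not the paper's multifrontal/modified subspace method or its Cray X-MP implementation.

```python
# Serial reference solution of K*phi = M*phi*omega for a small spring-mass
# chain; the parallel multifrontal/subspace scheme is not reproduced here.
import numpy as np
from scipy.linalg import eigh

n = 6
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stiffness (tridiagonal)
M = np.eye(n)                                            # lumped mass matrix

q = 3                                   # dimension of the subspace of interest
w2, phi = eigh(K, M)                    # all eigenpairs, ascending eigenvalues
w2, phi = w2[:q], phi[:, :q]            # keep the lowest q modes

print("eigenvalues (omega^2):", w2)
print("mass-orthonormal:", np.allclose(phi.T @ M @ phi, np.eye(q)))
```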
An integrated network of Arabidopsis growth regulators and its use for gene prioritization.
Sabaghian, Ehsan; Drebert, Zuzanna; Inzé, Dirk; Saeys, Yvan
2015-12-01
Elucidating the molecular mechanisms that govern plant growth has been an important topic in plant research, and current advances in large-scale data generation call for computational tools that efficiently combine these different data sources to generate novel hypotheses. In this work, we present a novel, integrated network that combines multiple large-scale data sources to characterize growth regulatory genes in Arabidopsis, one of the main plant model organisms. The contributions of this work are twofold: first, we characterized a set of carefully selected growth regulators with respect to their connectivity patterns in the integrated network, and, subsequently, we explored to which extent these connectivity patterns can be used to suggest new growth regulators. Using a large-scale comparative study, we designed new supervised machine learning methods to prioritize growth regulators. Our results show that these methods significantly improve current state-of-the-art prioritization techniques, and are able to suggest meaningful new growth regulators. In addition, the integrated network is made available to the scientific community, providing a rich data source that will be useful for many biological processes, not necessarily restricted to plant growth.
Wang, Zhi-Long; Zhou, Zhi-Guo; Chen, Ying; Li, Xiao-Ting; Sun, Ying-Shi
The aim of this study was to diagnose lymph node metastasis of esophageal cancer using a support vector machines model based on computed tomography. A total of 131 esophageal cancer patients with preoperative chemotherapy and radical surgery were included. Various indicators (tumor thickness, tumor length, tumor CT value, total number of lymph nodes, and long axis and short axis sizes of largest lymph node) on CT images before and after neoadjuvant chemotherapy were recorded. A support vector machines model based on these CT indicators was built to predict lymph node metastasis. The support vector machines model diagnosed lymph node metastasis better than the preoperative short axis size of the largest lymph node on CT. The areas under the receiver operating characteristic curves were 0.887 and 0.705, respectively. The support vector machine model of CT images can help diagnose lymph node metastasis in esophageal cancer with preoperative chemotherapy.
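A minimal sketch of the kind of support vector machine classifier described above, assuming scikit-learn is available; the six feature columns mirror the listed CT indicators, but the values and labels below are synthetic stand-ins, not the study's patient data.

```python
# SVM classifier sketch on CT-derived indicators with cross-validated ROC AUC.
# Synthetic placeholder data; not the study's measurements.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# columns: tumor thickness, tumor length, tumor CT value, lymph node count,
# long axis and short axis of the largest lymph node
X = rng.normal(size=(131, 6))
y = rng.integers(0, 2, size=131)          # 1 = lymph node metastasis

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean())
```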
Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer
2017-04-01
Recently, Artificial Intelligence (AI) has been used widely in medicine and the health care sector. In machine learning, classification or prediction is a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions for the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most prominent machine learning methods are explained, and the confusion between a statistical approach and machine learning is clarified. A review of related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.
Overview and extensions of a system for routing directed graphs on SIMD architectures
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl
1988-01-01
Many problems can be described in terms of directed graphs that contain a large number of vertices where simple computations occur using data from adjacent vertices. A method is given for parallelizing such problems on an SIMD machine model that uses only nearest neighbor connections for communication, and has no facility for local indirect addressing. Each vertex of the graph will be assigned to a processor in the machine. Rules for a labeling are introduced that support the use of a simple algorithm for movement of data along the edges of the graph. Additional algorithms are defined for addition and deletion of edges. Modifying or adding a new edge takes the same time as parallel traversal. This combination of architecture and algorithms defines a system that is relatively simple to build and can do fast graph processing. All edges can be traversed in parallel in time O(T), where T is empirically proportional to the average path length in the embedding times the average degree of the graph. Additionally, the researchers present an extension to the above method which enhances performance by adding limited broadcasting capabilities.
Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool
NASA Astrophysics Data System (ADS)
Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo
2017-05-01
Aiming at the problem of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two-turntable five-axis machine tool are researched. Measurement experiments on heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced into the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; a new ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experiment system is developed, and the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measurement results of spindle thermal errors. Experiment results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, and the residual error is smaller than 3 μm; the new modeling method is therefore feasible. The proposed research provides instruction for compensating thermal errors and improving the machining accuracy of NC machine tools.
Modelling of internal architecture of kinesin nanomotor as a machine language.
Khataee, H R; Ibrahim, M Y
2012-09-01
Kinesin is a protein-based natural nanomotor that transports molecular cargoes within cells by walking along microtubules. Kinesin nanomotor is considered as a bio-nanoagent which is able to sense the cell through its sensors (i.e. its heads and tail), make the decision internally and perform actions on the cell through its actuator (i.e. its motor domain). The study maps the agent-based architectural model of internal decision-making process of kinesin nanomotor to a machine language using an automata algorithm. The applied automata algorithm receives the internal agent-based architectural model of kinesin nanomotor as a deterministic finite automaton (DFA) model and generates a regular machine language. The generated regular machine language was acceptable by the architectural DFA model of the nanomotor and also in good agreement with its natural behaviour. The internal agent-based architectural model of kinesin nanomotor indicates the degree of autonomy and intelligence of the nanomotor interactions with its cell. Thus, our developed regular machine language can model the degree of autonomy and intelligence of kinesin nanomotor interactions with its cell as a language. Modelling of internal architectures of autonomous and intelligent bio-nanosystems as machine languages can lay the foundation towards the concept of bio-nanoswarms and next phases of the bio-nanorobotic systems development.
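As a concrete illustration of a deterministic finite automaton accepting a regular "machine language", the toy sketch below encodes a hypothetical three-state walking cycle; the states, symbols and transitions are illustrative only and are not the paper's kinesin model.

```python
# Toy DFA in the spirit of the kinesin model; states and alphabet are invented.
class DFA:
    def __init__(self, transitions, start, accepting):
        self.transitions = transitions      # dict: (state, symbol) -> state
        self.start = start
        self.accepting = accepting

    def accepts(self, word):
        state = self.start
        for symbol in word:
            key = (state, symbol)
            if key not in self.transitions:
                return False                # no transition: reject
            state = self.transitions[key]
        return state in self.accepting

# Hypothetical walking cycle: bind ATP (a), hydrolyse (h), step (s)
kinesin = DFA(
    transitions={("bound", "a"): "loaded",
                 ("loaded", "h"): "stepping",
                 ("stepping", "s"): "bound"},
    start="bound",
    accepting={"bound"},
)
print(kinesin.accepts("ahs" * 3))   # True: three complete cycles
print(kinesin.accepts("ah"))        # False: cycle not completed
```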
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.
The absence of a robust and unified theory of cyber dynamics presents challenges and opportunities for using machine learning based data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. In order to be operationally beneficial, cybersecurity machine learning based models need to have the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and probabilistic graphical models provide these necessary properties and are further explored in this chapter. Bayesian Networks and Hidden Markov Models are introduced as examples of a widely used data-driven classification/modeling strategy.
MOD-2 wind turbine farm stability study
NASA Technical Reports Server (NTRS)
Hinrichsen, E. N.
1980-01-01
The dynamics of single and multiple 2.5 MW Boeing MOD-2 wind turbine generators (WTGs) connected to utility power systems were investigated. The analysis was based on digital simulation. Both time response and frequency response methods were used. The dynamics of this type of WTG are characterized by two torsional modes, a low frequency 'shaft' mode below 1 Hz and an 'electrical' mode at 3-5 Hz. High turbine inertia and low torsional stiffness between turbine and generator are inherent features. Turbine control is based on electrical power, not turbine speed as in conventional utility turbine generators. Multi-machine dynamics differ very little from single machine dynamics.
El-Sayed, Hesham; Sankar, Sharmi; Daraghmi, Yousef-Awwad; Tiwari, Prayag; Rattagan, Ekarat; Mohanty, Manoranjan; Puthal, Deepak; Prasad, Mukesh
2018-05-24
Heterogeneous vehicular networks (HETVNETs) evolve from vehicular ad hoc networks (VANETs), which allow vehicles to always be connected so as to obtain safety services within intelligent transportation systems (ITSs). The services and data provided by HETVNETs should be neither interrupted nor delayed. Therefore, Quality of Service (QoS) improvement of HETVNETs is one of the topics attracting the attention of researchers and the manufacturing community. Several methodologies and frameworks have been devised by researchers to address QoS-prediction service issues. In this paper, to improve QoS, we evaluate various traffic characteristics of HETVNETs and propose a new supervised learning model to capture knowledge on all possible traffic patterns. This model is a refinement of support vector machine (SVM) kernels with a radial basis function (RBF). The proposed model produces better results than SVMs, and outperforms other prediction methods used in a traffic context, as it has lower computational complexity and higher prediction accuracy.
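The model described above is a refinement of an SVM with an RBF kernel. As a rough, hedged sketch of that family of predictors, the example below fits an RBF-kernel support vector regressor to synthetic traffic-like features using scikit-learn; the feature names, hyperparameters and data are placeholders, not the paper's refined kernel.

```python
# RBF-kernel support vector regression sketch for a QoS metric (e.g. delay).
# Synthetic features and placeholder hyperparameters.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 4))     # e.g. vehicle density, speed, load, channel quality
y = 2.0 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.1 * rng.normal(size=500)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```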
Electron beam lithographic modeling assisted by artificial intelligence technology
NASA Astrophysics Data System (ADS)
Nakayamada, Noriaki; Nishimura, Rieko; Miura, Satoru; Nomura, Haruyuki; Kamikubo, Takashi
2017-07-01
We propose a new concept of tuning a point-spread function (a "kernel" function) in the modeling of electron beam lithography using the machine learning scheme. Normally in the work of artificial intelligence, the researchers focus on the output results from a neural network, such as success ratio in image recognition or improved production yield, etc. In this work, we put more focus on the weights connecting the nodes in a convolutional neural network, which are naturally the fractions of a point-spread function, and take out those weighted fractions after learning to be utilized as a tuned kernel. Proof-of-concept of the kernel tuning has been demonstrated using the examples of proximity effect correction with 2-layer network, and charging effect correction with 3-layer network. This type of new tuning method can be beneficial to give researchers more insights to come up with a better model, yet it might be too early to be deployed to production to give better critical dimension (CD) and positional accuracy almost instantly.
The Effect of Friction in Pulleys on the Tension in Cables and Strings
NASA Astrophysics Data System (ADS)
Martell, Eric C.; Martell, Verda Beth
2013-02-01
Atwood's machine is used in countless introductory physics classes as an illustration of Newton's second law. Initially, the analysis is performed assuming the pulley and string are massless and the axle is frictionless. Although the mass of the pulley is often included when the problem is revisited later in the context of rotational dynamics, the mass of the string and the friction associated with the axle are less frequently discussed. Two questions then arise: 1) If we are ignoring these effects, how realistic is our model? and 2) How can we determine when or if we need to incorporate these effects in order to make our model match up with reality? These questions are connected to fundamental issues faced by physics teachers, namely the frustration students sometimes feel when they do not see how they can use the results of the problems they have been working on and how we can help our students develop effective models for physical systems.
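For reference, a standard worked form of the system discussed above, assuming m1 > m2, a pulley of moment of inertia I and radius r, a massless string that does not slip, and a constant frictional torque tau_f at the axle:

```latex
% Atwood's machine with a massive pulley (moment of inertia I, radius r)
% and a constant frictional torque \tau_f at the axle, assuming m_1 > m_2.
\begin{align}
  m_1 g - T_1 &= m_1 a, \qquad T_2 - m_2 g = m_2 a,\\
  (T_1 - T_2)\,r - \tau_f &= I\alpha = I\,\frac{a}{r},\\
  \Rightarrow\quad a &= \frac{(m_1 - m_2)\,g - \tau_f/r}{m_1 + m_2 + I/r^2}.
\end{align}
% Setting I -> 0 and \tau_f -> 0 recovers the ideal textbook result
% a = (m_1 - m_2)g/(m_1 + m_2), with T_1 = T_2.
```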
Rating Movies and Rating the Raters Who Rate Them
Zhou, Hua; Lange, Kenneth
2010-01-01
The movie distribution company Netflix has generated considerable buzz in the statistics community by offering a million dollar prize for improvements to its movie rating system. Among the statisticians and computer scientists who have disclosed their techniques, the emphasis has been on machine learning approaches. This article has the modest goal of discussing a simple model for movie rating and other forms of democratic rating. Because the model involves a large number of parameters, it is nontrivial to carry out maximum likelihood estimation. Here we derive a straightforward EM algorithm from the perspective of the more general MM algorithm. The algorithm is capable of finding the global maximum on a likelihood landscape littered with inferior modes. We apply two variants of the model to a dataset from the MovieLens archive and compare their results. Our model identifies quirky raters, redefines the raw rankings, and permits imputation of missing ratings. The model is intended to stimulate discussion and development of better theory rather than to win the prize. It has the added benefit of introducing readers to some of the issues connected with analyzing high-dimensional data. PMID:20802818
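The article's rating model itself is not reproduced here, but as a generic, hedged illustration of the E-step/M-step alternation it builds on, the sketch below runs EM on a two-component one-dimensional Gaussian mixture with synthetic data.

```python
# Generic EM iteration for a two-component 1-D Gaussian mixture; shown only to
# illustrate the E-step / M-step alternation, not the article's rating model.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(1.0, 0.5, 300), rng.normal(4.0, 0.8, 200)])

pi, mu, sigma = 0.5, np.array([0.0, 5.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: responsibility of component 1 for each observation
    p0 = (1 - pi) * np.exp(-0.5 * ((x - mu[0]) / sigma[0]) ** 2) / sigma[0]
    p1 = pi * np.exp(-0.5 * ((x - mu[1]) / sigma[1]) ** 2) / sigma[1]
    r = p1 / (p0 + p1)
    # M-step: update mixture weight, means and standard deviations
    pi = r.mean()
    mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
    sigma = np.sqrt(np.array([np.average((x - mu[0]) ** 2, weights=1 - r),
                              np.average((x - mu[1]) ** 2, weights=r)]))
print(pi, mu, sigma)
```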
Predictive Surface Roughness Model for End Milling of Machinable Glass Ceramic
NASA Astrophysics Data System (ADS)
Mohan Reddy, M.; Gorin, Alexander; Abou-El-Hossein, K. A.
2011-02-01
Machinable glass ceramic, an advanced ceramic, is an attractive material for producing high accuracy miniaturized components for many applications in industries such as aerospace, electronics, biomedical, automotive, environmental and communications, owing to its wear resistance, high hardness, high compressive strength, good corrosion resistance and excellent high temperature properties. Many research works have been conducted in the last few years to investigate the performance of different machining operations when processing various advanced ceramics. Micro end-milling is one of the machining methods to meet the demand for micro parts. Selecting proper machining parameters is important to obtain good surface finish during machining of machinable glass ceramic. Therefore, this paper describes the development of a predictive model for the surface roughness of machinable glass ceramic in terms of speed and feed rate for the micro end-milling operation.
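One common way to realize such a predictive model is a second-order response-surface regression in the machining parameters. The sketch below, assuming scikit-learn, fits Ra as a quadratic function of spindle speed and feed rate on made-up measurements; the numbers are illustrative placeholders, not the paper's experimental data or chosen model form.

```python
# Quadratic response-surface sketch: Ra = f(spindle speed, feed rate).
# Hypothetical measurements; not the study's data.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# columns: spindle speed (rpm), feed rate (mm/min); target: roughness Ra (um)
X = np.array([[20000, 50], [20000, 100], [30000, 50],
              [30000, 100], [40000, 50], [40000, 100]], dtype=float)
Ra = np.array([0.42, 0.61, 0.35, 0.52, 0.30, 0.47])

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, Ra)
print("predicted Ra at 35000 rpm, 75 mm/min:",
      model.predict([[35000, 75]])[0])
```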
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-01-01
In order to utilize the distributed characteristic of sensors, distributed machine learning has become the mainstream approach, but the different computing capability of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a reasonable parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose the Dynamic Finite Fault Tolerance (DFFT). Based on the DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named Dynamic Synchronous Parallel Strategy (DSP), which uses the performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and avoids the situation that the model training is disturbed by any tasks unrelated to the sensors. PMID:28934163
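As a rough schematic of the general idea of bounding how far workers may drift apart during parameter synchronization, the toy loop below lets each simulated worker run ahead of the slowest one by at most a fixed staleness bound; it is a simplified stand-in, not the paper's DSP strategy or its performance-monitoring model.

```python
# Simplified stale-bounded parameter-server loop: a worker may run ahead of the
# slowest worker by at most `staleness` iterations before it must wait.
import numpy as np

n_workers, staleness, n_iters = 4, 2, 20
clock = np.zeros(n_workers, dtype=int)          # per-worker iteration counters
w = np.zeros(5)                                 # shared model parameters

rng = np.random.default_rng(0)
for step in range(n_iters * n_workers):
    k = rng.integers(n_workers)                 # a worker becomes ready
    if clock[k] - clock.min() >= staleness:
        continue                                # too far ahead: wait for stragglers
    grad = rng.normal(size=w.shape)             # stand-in for a local gradient
    w -= 0.01 * grad                            # server applies the update
    clock[k] += 1

print("final clocks:", clock, "max skew:", clock.max() - clock.min())
```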
Runtime Verification of C Programs
NASA Technical Reports Server (NTRS)
Havelund, Klaus
2008-01-01
We present in this paper a framework, RMOR, for monitoring the execution of C programs against state machines, expressed in a textual (nongraphical) format in files separate from the program. The state machine language has been inspired by a graphical state machine language, RCAT, recently developed at the Jet Propulsion Laboratory as an alternative to using Linear Temporal Logic (LTL) for requirements capture. Transitions between states are labeled with abstract event names and Boolean expressions over such events. The abstract events are connected to code fragments using an aspect-oriented pointcut language similar to ASPECTJ's or ASPECTC's pointcut language. The system is implemented in the C analysis and transformation package CIL, and is programmed in OCAML, the implementation language of CIL. The work is closely related to the notion of stateful aspects within aspect-oriented programming, where pointcut languages are extended with temporal assertions over the execution trace.
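To make the monitoring idea concrete, the toy sketch below checks an event trace against a small state machine in Python; the property, event names and states are hypothetical, and the example does not use RMOR's textual language, CIL instrumentation or pointcuts.

```python
# Toy runtime monitor: a state machine over abstract events checks that every
# "open" is followed by "close" before "shutdown". Property is illustrative.
class Monitor:
    def __init__(self):
        self.state = "idle"
        self.violated = False

    def emit(self, event):
        if self.state == "idle" and event == "open":
            self.state = "in_use"
        elif self.state == "in_use" and event == "close":
            self.state = "idle"
        elif self.state == "in_use" and event == "shutdown":
            self.violated = True              # resource left open at shutdown

m = Monitor()
for e in ["open", "close", "open", "shutdown"]:   # instrumented program trace
    m.emit(e)
print("property violated:", m.violated)           # True for this trace
```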
Applications of high power lasers. [using reflection holograms for machining and surface treatment
NASA Technical Reports Server (NTRS)
Angus, J. C.
1979-01-01
The use of computer generated reflection holograms in conjunction with high power lasers for precision machining of metals and ceramics was investigated. The reflection holograms, which were developed and made to work at both optical (He-Ne, 6328 A) and infrared (CO2, 10.6 micrometer) wavelengths, meet the primary practical requirement of ruggedness and are relatively economical and simple to fabricate. The technology is sufficiently advanced now that reflection holography could indeed be used as a practical manufacturing device in certain applications requiring low power densities. However, the present holograms are energy inefficient and much of the laser power is lost in the zero order spot and higher diffraction orders. Improvements of laser machining over conventional methods are discussed and additional applications are listed. Possible uses in the electronics industry include drilling holes in printed circuit boards, making soldered connections, and resistor trimming.
Energy landscapes for a machine-learning prediction of patient discharge
NASA Astrophysics Data System (ADS)
Das, Ritankar; Wales, David J.
2016-06-01
The energy landscapes framework is applied to a configuration space generated by training the parameters of a neural network. In this study the input data consists of time series for a collection of vital signs monitored for hospital patients, and the outcomes are patient discharge or continued hospitalisation. Using machine learning as a predictive diagnostic tool to identify patterns in large quantities of electronic health record data in real time is a very attractive approach for supporting clinical decisions, which have the potential to improve patient outcomes and reduce waiting times for discharge. Here we report some preliminary analysis to show how machine learning might be applied. In particular, we visualize the fitting landscape in terms of locally optimal neural networks and the connections between them in parameter space. We anticipate that these results, and analogues of thermodynamic properties for molecular systems, may help in the future design of improved predictive tools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agazzone, U.; Ausiello, F.P.
1981-06-23
A power-generating installation comprises a plurality of modular power plants each comprised of an internal combustion engine connected to an electric machine. The electric machine is used to start the engine and thereafter operates as a generator supplying power to an electrical network common to all the modular plants. The installation has a control and protection system comprising a plurality of control modules each associated with a respective plant, and a central unit passing control signals to the modules to control starting and stopping of the individual power plants. Upon the detection of abnormal operation or failure of its associated power plant, each control module transmits an alarm signal back to the central unit, which thereupon stops, or prevents the starting of, the corresponding power plant. Parameters monitored by each control module include generated current and inter-winding leakage current of the electric machine.
Graph theory for feature extraction and classification: a migraine pathology case study.
Jorge-Hernandez, Fernando; Garcia Chimeno, Yolanda; Garcia-Zapirain, Begonya; Cabrera Zubizarreta, Alberto; Gomez Beldarrain, Maria Angeles; Fernandez-Ruanova, Begonya
2014-01-01
Graph theory is widely used as a representational form and characterization of brain connectivity networks, as is machine learning for classifying groups depending on the features extracted from images. Many of these studies use different techniques, such as preprocessing, correlations, features or algorithms. This paper proposes an automatic tool to perform a standard process using images from a Magnetic Resonance Imaging (MRI) machine. The process includes pre-processing, building the graph per subject with different correlations and atlases, extracting the relevant features according to the literature, and finally providing a set of machine learning algorithms which can produce analyzable results for physicians or specialists. In order to verify the process, a set of images from prescription drug abusers and patients with migraine has been used. In this way, the proper functioning of the tool has been demonstrated, providing success rates of 87% and 92% depending on the classifier used.
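A minimal sketch of the per-subject pipeline described above, assuming NetworkX and scikit-learn: threshold a correlation matrix into a graph, extract a few standard graph-theory features, and feed them to a classifier. The data, threshold and feature choices are synthetic placeholders, not the tool's actual configuration.

```python
# Graph-feature pipeline sketch: correlation matrix -> thresholded graph ->
# graph-theory features -> classifier. Synthetic stand-in data only.
import numpy as np
import networkx as nx
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def subject_features(n_regions=30, threshold=0.2):
    corr = np.corrcoef(rng.normal(size=(n_regions, 100)))   # stand-in for MRI correlations
    np.fill_diagonal(corr, 0.0)                              # no self-connections
    G = nx.from_numpy_array((np.abs(corr) > threshold).astype(int))
    return [nx.density(G),
            nx.average_clustering(G),
            nx.global_efficiency(G)]

X = np.array([subject_features() for _ in range(40)])
y = np.array([0] * 20 + [1] * 20)        # e.g. control vs. migraine labels
print("cross-validated accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())
```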
Method and apparatus for calibrating multi-axis load cells in a dexterous robot
NASA Technical Reports Server (NTRS)
Wampler, II, Charles W. (Inventor); Platt, Jr., Robert J. (Inventor)
2012-01-01
A robotic system includes a dexterous robot having robotic joints, angle sensors adapted for measuring joint angles at a corresponding one of the joints, load cells for measuring a set of strain values imparted to a corresponding one of the load cells during a predetermined pose of the robot, and a host machine. The host machine is electrically connected to the load cells and angle sensors, and receives the joint angle values and strain values during the predetermined pose. The robot presses together mating pairs of load cells to form the poses. The host machine executes an algorithm to process the joint angles and strain values, and from the set of all calibration matrices that minimize error in the force balance equations, selects the set of calibration matrices that is closest in value to a pre-specified value. A method for calibrating the load cells via the algorithm is also provided.
Active vibration and balance system for closed cycle thermodynamic machines
NASA Technical Reports Server (NTRS)
Augenblick, John E. (Inventor); Peterson, Allen A. (Inventor); White, Maurice A. (Inventor); Qiu, Songgang (Inventor)
2004-01-01
An active balance system is provided for counterbalancing vibrations of an axially reciprocating machine. The balance system includes a support member, a flexure assembly, a counterbalance mass, and a linear motor or an actuator. The support member is configured for attachment to the machine. The flexure assembly includes at least one flat spring having connections along a central portion and an outer peripheral portion. One of the central portion and the outer peripheral portion is fixedly mounted to the support member. The counterbalance mass is fixedly carried by the flexure assembly along another of the central portion and the outer peripheral portion. The linear motor has one of a stator and a mover fixedly mounted to the support member and another of the stator and the mover fixedly mounted to the counterbalance mass. The linear motor is operative to axially reciprocate the counterbalance mass. A method is also provided.
Allyn, Jérôme; Allou, Nicolas; Augustin, Pascal; Philip, Ivan; Martinet, Olivier; Belghiti, Myriem; Provenchere, Sophie; Montravers, Philippe; Ferdynus, Cyril
2017-01-01
Background The benefits of cardiac surgery are sometimes difficult to predict and the decision to operate on a given individual is complex. Machine Learning and Decision Curve Analysis (DCA) are recent methods developed to create and evaluate prediction models. Methods and findings We conducted a retrospective cohort study using a prospectively collected database from December 2005 to December 2012, from a cardiac surgical center at a University Hospital. The different models of prediction of in-hospital mortality after elective cardiac surgery, including EuroSCORE II, a logistic regression model and a machine learning model, were compared by ROC and DCA. Of the 6,520 patients having elective cardiac surgery with cardiopulmonary bypass, 6.3% died. Mean age was 63.4 years old (standard deviation 14.4), and mean EuroSCORE II was 3.7 (4.8) %. The area under the ROC curve (95% CI) for the machine learning model (0.795 (0.755–0.834)) was significantly higher than for EuroSCORE II or the logistic regression model (respectively, 0.737 (0.691–0.783) and 0.742 (0.698–0.785), p < 0.0001). Decision Curve Analysis showed that the machine learning model, in this monocentric study, has a greater benefit whatever the probability threshold. Conclusions According to ROC and DCA, the machine learning model is more accurate in predicting mortality after elective cardiac surgery than EuroSCORE II. These results confirm the use of machine learning methods in the field of medical prediction. PMID:28060903
New numerical approach for the modelling of machining applied to aeronautical structural parts
NASA Astrophysics Data System (ADS)
Rambaud, Pierrick; Mocellin, Katia
2018-05-01
The manufacturing of aluminium alloy structural aerospace parts involves several steps: forming (rolling, forging, etc.), heat treatments and machining. Before machining, the manufacturing processes have embedded residual stresses into the workpiece. The final geometry is obtained during this last step, when up to 90% of the raw material volume is removed by machining. During this operation, the mechanical equilibrium of the part is in constant evolution due to the redistribution of the initial stresses. This redistribution is the main cause of workpiece deflections during machining and of distortions after unclamping. Both may lead to non-conformity of the part regarding the geometrical and dimensional specifications and therefore to rejection of the part or additional conforming steps. In order to improve the machining accuracy and the robustness of the process, the effect of the residual stresses has to be considered in the definition of the machining process plan and even in the geometrical definition of the part. In this paper, the authors present two new numerical approaches concerning the modelling of machining of aeronautical structural parts. The first deals with the use of an immersed volume framework to model the cutting step, improving the robustness and the quality of the resulting mesh compared to the previous version. The second concerns the mechanical modelling of the machining problem. The authors show that, in the framework of rolled aluminium parts, the use of a linear elasticity model is functional in the finite element formulation and promising with regard to the reduction of computation times.
Open Architecture Data System for NASA Langley Combined Loads Test System
NASA Technical Reports Server (NTRS)
Lightfoot, Michael C.; Ambur, Damodar R.
1998-01-01
The Combined Loads Test System (COLTS) is a new structures test complex that is being developed at NASA Langley Research Center (LaRC) to test large curved panels and cylindrical shell structures. These structural components are representative of aircraft fuselage sections of subsonic and supersonic transport aircraft and cryogenic tank structures of reusable launch vehicles. Test structures are subjected to combined loading conditions that simulate realistic flight load conditions. The facility consists of two pressure-box test machines and one combined loads test machine. Each test machine possesses a unique set of requirements for research data acquisition and real-time data display. Given the complex nature of the mechanical and thermal loads to be applied to the various research test articles, each data system has been designed with connectivity attributes that support both data acquisition and data management functions. This paper addresses the research-driven data acquisition requirements for each test machine and demonstrates how an open architecture data system design not only meets those needs but also provides robust data sharing between data systems, including the various control systems which apply spectra of mechanical and thermal loading profiles.
Korkmaz, Selcuk; Zararsiz, Gokmen; Goksuluk, Dincer
2015-01-01
Virtual screening is an important step in the early phase of the drug discovery process. Since there are thousands of compounds, this step should be both fast and effective in order to distinguish drug-like and nondrug-like molecules. Statistical machine learning methods are widely used in drug discovery studies for classification purposes. Here, we aim to develop a new tool, which can classify molecules as drug-like and nondrug-like based on various machine learning methods, including discriminant, tree-based, kernel-based, ensemble and other algorithms. To construct this tool, first, the performances of twenty-three different machine learning algorithms are compared by ten different measures; then, the ten best performing algorithms are selected based on principal component and hierarchical cluster analysis results. Besides classification, this application also has the ability to create heat maps and dendrograms for visual inspection of the molecules through hierarchical cluster analysis. Moreover, users can connect to the PubChem database to download molecular information and to create two-dimensional structures of compounds. This application is freely available through www.biosoft.hacettepe.edu.tr/MLViS/. PMID:25928885
Integration Telegram Bot on E-Complaint Applications in College
NASA Astrophysics Data System (ADS)
Rosid, M. A.; Rachmadany, A.; Multazam, M. T.; Nandiyanto, A. B. D.; Abdullah, A. G.; Widiaty, I.
2018-01-01
The Internet of Things (IoT) has influenced human life, with internet connectivity extending from human-to-human to human-to-machine and machine-to-machine. This research field creates technologies and concepts that allow humans to communicate with machines for specific purposes. This research aimed to integrate a Telegram message-sending service with an e-complaint application at a college. With this integration, users do not need to visit the URL of the e-complaint application; they can simply submit a complaint via Telegram, and the complaint is then forwarded to the e-complaint application. Test results show that the e-complaint integration with the Telegram bot runs according to the design. The Telegram bot makes it convenient for members of the academic community to submit complaints, and it offers the familiar interface people use every day on their smartphones. Thus, with this system, the work unit concerned can make improvements immediately, since complaints are delivered rapidly.
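The forwarding step can be implemented against the public Telegram Bot API, whose sendMessage method accepts a chat id and message text over HTTPS. The sketch below shows that call in Python with the requests library; BOT_TOKEN, CHAT_ID and the complaint text are placeholders, and the paper's own implementation details are not reproduced.

```python
# Minimal sketch of forwarding a complaint to a Telegram chat via the Bot API.
import requests

BOT_TOKEN = "123456:ABC-DEF"          # issued by @BotFather (placeholder)
CHAT_ID = "-1001234567890"            # target group/channel id (placeholder)

def forward_complaint(text: str) -> bool:
    """Send the complaint text to the configured Telegram chat."""
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
    resp = requests.post(url, json={"chat_id": CHAT_ID, "text": text}, timeout=10)
    return resp.ok

if __name__ == "__main__":
    forward_complaint("New e-complaint: broken projector in room B-201")
```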
Installation of the Ignitor Machine at the Caorso Site
NASA Astrophysics Data System (ADS)
Migliori, S.; Pierattini, S.; Bombarda, F.; Faelli, G.; Zucchetti, M.; Coppi, B.
2008-11-01
The actual cost of building a new experiment can be considerably contained if infrastructures are already available at its envisioned site. The facilities of the Caorso site (near Piacenza, Italy), which at present houses a spent nuclear power station, have been analyzed in view of their utilization for the operation of the Ignitor machine. The main feature of the site is its robust connection to the national electrical power grid, which can absorb the disturbance caused by Ignitor discharges with the highest magnetic fields and plasma currents, avoiding the need for rotating flywheel generators. Other assets include a vast building that can be modified to house the machine core and the associated diagnostic systems. A layout of the Ignitor plant, including the tritium laboratory and other service areas, the distribution of the components of the electrical power supply system and of the He gas cooling system, is presented. Relevant safety issues have been analyzed, based on an in-depth activation analysis of the machine components carried out by means of the FISPAC code. Waste management and environmental impact issues, including assessments of the risk to the population, have also been addressed.
Accurate Identification of MCI Patients via Enriched White-Matter Connectivity Network
NASA Astrophysics Data System (ADS)
Wee, Chong-Yaw; Yap, Pew-Thian; Brownyke, Jeffery N.; Potter, Guy G.; Steffens, David C.; Welsh-Bohmer, Kathleen; Wang, Lihong; Shen, Dinggang
Mild cognitive impairment (MCI), often a prodromal phase of Alzheimer's disease (AD), is frequently considered to be a good target for early diagnosis and therapeutic interventions of AD. The recent emergence of reliable network characterization techniques has made understanding neurological disorders at a whole-brain connectivity level possible. Accordingly, we propose a network-based multivariate classification algorithm, using a collection of measures derived from white-matter (WM) connectivity networks, to accurately identify MCI patients from normal controls. An enriched description of WM connections, utilizing six physiological parameters, i.e., fiber penetration count, fractional anisotropy (FA), mean diffusivity (MD), and principal diffusivities (λ1, λ2, λ3), results in six connectivity networks for each subject to account for the connection topology and the biophysical properties of the connections. Upon parcellating the brain into 90 regions-of-interest (ROIs), the average statistics of each ROI in relation to the remaining ROIs are extracted as features for classification. These features are then sieved to select the most discriminant subset of features for building an MCI classifier via support vector machines (SVMs). Cross-validation results indicate better diagnostic power of the proposed enriched WM connection description than simple description with any single physiological parameter.
NASA Astrophysics Data System (ADS)
Hong, Haibo; Yin, Yuehong; Chen, Xing
2016-11-01
Despite the rapid development of computer science and information technology, an efficient human-machine integrated enterprise information system for designing complex mechatronic products is still not fully accomplished, partly because of the inharmonious communication among collaborators. Therefore, one challenge in human-machine integration is how to establish an appropriate knowledge management (KM) model to support integration and sharing of heterogeneous product knowledge. Aiming at the diversity of design knowledge, this article proposes an ontology-based model to reach an unambiguous and normative representation of knowledge. First, an ontology-based human-machine integrated design framework is described, then corresponding ontologies and sub-ontologies are established according to different purposes and scopes. Second, a similarity calculation-based ontology integration method composed of ontology mapping and ontology merging is introduced. The ontology searching-based knowledge sharing method is then developed. Finally, a case of human-machine integrated design of a large ultra-precision grinding machine is used to demonstrate the effectiveness of the method.
Neural networks with fuzzy Petri nets for modeling a machining process
NASA Astrophysics Data System (ADS)
Hanna, Moheb M.
1998-03-01
The paper presents an intelligent architecture based on a feedforward neural network with fuzzy Petri nets for modeling product quality in a CNC machining center. It discusses how the proposed architecture can be used for modeling, monitoring and controlling a product quality specification such as surface roughness. The surface roughness represents the output quality specification manufactured by a CNC machining center as a result of a milling process. The neural network approach employs the selected input parameters defined by the machine operator via the CNC code. The fuzzy Petri nets approach utilizes the exact input milling parameters, such as spindle speed, feed rate, tool diameter and coolant (off/on), which can be obtained via the machine or a sensor system. An aim of the proposed architecture is to model the demanded quality of surface roughness as high, medium or low.
10 CFR 431.292 - Definitions concerning refrigerated bottled or canned beverage vending machines.
Code of Federal Regulations, 2010 CFR
2010-01-01
... beverage vending machines. 431.292 Section 431.292 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY... Vending Machines § 431.292 Definitions concerning refrigerated bottled or canned beverage vending machines. Basic model means, with respect to refrigerated bottled or canned beverage vending machines, all units...
NASA Astrophysics Data System (ADS)
Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan
2018-02-01
Recently there has been continuously increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles which distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different forms of neural plasticity rules and in understanding how the structures and dynamics of neural networks shape the computational performance. In this paper, we propose a novel approach to develop models of the LSM with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that the LSM with STDP+IP performs better than an LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to the better reflected competition among different neurons in the developed SNN model, as well as the more effectively encoded and processed relevant dynamic information with its learning and self-organizing mechanism. This result gives insights into the optimization of computational models of spiking neural networks with neural plasticity.
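For orientation, a pair-based STDP weight update of the kind referred to above can be written in a few lines; the sketch below uses generic amplitudes and time constants and a simple clipping bound, not the paper's LSM parameters or its intrinsic plasticity rule.

```python
# Pair-based STDP sketch: potentiate when the presynaptic spike precedes the
# postsynaptic spike, depress otherwise. Generic illustrative parameters.
import numpy as np

A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20.0, 20.0      # ms

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:                       # pre before post -> potentiation
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)   # post before pre -> depression

w = 0.5
for t_pre, t_post in [(10, 15), (40, 38), (70, 72)]:
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)
    print(f"pair ({t_pre}, {t_post}) ms -> w = {w:.4f}")
```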
26 CFR 48.4061(b)-3 - Rebuilt, reconditioned, or repaired parts or accessories.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., Tires, Tubes, Tread Rubber, and Taxable Fuel Automotive and Related Items § 48.4061(b)-3 Rebuilt... batteries, (2) rebabbited or machined connecting rods, (3) reassembled clutches after operations such as the... reassembling (with any necessary replacements of worn parts) of automobile parts or accessories, such as fuel...
Variable-Displacement Hydraulic Drive Unit
NASA Technical Reports Server (NTRS)
Lang, D. J.; Linton, D. J.; Markunas, A.
1986-01-01
Hydraulic power controlled through multiple feedback loops. In hydraulic drive unit, power closely matched to demand, thereby saving energy. Hydraulic flow to and from motor adjusted by motor-control valve connected to wobbler. Wobbler angle determines motor-control-valve position, which in turn determines motor displacement. Concept applicable to machine tools, aircraft controls, and marine controls.
29 CFR 1910.254 - Arc welding and cutting.
Code of Federal Regulations, 2013 CFR
2013-07-01
... rated load with rated temperature rises where the temperature of the cooling air does not exceed 40 °C... work; magnetic work clamps shall be freed from adherent metal particles of spatter on contact surfaces... given to safety ground connections of portable machines. (4) Leaks. There shall be no leaks of cooling...
29 CFR 1910.254 - Arc welding and cutting.
Code of Federal Regulations, 2012 CFR
2012-07-01
... rated load with rated temperature rises where the temperature of the cooling air does not exceed 40 °C... work; magnetic work clamps shall be freed from adherent metal particles of spatter on contact surfaces... given to safety ground connections of portable machines. (4) Leaks. There shall be no leaks of cooling...
29 CFR 1910.254 - Arc welding and cutting.
Code of Federal Regulations, 2014 CFR
2014-07-01
... rated load with rated temperature rises where the temperature of the cooling air does not exceed 40 °C... work; magnetic work clamps shall be freed from adherent metal particles of spatter on contact surfaces... given to safety ground connections of portable machines. (4) Leaks. There shall be no leaks of cooling...
The IBM PC as an Online Search Machine. Part 5: Searching through Crosstalk.
ERIC Educational Resources Information Center
Kolner, Stuart J.
1985-01-01
This last of a five-part series on using the IBM personal computer for online searching highlights a brief review, search process, making the connection, switching between screens and modes, online transaction, capture buffer controls, coping with options, function keys, script files, processing downloaded information, note to TELEX users, and…
Recursive feature elimination for biomarker discovery in resting-state functional connectivity.
Ravishankar, Hariharan; Madhavan, Radhika; Mullick, Rakesh; Shetty, Teena; Marinelli, Luca; Joel, Suresh E
2016-08-01
Biomarker discovery involves finding correlations between features and clinical symptoms to aid clinical decisions. This task is especially difficult in resting state functional magnetic resonance imaging (rs-fMRI) data due to low SNR, the high dimensionality of the images, inter-subject and intra-subject variability and small numbers of subjects compared to the number of derived features. Traditional univariate analysis suffers from the problem of multiple comparisons. Here, we adopt an alternative data-driven method for identifying population differences in functional connectivity. We propose a machine-learning approach to down-select functional connectivity features associated with symptom severity in mild traumatic brain injury (mTBI). Using this approach, we identified functional regions with altered connectivity in mTBI, including the executive control, visual and precuneus networks. We compared functional connections at multiple resolutions to determine which scale would be more sensitive to changes related to patient recovery. These modular network-level features can be used as diagnostic tools for predicting disease severity and recovery profiles.
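A minimal sketch of recursive feature elimination on connectivity-style features, assuming scikit-learn's RFE with a linear SVM as the ranking estimator; the subjects, labels and feature counts below are random stand-ins for the rs-fMRI data.

```python
# Recursive feature elimination sketch for down-selecting connectivity features.
# Synthetic stand-in data; not the study's rs-fMRI measurements.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))            # 60 subjects x 200 connectivity features
y = rng.integers(0, 2, size=60)           # e.g. symptom-severity group labels

selector = RFE(SVC(kernel="linear"), n_features_to_select=10, step=0.1)
selector.fit(X, y)
print("selected feature indices:", np.flatnonzero(selector.support_))
```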
Electrooptic polymer voltage sensor and method of manufacture thereof
NASA Technical Reports Server (NTRS)
Gottsche, Allan (Inventor); Perry, Joseph W. (Inventor)
1993-01-01
An optical voltage sensor utilizing an electrooptic polymer is disclosed for application to electric power distribution systems. The sensor, which can be manufactured at low cost in accordance with a disclosed method, measures voltages across a greater range than prior art sensors. The electrooptic polymer, which replaces the optical crystal used in prior art sensors, is sandwiched directly between two high voltage electrodes. Voltage is measured by fiber optical means, and no voltage division is required. The sample of electrooptic polymer is fabricated in a special mold and later mounted in a sensor housing. Alternatively, mold and sensor housing may be identical. The sensor housing is made out of a machinable polymeric material and is equipped with two opposing optical windows. The optical windows are mounted in the bottom of machined holes in the wall of the mold. These holes provide for mounting of the polarizing optical components and for mounting of the fiber optic connectors. One connecting fiber is equipped with a light emitting diode as a light source. Another connecting fiber is equipped with a photodiode as a detector.
[Quality control of laser imagers].
Winkelbauer, F; Ammann, M; Gerstner, N; Imhof, H
1992-11-01
Multiformat imagers based on laser systems are used for documentation in an increasing number of investigations. The specific problems of quality control are explained, and the constancy of film processing is investigated in imager systems of different configuration, with (Machine 1: 3M Laser Imager Plus M952 with connected 3M film processor, 3M IRB film, 3M-XPM X-ray chemical mixer, 3M developer and fixer) or without (Machine 2: 3M Laser Imager Plus M952 with separate DuPont Cronex film processor, Kodak IR film, Kodak automixer, Kodak developer and fixer) a directly connected film processing unit. In our checks based on DIN 6868 and ONORM S 5240, we found constant film processing in the equipment with a directly attached film processing unit, in accordance with DIN and ONORM. The checking of film-processing constancy demanded by DIN 6868 could therefore be performed at longer intervals for such equipment. Systems with conventional darkroom processing show markedly increased fluctuation by comparison, and hence the demanded daily control is essential to guarantee appropriate reaction and constant quality of documentation.
High-end Home Firewalls CIAC-2326
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orvis, W
Networking in most large organizations is protected with corporate firewalls and managed by seasoned security professionals. Attempts to break into systems at these organizations are extremely difficult to impossible for an external intruder. With the growth in networking and the options that it makes possible, new avenues of intrusion are opening up. Corporate machines exist that are completely unprotected against intrusions, that are not managed by a security professional, and that are regularly connected to the company network. People have the option of and are encouraged to work at home using a home computer linked to the company network. Managers have home computers linked to internal machines so they can keep an eye on internal processes while not physically at work. Researchers do research or writing at home and connect to the company network to download information and upload results. In most cases, these home computers are completely unprotected, except for any protection that the home user might have installed. Unfortunately, most home users are not security professionals and home computers are often used by other family members, such as children downloading music, who are completely unconcerned about security precautions. When these computers are connected to the company network, they can easily introduce viruses, worms, and other malicious code or open a channel behind the company firewall for an external intruder.
Wearable health monitoring using capacitive voltage-mode Human Body Communication.
Maity, Shovan; Das, Debayan; Sen, Shreyas
2017-07-01
Rapid miniaturization and cost reduction of computing, along with the availability of wearable and implantable physiological sensors have led to the growth of human Body Area Network (BAN) formed by a network of such sensors and computing devices. One promising application of such a network is wearable health monitoring where the collected data from the sensors would be transmitted and analyzed to assess the health of a person. Typically, the devices in a BAN are connected through wireless (WBAN), which suffers from energy inefficiency due to the high-energy consumption of wireless transmission. Human Body Communication (HBC) uses the relatively low loss human body as the communication medium to connect these devices, promising order(s) of magnitude better energy-efficiency and built-in security compared to WBAN. In this paper, we demonstrate a health monitoring device and system built using Commercial-Off-The-Shelf (COTS) sensors and components, that can collect data from physiological sensors and transmit it through a) intra-body HBC to another device (hub) worn on the body or b) upload health data through HBC-based human-machine interaction to an HBC capable machine. The system design constraints and signal transfer characteristics for the implemented HBC-based wearable health monitoring system are measured and analyzed, showing reliable connectivity with >8× power savings compared to Bluetooth low-energy (BTLE).
Kim, Dong Wook; Kim, Hwiyoung; Nam, Woong; Kim, Hyung Jun; Cha, In-Ho
2018-04-23
The aim of this study was to build and validate five types of machine learning models that can predict the occurrence of bisphosphonate-related osteonecrosis of the jaw (BRONJ) associated with dental extraction in patients taking bisphosphonates for the management of osteoporosis. A retrospective review of the medical records was conducted to obtain cases and controls for the study. A total of 125 patients, consisting of 41 cases and 84 controls, were selected for the study. Five machine learning prediction algorithms, including a multivariable logistic regression model, decision tree, support vector machine, artificial neural network, and random forest, were implemented. The outputs of these models were compared with each other and also with conventional methods, such as serum CTX level. The area under the receiver operating characteristic (ROC) curve (AUC) was used to compare the results. The performance of the machine learning models was significantly superior to conventional statistical methods and single predictors. The random forest model yielded the best performance (AUC = 0.973), followed by artificial neural network (AUC = 0.915), support vector machine (AUC = 0.882), logistic regression (AUC = 0.844), decision tree (AUC = 0.821), drug holiday alone (AUC = 0.810), and CTX level alone (AUC = 0.630). Machine learning methods showed superior performance in predicting BRONJ associated with dental extraction compared to conventional statistical methods using drug holiday and serum CTX level. Machine learning can thus be applied in a wide range of clinical studies. Copyright © 2017. Published by Elsevier Inc.
Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.
Burbank, Kendra S
2015-12-01
The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.