Machine learning for epigenetics and future medical applications.
Holder, Lawrence B; Haque, M Muksitul; Skinner, Michael K
2017-07-03
Understanding epigenetic processes holds immense promise for medical applications. Advances in Machine Learning (ML) are critical to realize this promise. Previous studies used epigenetic data sets associated with the germline transmission of epigenetic transgenerational inheritance of disease and novel ML approaches to predict genome-wide locations of critical epimutations. A combination of Active Learning (ACL) and Imbalanced Class Learning (ICL) was used to address past problems with ML to develop a more efficient feature selection process and address the imbalance problem in all genomic data sets. The power of this novel ML approach and our ability to predict epigenetic phenomena and associated disease is suggested. The current approach requires extensive computation of features over the genome. A promising new approach is to introduce Deep Learning (DL) for the generation and simultaneous computation of novel genomic features tuned to the classification task. This approach can be used with any genomic or biological data set applied to medicine. The application of molecular epigenetic data in advanced machine learning analysis to medicine is the focus of this review.
Machine learning for epigenetics and future medical applications
Holder, Lawrence B.; Haque, M. Muksitul; Skinner, Michael K.
2017-01-01
ABSTRACT Understanding epigenetic processes holds immense promise for medical applications. Advances in Machine Learning (ML) are critical to realize this promise. Previous studies used epigenetic data sets associated with the germline transmission of epigenetic transgenerational inheritance of disease and novel ML approaches to predict genome-wide locations of critical epimutations. A combination of Active Learning (ACL) and Imbalanced Class Learning (ICL) was used to address past problems with ML to develop a more efficient feature selection process and address the imbalance problem in all genomic data sets. The power of this novel ML approach and our ability to predict epigenetic phenomena and associated disease is suggested. The current approach requires extensive computation of features over the genome. A promising new approach is to introduce Deep Learning (DL) for the generation and simultaneous computation of novel genomic features tuned to the classification task. This approach can be used with any genomic or biological data set applied to medicine. The application of molecular epigenetic data in advanced machine learning analysis to medicine is the focus of this review. PMID:28524769
Correct machine learning on protein sequences: a peer-reviewing perspective.
Walsh, Ian; Pollastri, Gianluca; Tosatto, Silvio C E
2016-09-01
Machine learning methods are becoming increasingly popular to predict protein features from sequences. Machine learning in bioinformatics can be powerful but carries also the risk of introducing unexpected biases, which may lead to an overestimation of the performance. This article espouses a set of guidelines to allow both peer reviewers and authors to avoid common machine learning pitfalls. Understanding biology is necessary to produce useful data sets, which have to be large and diverse. Separating the training and test process is imperative to avoid over-selling method performance, which is also dependent on several hidden parameters. A novel predictor has always to be compared with several existing methods, including simple baseline strategies. Using the presented guidelines will help nonspecialists to appreciate the critical issues in machine learning. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Tresser, Shachar; Dolev, Amit; Bucher, Izhak
2018-02-01
High-speed machinery is often designed to pass several "critical speeds", where vibration levels can be very high. To reduce vibrations, rotors usually undergo a mass balancing process, where the machine is rotated at its full speed range, during which the dynamic response near critical speeds can be measured. High sensitivity, which is required for a successful balancing process, is achieved near the critical speeds, where a single deflection mode shape becomes dominant, and is excited by the projection of the imbalance on it. The requirement to rotate the machine at high speeds is an obstacle in many cases, where it is impossible to perform measurements at high speeds, due to harsh conditions such as high temperatures and inaccessibility (e.g., jet engines). This paper proposes a novel balancing method of flexible rotors, which does not require the machine to be rotated at high speeds. With this method, the rotor is spun at low speeds, while subjecting it to a set of externally controlled forces. The external forces comprise a set of tuned, response dependent, parametric excitations, and nonlinear stiffness terms. The parametric excitation can isolate any desired mode, while keeping the response directly linked to the imbalance. A software controlled nonlinear stiffness term limits the response, hence preventing the rotor to become unstable. These forces warrant sufficient sensitivity required to detect the projection of the imbalance on any desired mode without rotating the machine at high speeds. Analytical, numerical and experimental results are shown to validate and demonstrate the method.
The phaco machine: analysing new technology.
Fishkind, William J
2013-01-01
The phaco machine is frequently overlooked as the crucial surgical instrument it is. Understanding how to set parameters is initiated by understanding fundamental concepts of machine function. This study analyses the critical concepts of partial occlusion phaco, occlusion phaco and pump technology. In addition, phaco energy categories as well as variations of phaco energy production are explored. Contemporary power modulations and pump controls allow for the enhancement of partial occlusion phacoemulsification. These significant changes in the anterior chamber dynamics produce a balanced environment for phaco; less complications; and improved patient outcomes.
Predicting Mouse Liver Microsomal Stability with “Pruned” Machine Learning Models and Public Data
Perryman, Alexander L.; Stratton, Thomas P.; Ekins, Sean; Freundlich, Joel S.
2015-01-01
Purpose Mouse efficacy studies are a critical hurdle to advance translational research of potential therapeutic compounds for many diseases. Although mouse liver microsomal (MLM) stability studies are not a perfect surrogate for in vivo studies of metabolic clearance, they are the initial model system used to assess metabolic stability. Consequently, we explored the development of machine learning models that can enhance the probability of identifying compounds possessing MLM stability. Methods Published assays on MLM half-life values were identified in PubChem, reformatted, and curated to create a training set with 894 unique small molecules. These data were used to construct machine learning models assessed with internal cross-validation, external tests with a published set of antitubercular compounds, and independent validation with an additional diverse set of 571 compounds (PubChem data on percent metabolism). Results “Pruning” out the moderately unstable/moderately stable compounds from the training set produced models with superior predictive power. Bayesian models displayed the best predictive power for identifying compounds with a half-life ≥1 hour. Conclusions Our results suggest the pruning strategy may be of general benefit to improve test set enrichment and provide machine learning models with enhanced predictive value for the MLM stability of small organic molecules. This study represents the most exhaustive study to date of using machine learning approaches with MLM data from public sources. PMID:26415647
Predicting Mouse Liver Microsomal Stability with "Pruned" Machine Learning Models and Public Data.
Perryman, Alexander L; Stratton, Thomas P; Ekins, Sean; Freundlich, Joel S
2016-02-01
Mouse efficacy studies are a critical hurdle to advance translational research of potential therapeutic compounds for many diseases. Although mouse liver microsomal (MLM) stability studies are not a perfect surrogate for in vivo studies of metabolic clearance, they are the initial model system used to assess metabolic stability. Consequently, we explored the development of machine learning models that can enhance the probability of identifying compounds possessing MLM stability. Published assays on MLM half-life values were identified in PubChem, reformatted, and curated to create a training set with 894 unique small molecules. These data were used to construct machine learning models assessed with internal cross-validation, external tests with a published set of antitubercular compounds, and independent validation with an additional diverse set of 571 compounds (PubChem data on percent metabolism). "Pruning" out the moderately unstable / moderately stable compounds from the training set produced models with superior predictive power. Bayesian models displayed the best predictive power for identifying compounds with a half-life ≥1 h. Our results suggest the pruning strategy may be of general benefit to improve test set enrichment and provide machine learning models with enhanced predictive value for the MLM stability of small organic molecules. This study represents the most exhaustive study to date of using machine learning approaches with MLM data from public sources.
Industrial machine systems risk assessment: a critical review of concepts and methods.
Etherton, John R
2007-02-01
Reducing the risk of work-related death and injury to machine operators and maintenance personnel poses a continuing occupational safety challenge. The risk of injury from machinery in U.S. workplaces is high. Between 1992 and 2001, there were, on average, 520 fatalities per year involving machines and, on average, 3.8 cases per 10,000 workers of nonfatal caught-in-running-machine injuries involving lost workdays. A U.S. task group recently developed a technical reference guideline, ANSI B11 TR3, "A Guide to Estimate, Evaluate, & Reduce Risks Associated with Machine Tools," that is intended to bring machine tool risk assessment practice in the United States up to or above the level now required by the international standard, ISO 14121. The ANSI guideline emphasizes identifying tasks and hazards not previously considered, particularly those associated with maintenance; and it further emphasizes teamwork among line workers, engineers, and safety professionals. The value of this critical review of concepts and methods resides in (1) its linking current risk theory to machine system risk assessment and (2) its exploration of how various risk estimation tools translate into risk-informed decisions on industrial machine system design and use. The review was undertaken to set the stage for a field evaluation study on machine risk assessment among users of the ANSI B11 TR3 method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roger Lew; Ronald L. Boring; Thomas A. Ulrich
Operators of critical processes, such as nuclear power production, must contend with highly complex systems, procedures, and regulations. Developing human-machine interfaces (HMIs) that better support operators is a high priority for ensuring the safe and reliable operation of critical processes. Human factors engineering (HFE) provides a rich and mature set of tools for evaluating the performance of HMIs, but the set of tools for developing and designing HMIs is still in its infancy. Here we propose that Microsoft Windows Presentation Foundation (WPF) is well suited for many roles in the research and development of HMIs for process control.
Zhang, Jianhua; Yin, Zhong; Wang, Rubin
2017-01-01
This paper developed a cognitive task-load (CTL) classification algorithm and allocation strategy to sustain the optimal operator CTL levels over time in safety-critical human-machine integrated systems. An adaptive human-machine system is designed based on a non-linear dynamic CTL classifier, which maps a set of electroencephalogram (EEG) and electrocardiogram (ECG) related features to a few CTL classes. The least-squares support vector machine (LSSVM) is used as dynamic pattern classifier. A series of electrophysiological and performance data acquisition experiments were performed on seven volunteer participants under a simulated process control task environment. The participant-specific dynamic LSSVM model is constructed to classify the instantaneous CTL into five classes at each time instant. The initial feature set, comprising 56 EEG and ECG related features, is reduced to a set of 12 salient features (including 11 EEG-related features) by using the locality preserving projection (LPP) technique. An overall correct classification rate of about 80% is achieved for the 5-class CTL classification problem. Then the predicted CTL is used to adaptively allocate the number of process control tasks between operator and computer-based controller. Simulation results showed that the overall performance of the human-machine system can be improved by using the adaptive automation strategy proposed.
Machine Learning Approaches for Clinical Psychology and Psychiatry.
Dwyer, Dominic B; Falkai, Peter; Koutsouleris, Nikolaos
2018-05-07
Machine learning approaches for clinical psychology and psychiatry explicitly focus on learning statistical functions from multidimensional data sets to make generalizable predictions about individuals. The goal of this review is to provide an accessible understanding of why this approach is important for future practice given its potential to augment decisions associated with the diagnosis, prognosis, and treatment of people suffering from mental illness using clinical and biological data. To this end, the limitations of current statistical paradigms in mental health research are critiqued, and an introduction is provided to critical machine learning methods used in clinical studies. A selective literature review is then presented aiming to reinforce the usefulness of machine learning methods and provide evidence of their potential. In the context of promising initial results, the current limitations of machine learning approaches are addressed, and considerations for future clinical translation are outlined.
Liu, Nehemiah T; Holcomb, John B; Wade, Charles E; Batchinsky, Andriy I; Cancio, Leopoldo C; Darrah, Mark I; Salinas, José
2014-02-01
Accurate and effective diagnosis of actual injury severity can be problematic in trauma patients. Inherent physiologic compensatory mechanisms may prevent accurate diagnosis and mask true severity in many circumstances. The objective of this project was the development and validation of a multiparameter machine learning algorithm and system capable of predicting the need for life-saving interventions (LSIs) in trauma patients. Statistics based on means, slopes, and maxima of various vital sign measurements corresponding to 79 trauma patient records generated over 110,000 feature sets, which were used to develop, train, and implement the system. Comparisons among several machine learning models proved that a multilayer perceptron would best implement the algorithm in a hybrid system consisting of a machine learning component and basic detection rules. Additionally, 295,994 feature sets from 82 h of trauma patient data showed that the system can obtain 89.8 % accuracy within 5 min of recorded LSIs. Use of machine learning technologies combined with basic detection rules provides a potential approach for accurately assessing the need for LSIs in trauma patients. The performance of this system demonstrates that machine learning technology can be implemented in a real-time fashion and potentially used in a critical care environment.
Zhang, Ying; Wang, Jun; Hao, Guan
2018-01-08
With the development of autonomous unmanned intelligent systems, such as the unmanned boats, unmanned planes and autonomous underwater vehicles, studies on Wireless Sensor-Actor Networks (WSANs) have attracted more attention. Network connectivity algorithms play an important role in data exchange, collaborative detection and information fusion. Due to the harsh application environment, abnormal nodes often appear, and the network connectivity will be prone to be lost. Network self-healing mechanisms have become critical for these systems. In order to decrease the movement overhead of the sensor-actor nodes, an autonomous connectivity restoration algorithm based on finite state machine is proposed. The idea is to identify whether a node is a critical node by using a finite state machine, and update the connected dominating set in a timely way. If an abnormal node is a critical node, the nearest non-critical node will be relocated to replace the abnormal node. In the case of multiple node abnormality, a regional network restoration algorithm is introduced. It is designed to reduce the overhead of node movements while restoration happens. Simulation results indicate the proposed algorithm has better performance on the total moving distance and the number of total relocated nodes compared with some other representative restoration algorithms.
Zhang, Ying; Wang, Jun; Hao, Guan
2018-01-01
With the development of autonomous unmanned intelligent systems, such as the unmanned boats, unmanned planes and autonomous underwater vehicles, studies on Wireless Sensor-Actor Networks (WSANs) have attracted more attention. Network connectivity algorithms play an important role in data exchange, collaborative detection and information fusion. Due to the harsh application environment, abnormal nodes often appear, and the network connectivity will be prone to be lost. Network self-healing mechanisms have become critical for these systems. In order to decrease the movement overhead of the sensor-actor nodes, an autonomous connectivity restoration algorithm based on finite state machine is proposed. The idea is to identify whether a node is a critical node by using a finite state machine, and update the connected dominating set in a timely way. If an abnormal node is a critical node, the nearest non-critical node will be relocated to replace the abnormal node. In the case of multiple node abnormality, a regional network restoration algorithm is introduced. It is designed to reduce the overhead of node movements while restoration happens. Simulation results indicate the proposed algorithm has better performance on the total moving distance and the number of total relocated nodes compared with some other representative restoration algorithms. PMID:29316702
Training Moldmakers for Industry.
ERIC Educational Resources Information Center
Allyn, Edward P.
1978-01-01
In 1974, in response to the critical shortage of trained moldmakers, Berkshire Community College (Massachusetts) developed the first two-year college plastic moldmaking and design associate degree curriculum in the United States. The program focuses on the problems encountered in interpreting blueprints and machine set-up instructions in industry.…
Zooniverse: Combining Human and Machine Classifiers for the Big Survey Era
NASA Astrophysics Data System (ADS)
Fortson, Lucy; Wright, Darryl; Beck, Melanie; Lintott, Chris; Scarlata, Claudia; Dickinson, Hugh; Trouille, Laura; Willi, Marco; Laraia, Michael; Boyer, Amy; Veldhuis, Marten; Zooniverse
2018-01-01
Many analyses of astronomical data sets, ranging from morphological classification of galaxies to identification of supernova candidates, have relied on humans to classify data into distinct categories. Crowdsourced galaxy classifications via the Galaxy Zoo project provided a solution that scaled visual classification for extant surveys by harnessing the combined power of thousands of volunteers. However, the much larger data sets anticipated from upcoming surveys will require a different approach. Automated classifiers using supervised machine learning have improved considerably over the past decade but their increasing sophistication comes at the expense of needing ever more training data. Crowdsourced classification by human volunteers is a critical technique for obtaining these training data. But several improvements can be made on this zeroth order solution. Efficiency gains can be achieved by implementing a “cascade filtering” approach whereby the task structure is reduced to a set of binary questions that are more suited to simpler machines while demanding lower cognitive loads for humans.Intelligent subject retirement based on quantitative metrics of volunteer skill and subject label reliability also leads to dramatic improvements in efficiency. We note that human and machine classifiers may retire subjects differently leading to trade-offs in performance space. Drawing on work with several Zooniverse projects including Galaxy Zoo and Supernova Hunter, we will present recent findings from experiments that combine cohorts of human and machine classifiers. We show that the most efficient system results when appropriate subsets of the data are intelligently assigned to each group according to their particular capabilities.With sufficient online training, simple machines can quickly classify “easy” subjects, leaving more difficult (and discovery-oriented) tasks for volunteers. We also find humans achieve higher classification purity while samples produced by machines are typically more complete. These findings set the stage for further investigations, with the ultimate goal of efficiently and accurately labeling the wide range of data classes that will arise from the planned large astronomical surveys.
Automatic MeSH term assignment and quality assessment.
Kim, W.; Aronson, A. R.; Wilbur, W. J.
2001-01-01
For computational purposes documents or other objects are most often represented by a collection of individual attributes that may be strings or numbers. Such attributes are often called features and success in solving a given problem can depend critically on the nature of the features selected to represent documents. Feature selection has received considerable attention in the machine learning literature. In the area of document retrieval we refer to feature selection as indexing. Indexing has not traditionally been evaluated by the same methods used in machine learning feature selection. Here we show how indexing quality may be evaluated in a machine learning setting and apply this methodology to results of the Indexing Initiative at the National Library of Medicine. PMID:11825203
Beam Loss Monitoring for LHC Machine Protection
NASA Astrophysics Data System (ADS)
Holzer, Eva Barbara; Dehning, Bernd; Effnger, Ewald; Emery, Jonathan; Grishin, Viatcheslav; Hajdu, Csaba; Jackson, Stephen; Kurfuerst, Christoph; Marsili, Aurelien; Misiowiec, Marek; Nagel, Markus; Busto, Eduardo Nebot Del; Nordt, Annika; Roderick, Chris; Sapinski, Mariusz; Zamantzas, Christos
The energy stored in the nominal LHC beams is two times 362 MJ, 100 times the energy of the Tevatron. As little as 1 mJ/cm3 deposited energy quenches a magnet at 7 TeV and 1 J/cm3 causes magnet damage. The beam dumps are the only places to safely dispose of this beam. One of the key systems for machine protection is the beam loss monitoring (BLM) system. About 3600 ionization chambers are installed at likely or critical loss locations around the LHC ring. The losses are integrated in 12 time intervals ranging from 40 μs to 84 s and compared to threshold values defined in 32 energy ranges. A beam abort is requested when potentially dangerous losses are detected or when any of the numerous internal system validation tests fails. In addition, loss data are used for machine set-up and operational verifications. The collimation system for example uses the loss data for set-up and regular performance verification. Commissioning and operational experience of the BLM are presented: The machine protection functionality of the BLM system has been fully reliable; the LHC availability has not been compromised by false beam aborts.
Big Data: Next-Generation Machines for Big Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hack, James J.; Papka, Michael E.
Addressing the scientific grand challenges identified by the US Department of Energy’s (DOE’s) Office of Science’s programs alone demands a total leadership-class computing capability of 150 to 400 Pflops by the end of this decade. The successors to three of the DOE’s most powerful leadership-class machines are set to arrive in 2017 and 2018—the products of the Collaboration Oak Ridge Argonne Livermore (CORAL) initiative, a national laboratory–industry design/build approach to engineering nextgeneration petascale computers for grand challenge science. These mission-critical machines will enable discoveries in key scientific fields such as energy, biotechnology, nanotechnology, materials science, and high-performance computing, and servemore » as a milestone on the path to deploying exascale computing capabilities.« less
Prins, Noeline W.; Sanchez, Justin C.; Prasad, Abhishek
2014-01-01
Brain-Machine Interfaces (BMIs) can be used to restore function in people living with paralysis. Current BMIs require extensive calibration that increase the set-up times and external inputs for decoder training that may be difficult to produce in paralyzed individuals. Both these factors have presented challenges in transitioning the technology from research environments to activities of daily living (ADL). For BMIs to be seamlessly used in ADL, these issues should be handled with minimal external input thus reducing the need for a technician/caregiver to calibrate the system. Reinforcement Learning (RL) based BMIs are a good tool to be used when there is no external training signal and can provide an adaptive modality to train BMI decoders. However, RL based BMIs are sensitive to the feedback provided to adapt the BMI. In actor-critic BMIs, this feedback is provided by the critic and the overall system performance is limited by the critic accuracy. In this work, we developed an adaptive BMI that could handle inaccuracies in the critic feedback in an effort to produce more accurate RL based BMIs. We developed a confidence measure, which indicated how appropriate the feedback is for updating the decoding parameters of the actor. The results show that with the new update formulation, the critic accuracy is no longer a limiting factor for the overall performance. We tested and validated the system onthree different data sets: synthetic data generated by an Izhikevich neural spiking model, synthetic data with a Gaussian noise distribution, and data collected from a non-human primate engaged in a reaching task. All results indicated that the system with the critic confidence built in always outperformed the system without the critic confidence. Results of this study suggest the potential application of the technique in developing an autonomous BMI that does not need an external signal for training or extensive calibration. PMID:24904257
Cheng, Tiejun; Li, Qingliang; Wang, Yanli; Bryant, Stephen H
2011-02-28
Aqueous solubility is recognized as a critical parameter in both the early- and late-stage drug discovery. Therefore, in silico modeling of solubility has attracted extensive interests in recent years. Most previous studies have been limited in using relatively small data sets with limited diversity, which in turn limits the predictability of derived models. In this work, we present a support vector machines model for the binary classification of solubility by taking advantage of the largest known public data set that contains over 46 000 compounds with experimental solubility. Our model was optimized in combination with a reduction and recombination feature selection strategy. The best model demonstrated robust performance in both cross-validation and prediction of two independent test sets, indicating it could be a practical tool to select soluble compounds for screening, purchasing, and synthesizing. Moreover, our work may be used for comparative evaluation of solubility classification studies ascribe to the use of completely public resources.
Intrusion detection using rough set classification.
Zhang, Lian-hua; Zhang, Guan-hua; Zhang, Jie; Bai, Ying-cai
2004-09-01
Recently machine learning-based intrusion detection approaches have been subjected to extensive researches because they can detect both misuse and anomaly. In this paper, rough set classification (RSC), a modern learning algorithm, is used to rank the features extracted for detecting intrusions and generate intrusion detection models. Feature ranking is a very critical step when building the model. RSC performs feature ranking before generating rules, and converts the feature ranking to minimal hitting set problem addressed by using genetic algorithm (GA). This is done in classical approaches using Support Vector Machine (SVM) by executing many iterations, each of which removes one useless feature. Compared with those methods, our method can avoid many iterations. In addition, a hybrid genetic algorithm is proposed to increase the convergence speed and decrease the training time of RSC. The models generated by RSC take the form of "IF-THEN" rules, which have the advantage of explication. Tests and comparison of RSC with SVM on DARPA benchmark data showed that for Probe and DoS attacks both RSC and SVM yielded highly accurate results (greater than 99% accuracy on testing set).
NASA Technical Reports Server (NTRS)
Prater, T.; Tilson, W.; Jones, Z.
2015-01-01
The absence of an economy of scale in spaceflight hardware makes additive manufacturing an immensely attractive option for propulsion components. As additive manufacturing techniques are increasingly adopted by government and industry to produce propulsion hardware in human-rated systems, significant development efforts are needed to establish these methods as reliable alternatives to conventional subtractive manufacturing. One of the critical challenges facing powder bed fusion techniques in this application is variability between machines used to perform builds. Even with implementation of robust process controls, it is possible for two machines operating at identical parameters with equivalent base materials to produce specimens with slightly different material properties. The machine variability study presented here evaluates 60 specimens of identical geometry built using the same parameters. 30 samples were produced on machine 1 (M1) and the other 30 samples were built on machine 2 (M2). Each of the 30-sample sets were further subdivided into three subsets (with 10 specimens in each subset) to assess the effect of progressive heat treatment on machine variability. The three categories for post-processing were: stress relief, stress relief followed by hot isostatic press (HIP), and stress relief followed by HIP followed by heat treatment per AMS 5664. Each specimen (a round, smooth tensile) was mechanically tested per ASTM E8. Two formal statistical techniques, hypothesis testing for equivalency of means and one-way analysis of variance (ANOVA), were applied to characterize the impact of machine variability and heat treatment on six material properties: tensile stress, yield stress, modulus of elasticity, fracture elongation, and reduction of area. This work represents the type of development effort that is critical as NASA, academia, and the industrial base work collaboratively to establish a path to certification for additively manufactured parts. For future flight programs, NASA and its commercial partners will procure parts from vendors who will use a diverse range of machines to produce parts and, as such, it is essential that the AM community develop a sound understanding of the degree to which machine variability impacts material properties.
NASA Technical Reports Server (NTRS)
Lundquist, Eugene E; Schwartz, Edward B
1942-01-01
The results of a theoretical and experimental investigation to determine the critical compression load for a universal testing machine are presented for specimens loaded through knife edges. The critical load for the testing machine is the load at which one of the loading heads becomes laterally instable in relation to the other. For very short specimens the critical load was found to be less than the rated capacity given by the manufacturer for the machine. A load-length diagram is proposed for defining the safe limits of the test region for the machine. Although this report is particularly concerned with a universal testing machine of a certain type, the basic theory which led to the derivation of the general equation for the critical load, P (sub cr) = alpha L can be applied to any testing machine operated in compression where the specimen is loaded through knife edges. In this equation, L is the length of the specimen between knife edges and alpha is the force necessary to displace the upper end of the specimen unit horizontal distance relative to the lower end of the specimen in a direction normal to the knife edges through which the specimen is loaded.
Dynamic remedial action scheme using online transient stability analysis
NASA Astrophysics Data System (ADS)
Shrestha, Arun
Economic pressure and environmental factors have forced the modern power systems to operate closer to their stability limits. However, maintaining transient stability is a fundamental requirement for the operation of interconnected power systems. In North America, power systems are planned and operated to withstand the loss of any single or multiple elements without violating North American Electric Reliability Corporation (NERC) system performance criteria. For a contingency resulting in the loss of multiple elements (Category C), emergency transient stability controls may be necessary to stabilize the power system. Emergency control is designed to sense abnormal conditions and subsequently take pre-determined remedial actions to prevent instability. Commonly known as either Remedial Action Schemes (RAS) or as Special/System Protection Schemes (SPS), these emergency control approaches have been extensively adopted by utilities. RAS are designed to address specific problems, e.g. to increase power transfer, to provide reactive support, to address generator instability, to limit thermal overloads, etc. Possible remedial actions include generator tripping, load shedding, capacitor and reactor switching, static VAR control, etc. Among various RAS types, generation shedding is the most effective and widely used emergency control means for maintaining system stability. In this dissertation, an optimal power flow (OPF)-based generation-shedding RAS is proposed. This scheme uses online transient stability calculation and generator cost function to determine appropriate remedial actions. For transient stability calculation, SIngle Machine Equivalent (SIME) technique is used, which reduces the multimachine power system model to a One-Machine Infinite Bus (OMIB) equivalent and identifies critical machines. Unlike conventional RAS, which are designed using offline simulations, online stability calculations make the proposed RAS dynamic and adapting to any power system configuration and operating state. The generation-shedding cost is calculated using pre-RAS and post-RAS OPF costs. The criteria for selecting generators to trip is based on the minimum cost rather than minimum amount of generation to shed. For an unstable Category C contingency, the RAS control action that results in stable system with minimum generation shedding cost is selected among possible candidate solutions. The RAS control actions update whenever there is a change in operating condition, system configuration, or cost functions. The effectiveness of the proposed technique is demonstrated by simulations on the IEEE 9-bus system, the IEEE 39-bus system, and IEEE 145-bus system. This dissertation also proposes an improved, yet relatively simple, technique for solving Transient Stability-Constrained Optimal Power Flow (TSC-OPF) problem. Using the SIME method, the sets of dynamic and transient stability constraints are reduced to a single stability constraint, decreasing the overall size of the optimization problem. The transient stability constraint is formulated using the critical machines' power at the initial time step, rather than using the machine rotor angles. This avoids the addition of machine steady state stator algebraic equations in the conventional OPF algorithm. A systematic approach to reach an optimal solution is developed by exploring the quasi-linear behavior of critical machine power and stability margin. The proposed method shifts critical machines active power based on generator costs using an OPF algorithm. Moreover, the transient stability limit is based on stability margin, and not on a heuristically set limit on OMIB rotor angle. As a result, the proposed TSC-OPF solution is more economical and transparent. The proposed technique enables the use of fast and robust commercial OPF tool and time-domain simulation software for solving large scale TSC-OPF problem, which makes the proposed method also suitable for real-time application.
Automatic detection of tweets reporting cases of influenza like illnesses in Australia
2015-01-01
Early detection of disease outbreaks is critical for disease spread control and management. In this work we investigate the suitability of statistical machine learning approaches to automatically detect Twitter messages (tweets) that are likely to report cases of possible influenza like illnesses (ILI). Empirical results obtained on a large set of tweets originating from the state of Victoria, Australia, in a 3.5 month period show evidence that machine learning classifiers are effective in identifying tweets that mention possible cases of ILI (up to 0.736 F-measure, i.e. the harmonic mean of precision and recall), regardless of the specific technique implemented by the classifier investigated in the study. PMID:25870759
Evaluation of the eZono 4000 with eZGuide for ultrasound-guided procedures.
Gadsden, Jeff; Latmore, Malikah; Levine, Daniel M
2015-05-01
Ultrasound-guided procedures are increasingly common in a variety of acute care settings, such as the operating room, critical care unit and emergency room. However, accurate judgment of needle tip position using traditional ultrasound technology is frequently difficult, and serious injury can result from inadvertently advancing beyond or through the target. Needle navigation is a recent innovation that allows the clinician to visualize the needle position and trajectory in real time as it approaches the target. A novel ultrasound machine has recently been introduced that is portable and designed for procedural guidance. The eZono 4000™ features an innovative needle navigation technology that is simple to use and permits the use of a wide range of commercially available needles, avoiding the inconvenience and cost of proprietary equipment. This article discusses this new ultrasound machine in the context of other currently available ultrasound machines featuring needle navigation.
NASA Technical Reports Server (NTRS)
Friedrich, Craig R.; Warrington, Robert O.
1995-01-01
Micromechanical machining processes are those micro fabrication techniques which directly remove work piece material by either a physical cutting tool or an energy process. These processes are direct and therefore they can help reduce the cost and time for prototype development of micro mechanical components and systems. This is especially true for aerospace applications where size and weight are critical, and reliability and the operating environment are an integral part of the design and development process. The micromechanical machining processes are rapidly being recognized as a complementary set of tools to traditional lithographic processes (such as LIGA) for the fabrication of micromechanical components. Worldwide efforts in the U.S., Germany, and Japan are leading to results which sometimes rival lithography at a fraction of the time and cost. Efforts to develop processes and systems specific to aerospace applications are well underway.
NASA Astrophysics Data System (ADS)
Best, Andrew; Kapalo, Katelynn A.; Warta, Samantha F.; Fiore, Stephen M.
2016-05-01
Human-robot teaming largely relies on the ability of machines to respond and relate to human social signals. Prior work in Social Signal Processing has drawn a distinction between social cues (discrete, observable features) and social signals (underlying meaning). For machines to attribute meaning to behavior, they must first understand some probabilistic relationship between the cues presented and the signal conveyed. Using data derived from a study in which participants identified a set of salient social signals in a simulated scenario and indicated the cues related to the perceived signals, we detail a learning algorithm, which clusters social cue observations and defines an "N-Most Likely States" set for each cluster. Since multiple signals may be co-present in a given simulation and a set of social cues often maps to multiple social signals, the "N-Most Likely States" approach provides a dramatic improvement over typical linear classifiers. We find that the target social signal appears in a "3 most-likely signals" set with up to 85% probability. This results in increased speed and accuracy on large amounts of data, which is critical for modeling social cognition mechanisms in robots to facilitate more natural human-robot interaction. These results also demonstrate the utility of such an approach in deployed scenarios where robots need to communicate with human teammates quickly and efficiently. In this paper, we detail our algorithm, comparative results, and offer potential applications for robot social signal detection and machine-aided human social signal detection.
Kim, Yong-Ku; Na, Kyoung-Sae
2018-01-03
Mood disorders are a highly prevalent group of mental disorders causing substantial socioeconomic burden. There are various methodological approaches for identifying the underlying mechanisms of the etiology, symptomatology, and therapeutics of mood disorders; however, neuroimaging studies have provided the most direct evidence for mood disorder neural substrates by visualizing the brains of living individuals. The prefrontal cortex, hippocampus, amygdala, thalamus, ventral striatum, and corpus callosum are associated with depression and bipolar disorder. Identifying the distinct and common contributions of these anatomical regions to depression and bipolar disorder have broadened and deepened our understanding of mood disorders. However, the extent to which neuroimaging research findings contribute to clinical practice in the real-world setting is unclear. As traditional or non-machine learning MRI studies have analyzed group-level differences, it is not possible to directly translate findings from research to clinical practice; the knowledge gained pertains to the disorder, but not to individuals. On the other hand, a machine learning approach makes it possible to provide individual-level classifications. For the past two decades, many studies have reported on the classification accuracy of machine learning-based neuroimaging studies from the perspective of diagnosis and treatment response. However, for the application of a machine learning-based brain MRI approach in real world clinical settings, several major issues should be considered. Secondary changes due to illness duration and medication, clinical subtypes and heterogeneity, comorbidities, and cost-effectiveness restrict the generalization of the current machine learning findings. Sophisticated classification of clinical and diagnostic subtypes is needed. Additionally, as the approach is inevitably limited by sample size, multi-site participation and data-sharing are needed in the future. Copyright © 2017 Elsevier Inc. All rights reserved.
Finite element analysis of drilling in carbon fiber reinforced polymer composites
NASA Astrophysics Data System (ADS)
Phadnis, V. A.; Roy, A.; Silberschmidt, V. V.
2012-08-01
Carbon fiber reinforced polymer composite (CFRP) laminates are attractive for many applications in the aerospace industry especially as aircraft structural components due to their superior properties. Usually drilling is an important final machining process for components made of composite laminates. In drilling of CFRP, it is an imperative task to determine the maximum critical thrust forces that trigger inter-laminar and intra-laminar damage modes owing to highly anisotropic fibrous media; and negotiate integrity of composite structures. In this paper, a 3D finite element (FE) model of drilling in CFRP composite laminate is developed, which accurately takes into account the dynamic characteristics involved in the process along with the accurate geometrical considerations. A user defined material model is developed to account for accurate though thickness response of composite laminates. The average critical thrust forces and torques obtained using FE analysis, for a set of machining parameters are found to be in good agreement with the experimental results from literature.
Brain-Machine Interface control of a robot arm using actor-critic rainforcement learning.
Pohlmeyer, Eric A; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline; Sanchez, Justin C
2012-01-01
Here we demonstrate how a marmoset monkey can use a reinforcement learning (RL) Brain-Machine Interface (BMI) to effectively control the movements of a robot arm for a reaching task. In this work, an actor-critic RL algorithm used neural ensemble activity in the monkey's motor cortext to control the robot movements during a two-target decision task. This novel approach to decoding offers unique advantages for BMI control applications. Compared to supervised learning decoding methods, the actor-critic RL algorithm does not require an explicit set of training data to create a static control model, but rather it incrementally adapts the model parameters according to its current performance, in this case requiring only a very basic feedback signal. We show how this algorithm achieved high performance when mapping the monkey's neural states (94%) to robot actions, and only needed to experience a few trials before obtaining accurate real-time control of the robot arm. Since RL methods responsively adapt and adjust their parameters, they can provide a method to create BMIs that are robust against perturbations caused by changes in either the neural input space or the output actions they generate under different task requirements or goals.
A journey from nuclear criticality methods to high energy density radflow experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urbatsch, Todd James
Los Alamos National Laboratory is a nuclear weapons laboratory supporting our nation's defense. In support of this mission is a high energy-density physics program in which we design and execute experiments to study radiationhydrodynamics phenomena and improve the predictive capability of our largescale multi-physics software codes on our big-iron computers. The Radflow project’s main experimental effort now is to understand why we haven't been able to predict opacities on Sandia National Laboratory's Z-machine. We are modeling an increasing fraction of the Z-machine's dynamic hohlraum to find multi-physics explanations for the experimental results. Further, we are building an entirely different opacitymore » platform on Lawrence Livermore National Laboratory's National Ignition Facility (NIF), which is set to get results early 2017. Will the results match our predictions, match the Z-machine, or give us something entirely different? The new platform brings new challenges such as designing hohlraums and spectrometers. The speaker will recount his history, starting with one-dimensional Monte Carlo nuclear criticality methods in graduate school, radiative transfer methods research and software development for his first 16 years at LANL, and, now, radflow technology and experiments. Who knew that the real world was more than just radiation transport? Experiments aren't easy, but they sure are fun.« less
NASA Astrophysics Data System (ADS)
Majumder, Himadri; Maity, Kalipada
2018-03-01
Shape memory alloy has a unique capability to return to its original shape after physical deformation by applying heat or thermo-mechanical or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision making was utilized to find out the optimum input parameter setting during wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse on time (TON), pulse off time (TOFF), wire feed (WF) and wire tension (WT) were taken as machining inputs for the experiments to optimize three interconnected responses like cutting speed, kerf width, and surface roughness. Input parameter combination TON = 120 μs., TOFF = 55 μs., WF = 3 m/min. and WT = 8 kg-F were found to produce the optimum results. The optimum process parameters for each desired response were also attained using Taguchi’s signal-to-noise ratio. Confirmation test has been done to validate the optimum machining parameter combination which affirmed DFA was a competent approach to select optimum input parameters for the ideal response quality for WEDM of Ni-Ti shape memory alloy.
Effect of Bearings on Vibration in Rotating Machinery
NASA Astrophysics Data System (ADS)
Daniel, Rudrapati Victor; Amit Siddhappa, Savale; Bhushan Gajanan, Savale; Vipin Philip, S.; Paul, P. Sam
2017-08-01
In rotary machines vibration is an inherent phenomenon which has the tendency to affect required performance. Amongst the different parameters that affect vibration, selection of appropriate bearing is the most critical component. In this work the effect of different types of bearing on vibration in rotary machines is studied and the magnitude of vibration produced by use of different set of bearings under the same condition of loads and rotational speeds were investigated. Bearings considered in this work were ball bearing, tapered roller bearing, thrust bearing and shaft material considered is of mild steel. From experimental result, it was noted that tapered roller bearing gives the highest amplitude of vibration among all the three bearings whereas the ball bearing gives least amplitude under similar operating conditions.
Reddy, Bhargava K; Delen, Dursun; Agrawal, Rupesh K
2018-01-01
Crohn's disease is among the chronic inflammatory bowel diseases that impact the gastrointestinal tract. Understanding and predicting the severity of inflammation in real-time settings is critical to disease management. Extant literature has primarily focused on studies that are conducted in clinical trial settings to investigate the impact of a drug treatment on the remission status of the disease. This research proposes an analytics methodology where three different types of prediction models are developed to predict and to explain the severity of inflammation in patients diagnosed with Crohn's disease. The results show that machine-learning-based analytic methods such as gradient boosting machines can predict the inflammation severity with a very high accuracy (area under the curve = 92.82%), followed by regularized regression and logistic regression. According to the findings, a combination of baseline laboratory parameters, patient demographic characteristics, and disease location are among the strongest predictors of inflammation severity in Crohn's disease patients.
Imbalanced learning for pattern recognition: an empirical study
NASA Astrophysics Data System (ADS)
He, Haibo; Chen, Sheng; Man, Hong; Desai, Sachi; Quoraishee, Shafik
2010-10-01
The imbalanced learning problem (learning from imbalanced data) presents a significant new challenge to the pattern recognition and machine learning society because in most instances real-world data is imbalanced. When considering military applications, the imbalanced learning problem becomes much more critical because such skewed distributions normally carry the most interesting and critical information. This critical information is necessary to support the decision-making process in battlefield scenarios, such as anomaly or intrusion detection. The fundamental issue with imbalanced learning is the ability of imbalanced data to compromise the performance of standard learning algorithms, which assume balanced class distributions or equal misclassification penalty costs. Therefore, when presented with complex imbalanced data sets these algorithms may not be able to properly represent the distributive characteristics of the data. In this paper we present an empirical study of several popular imbalanced learning algorithms on an army relevant data set. Specifically we will conduct various experiments with SMOTE (Synthetic Minority Over-Sampling Technique), ADASYN (Adaptive Synthetic Sampling), SMOTEBoost (Synthetic Minority Over-Sampling in Boosting), and AdaCost (Misclassification Cost-Sensitive Boosting method) schemes. Detailed experimental settings and simulation results are presented in this work, and a brief discussion of future research opportunities/challenges is also presented.
NASA Astrophysics Data System (ADS)
Ravi, A. M.; Murigendrappa, S. M.
2018-04-01
In recent times, thermally enhanced machining (TEM) slowly gearing up to cut hard metals like high chrome white cast iron (HCWCI) which were impossible in conventional procedures. Also setting up of suitable cutting parameters and positioning of the heat source against the work appears to be critical in order to enhance the machinability characteristics of the work material. In this research work, the Oxy - LPG flame was used as the heat source and HCWCI as the workpiece. ANSYS-CFD-Flow software was used to develop the transient thermal model to analyze the thermal flux distribution on the work surface during TEM of HCWCI using Cubic boron nitride (CBN) tools. Non-contact type Infrared thermo sensor was used to measure the surface temperature continuously at different positions, and is validated with the thermal model results. The result confirms thermal model is a better predictive tool for thermal flux distribution analysis in TEM process.
Investigation of roughing machining simulation by using visual basic programming in NX CAM system
NASA Astrophysics Data System (ADS)
Hafiz Mohamad, Mohamad; Nafis Osman Zahid, Muhammed
2018-03-01
This paper outlines a simulation study to investigate the characteristic of roughing machining simulation in 4th axis milling processes by utilizing visual basic programming in NX CAM systems. The selection and optimization of cutting orientation in rough milling operation is critical in 4th axis machining. The main purpose of roughing operation is to approximately shape the machined parts into finished form by removing the bulk of material from workpieces. In this paper, the simulations are executed by manipulating a set of different cutting orientation to generate estimated volume removed from the machine parts. The cutting orientation with high volume removal is denoted as an optimum value and chosen to execute a roughing operation. In order to run the simulation, customized software is developed to assist the routines. Operations build-up instructions in NX CAM interface are translated into programming codes via advanced tool available in the Visual Basic Studio. The codes is customized and equipped with decision making tools to run and control the simulations. It permits the integration with any independent program files to execute specific operations. This paper aims to discuss about the simulation program and identifies optimum cutting orientations for roughing processes. The output of this study will broaden up the simulation routines performed in NX CAM systems.
Forsyth, Alexander W; Barzilay, Regina; Hughes, Kevin S; Lui, Dickson; Lorenz, Karl A; Enzinger, Andrea; Tulsky, James A; Lindvall, Charlotta
2018-06-01
Clinicians document cancer patients' symptoms in free-text format within electronic health record visit notes. Although symptoms are critically important to quality of life and often herald clinical status changes, computational methods to assess the trajectory of symptoms over time are woefully underdeveloped. To create machine learning algorithms capable of extracting patient-reported symptoms from free-text electronic health record notes. The data set included 103,564 sentences obtained from the electronic clinical notes of 2695 breast cancer patients receiving paclitaxel-containing chemotherapy at two academic cancer centers between May 1996 and May 2015. We manually annotated 10,000 sentences and trained a conditional random field model to predict words indicating an active symptom (positive label), absence of a symptom (negative label), or no symptom at all (neutral label). Sentences labeled by human coder were divided into training, validation, and test data sets. Final model performance was determined on 20% test data unused in model development or tuning. The final model achieved precision of 0.82, 0.86, and 0.99 and recall of 0.56, 0.69, and 1.00 for positive, negative, and neutral symptom labels, respectively. The most common positive symptoms were pain, fatigue, and nausea. Machine-based labeling of 103,564 sentences took two minutes. We demonstrate the potential of machine learning to gather, track, and analyze symptoms experienced by cancer patients during chemotherapy. Although our initial model requires further optimization to improve the performance, further model building may yield machine learning methods suitable to be deployed in routine clinical care, quality improvement, and research applications. Copyright © 2018 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
Pre-resistance-welding resistance check
Destefan, Dennis E.; Stompro, David A.
1991-01-01
A preweld resistance check for resistance welding machines uses an open circuited measurement to determine the welding machine resistance, a closed circuit measurement to determine the parallel resistance of a workpiece set and the machine, and a calculation to determine the resistance of the workpiece set. Any variation in workpiece set or machine resistance is an indication that the weld may be different from a control weld.
Cache Sharing and Isolation Tradeoffs in Multicore Mixed-Criticality Systems
2015-05-01
of lockdown registers, to provide way-based partitioning. These alternatives are illustrated in Fig. 1 with respect to a quad-core ARM Cortex A9...presented a cache-partitioning scheme that allows multiple tasks to share the same cache partition on a single processor (as we do for Level-A and...sets and determined the fraction that were schedulable on our target hardware platform, the quad-core ARM Cortex A9 machine mentioned earlier, the LLC
JPRS Report, East Asia, Southeast Asia
1988-05-03
Bayan Chief on Impact , Criticism of Efforts ’.’.’.’."’." 39 Election Official Says Postponement Geared To Defeat Rebel Candidates """ŕ"! 43...duplicating machines, 22 typewriters, three televi- sion sets, one video deck, four radio cassettes, two cameras, 23 elephants , 24 horses/mules, 1,451...daw (Navy) vessels are on constant patrol and 132 fish- poaching vessels, 111 black-market vessels and 1,409 fish poachers were seized between 21
Data Programming: Creating Large Training Sets, Quickly.
Ratner, Alexander; De Sa, Christopher; Wu, Sen; Selsam, Daniel; Ré, Christopher
2016-12-01
Large labeled training sets are the critical building blocks of supervised learning methods and are key enablers of deep learning techniques. For some applications, creating labeled training sets is the most time-consuming and expensive part of applying machine learning. We therefore propose a paradigm for the programmatic creation of training sets called data programming in which users express weak supervision strategies or domain heuristics as labeling functions , which are programs that label subsets of the data, but that are noisy and may conflict. We show that by explicitly representing this training set labeling process as a generative model, we can "denoise" the generated training set, and establish theoretically that we can recover the parameters of these generative models in a handful of settings. We then show how to modify a discriminative loss function to make it noise-aware, and demonstrate our method over a range of discriminative models including logistic regression and LSTMs. Experimentally, on the 2014 TAC-KBP Slot Filling challenge, we show that data programming would have led to a new winning score, and also show that applying data programming to an LSTM model leads to a TAC-KBP score almost 6 F1 points over a state-of-the-art LSTM baseline (and into second place in the competition). Additionally, in initial user studies we observed that data programming may be an easier way for non-experts to create machine learning models when training data is limited or unavailable.
Data Programming: Creating Large Training Sets, Quickly
Ratner, Alexander; De Sa, Christopher; Wu, Sen; Selsam, Daniel; Ré, Christopher
2018-01-01
Large labeled training sets are the critical building blocks of supervised learning methods and are key enablers of deep learning techniques. For some applications, creating labeled training sets is the most time-consuming and expensive part of applying machine learning. We therefore propose a paradigm for the programmatic creation of training sets called data programming in which users express weak supervision strategies or domain heuristics as labeling functions, which are programs that label subsets of the data, but that are noisy and may conflict. We show that by explicitly representing this training set labeling process as a generative model, we can “denoise” the generated training set, and establish theoretically that we can recover the parameters of these generative models in a handful of settings. We then show how to modify a discriminative loss function to make it noise-aware, and demonstrate our method over a range of discriminative models including logistic regression and LSTMs. Experimentally, on the 2014 TAC-KBP Slot Filling challenge, we show that data programming would have led to a new winning score, and also show that applying data programming to an LSTM model leads to a TAC-KBP score almost 6 F1 points over a state-of-the-art LSTM baseline (and into second place in the competition). Additionally, in initial user studies we observed that data programming may be an easier way for non-experts to create machine learning models when training data is limited or unavailable. PMID:29872252
NASA Astrophysics Data System (ADS)
Sembiring, N.; Nasution, A. H.
2018-02-01
Corrective maintenance i.e replacing or repairing the machine component after machine break down always done in a manufacturing company. It causes the production process must be stopped. Production time will decrease due to the maintenance team must replace or repair the damage machine component. This paper proposes a preventive maintenance’s schedule for a critical component of a critical machine of an crude palm oil and kernel company due to increase maintenance efficiency. The Reliability Engineering & Maintenance Value Stream Mapping is used as a method and a tool to analize the reliability of the component and reduce the wastage in any process by segregating value added and non value added activities.
Critical Speed of The Glass Glue Machine's Creep and Influence Factors Analysis
NASA Astrophysics Data System (ADS)
Yang, Jianxi; Huang, Jian; Wang, Liying; Shi, Jintai
When automatic glass glue machine works, two questions of the machine starting vibrating and stick-slip motion are existing. These problems should be solved. According to these questions, a glue machine's model for studying stick-slip is established. Based on the dynamics system describing of the model, mathematical expression is presented. The creep critical speed expression is constructed referring to existing research achievement and a new conclusion is found. The influencing factors of stiffness, dampness, mass, velocity, difference of static and kinetic coefficient of friction are analyzed through Matlab simulation. Research shows that reasonable choice of influence parameters can improve the creep phenomenon. These all supply the theory evidence for improving the machine's motion stability.
Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare.
Mozaffari-Kermani, Mehran; Sur-Kolay, Susmita; Raghunathan, Anand; Jha, Niraj K
2015-11-01
Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, and thus, any security breach would be catastrophic. Naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine-learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. Hindrance of a diagnosis may have life-threatening consequences and could cause distrust. On the other hand, not only may a false diagnosis prompt users to distrust the machine-learning algorithm and even abandon the entire system but also such a false positive classification may cause patient distress. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine-learning algorithms and healthcare datasets. The proposed attack procedure generates input data, which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., increase the likelihood of classification into a specific class), or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets. They can be applied even when only statistics of the training dataset are available or, in some cases, even without access to the training dataset, although at a lower efficacy. We establish the effectiveness of the proposed attacks using a suite of six machine-learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks that are based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness.
Torp-Pedersen, Søren; Christensen, Robin; Szkudlarek, Marcin; Ellegaard, Karen; D'Agostino, Maria Antonietta; Iagnocco, Annamaria; Naredo, Esperanza; Balint, Peter; Wakefield, Richard J; Torp-Pedersen, Arendse; Terslev, Lene
2015-02-01
To determine how settings for power and color Doppler ultrasound sensitivity vary on different high- and intermediate-range ultrasound machines and to evaluate the impact of these changes on Doppler scoring of inflamed joints. Six different types of ultrasound machines were used. On each machine, the factory setting for superficial musculoskeletal scanning was used unchanged for both color and power Doppler modalities. The settings were then adjusted for increased Doppler sensitivity, and these settings were designated study settings. Eleven patients with rheumatoid arthritis (RA) with wrist involvement were scanned on the 6 machines, each with 4 settings, generating 264 Doppler images for scoring and color quantification. Doppler sensitivity was measured with a quantitative assessment of Doppler activity: color fraction. Higher color fraction indicated higher sensitivity. Power Doppler was more sensitive on half of the machines, whereas color Doppler was more sensitive on the other half, using both factory settings and study settings. There was an average increase in Doppler sensitivity, despite modality, of 78% when study settings were applied. Over the 6 machines, 2 Doppler modalities, and 2 settings, the grades for each of 7 of the patients varied between 0 and 3, while the grades for each of the other 4 patients varied between 0 and 2. The effect of using different machines, Doppler modalities, and settings has a considerable influence on the quantification of inflammation by ultrasound in RA patients, and this must be taken into account in multicenter studies. Copyright © 2015 by the American College of Rheumatology.
A journey from nuclear criticality methods to high energy density radflow experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urbatsch, Todd James
Los Alamos National Laboratory is a nuclear weapons laboratory supporting our nation's defense. In support of this mission is a high energy-density physics program in which we design and execute experiments to study radiationhydrodynamics phenomena and improve the predictive capability of our largescale multi-physics software codes on our big-iron computers. The Radflow project’s main experimental effort now is to understand why we haven't been able to predict opacities on Sandia National Laboratory's Z-machine. We are modeling an increasing fraction of the Z-machine's dynamic hohlraum to find multi-physics explanations for the experimental results. Further, we are building an entirely different opacitymore » platform on Lawrence Livermore National Laboratory's National Ignition Facility (NIF), which is set to get results early 2017. Will the results match our predictions, match the Z-machine, or give us something entirely different? The new platform brings new challenges such as designing hohlraums and spectrometers. The speaker will recount his history, starting with one-dimensional Monte Carlo nuclear criticality methods in graduate school, radiative transfer methods research and software development for his first 16 years at LANL, and, now, radflow technology and experiments. Who knew that the real world was more than just radiation transport? Experiments aren't easy and they are as saturated with politics as a presidential election, but they sure are fun.« less
Fall classification by machine learning using mobile phones.
Albert, Mark V; Kording, Konrad; Herrmann, Megan; Jayaraman, Arun
2012-01-01
Fall prevention is a critical component of health care; falls are a common source of injury in the elderly and are associated with significant levels of mortality and morbidity. Automatically detecting falls can allow rapid response to potential emergencies; in addition, knowing the cause or manner of a fall can be beneficial for prevention studies or a more tailored emergency response. The purpose of this study is to demonstrate techniques to not only reliably detect a fall but also to automatically classify the type. We asked 15 subjects to simulate four different types of falls-left and right lateral, forward trips, and backward slips-while wearing mobile phones and previously validated, dedicated accelerometers. Nine subjects also wore the devices for ten days, to provide data for comparison with the simulated falls. We applied five machine learning classifiers to a large time-series feature set to detect falls. Support vector machines and regularized logistic regression were able to identify a fall with 98% accuracy and classify the type of fall with 99% accuracy. This work demonstrates how current machine learning approaches can simplify data collection for prevention in fall-related research as well as improve rapid response to potential injuries due to falls.
Heuristic for Critical Machine Based a Lot Streaming for Two-Stage Hybrid Production Environment
NASA Astrophysics Data System (ADS)
Vivek, P.; Saravanan, R.; Chandrasekaran, M.; Pugazhenthi, R.
2017-03-01
Lot streaming in Hybrid flowshop [HFS] is encountered in many real world problems. This paper deals with a heuristic approach for Lot streaming based on critical machine consideration for a two stage Hybrid Flowshop. The first stage has two identical parallel machines and the second stage has only one machine. In the second stage machine is considered as a critical by valid reasons these kind of problems is known as NP hard. A mathematical model developed for the selected problem. The simulation modelling and analysis were carried out in Extend V6 software. The heuristic developed for obtaining optimal lot streaming schedule. The eleven cases of lot streaming were considered. The proposed heuristic was verified and validated by real time simulation experiments. All possible lot streaming strategies and possible sequence under each lot streaming strategy were simulated and examined. The heuristic consistently yielded optimal schedule consistently in all eleven cases. The identification procedure for select best lot streaming strategy was suggested.
Method and system for fault accommodation of machines
NASA Technical Reports Server (NTRS)
Goebel, Kai Frank (Inventor); Subbu, Rajesh Venkat (Inventor); Rausch, Randal Thomas (Inventor); Frederick, Dean Kimball (Inventor)
2011-01-01
A method for multi-objective fault accommodation using predictive modeling is disclosed. The method includes using a simulated machine that simulates a faulted actual machine, and using a simulated controller that simulates an actual controller. A multi-objective optimization process is performed, based on specified control settings for the simulated controller and specified operational scenarios for the simulated machine controlled by the simulated controller, to generate a Pareto frontier-based solution space relating performance of the simulated machine to settings of the simulated controller, including adjustment to the operational scenarios to represent a fault condition of the simulated machine. Control settings of the actual controller are adjusted, represented by the simulated controller, for controlling the actual machine, represented by the simulated machine, in response to a fault condition of the actual machine, based on the Pareto frontier-based solution space, to maximize desirable operational conditions and minimize undesirable operational conditions while operating the actual machine in a region of the solution space defined by the Pareto frontier.
ERIC Educational Resources Information Center
Anoka-Hennepin Technical Coll., Minneapolis, MN.
This set of two training outlines and one basic skills set list are designed for a machine tool technology program developed during a project to retrain defense industry workers at risk of job loss or dislocation because of conversion of the defense industry. The first troubleshooting training outline lists the categories of problems that develop…
Effect of Width of Kerf on Machining Accuracy and Subsurface Layer After WEDM
NASA Astrophysics Data System (ADS)
Mouralova, K.; Kovar, J.; Klakurkova, L.; Prokes, T.
2018-02-01
Wire electrical discharge machining is an unconventional machining technology that applies physical principles to material removal. The material is removed by a series of recurring current discharges between the workpiece and the tool electrode, and a `kerf' is created between the wire and the material being machined. The width of the kerf is directly dependent not only on the diameter of the wire used, but also on the machine parameter settings and, in particular, on the set of mechanical and physical properties of the material being machined. To ensure precise machining, it is important to have the width of the kerf as small as possible. The present study deals with the evaluation of the width of the kerf for four different metallic materials (some of which were subsequently heat treated using several methods) with different machine parameter settings. The kerf is investigated on metallographic cross sections using light and electron microscopy.
NASA Astrophysics Data System (ADS)
Samadhi, TMAA; Sumihartati, Atin
2016-02-01
The most critical stage in a garment industry is sewing process, because generally, it consists of a number of operations and a large number of sewing machines for each operation. Therefore, it requires a balancing method that can assign task to work station with balance workloads. Many studies on assembly line balancing assume a new assembly line, but in reality, due to demand fluctuation and demand increased a re-balancing is needed. To cope with those fluctuating demand changes, additional capacity can be carried out by investing in spare sewing machine and paying for sewing service through outsourcing. This study develops an assembly line balancing (ALB) model on existing line to cope with fluctuating demand change. Capacity redesign is decided if the fluctuation demand exceeds the available capacity through a combination of making investment on new machines and outsourcing while considering for minimizing the cost of idle capacity in the future. The objective of the model is to minimize the total cost of the line assembly that consists of operating costs, machine cost, adding capacity cost, losses cost due to idle capacity and outsourcing costs. The model develop is based on an integer programming model. The model is tested for a set of data of one year demand with the existing number of sewing machines of 41 units. The result shows that additional maximum capacity up to 76 units of machine required when there is an increase of 60% of the average demand, at the equal cost parameters..
NASA Astrophysics Data System (ADS)
Wu, Huaying; Wang, Li Zhong; Wang, Yantao; Yuan, Xiaolei
2018-05-01
The blade or surface grinding blade of the hypervelocity grinding wheel may be damaged due to too high rotation rate of the spindle of the machine and then fly out. Its speed as a projectile may severely endanger the field persons. Critical thickness model of the protective plate of the high-speed machine is studied in this paper. For easy analysis, the shapes of the possible impact objects flying from the high-speed machine are simplified as sharp-nose model, ball-nose model and flat-nose model. Whose front ending shape to represent point, line and surface contacting. Impact analysis based on J-C model is performed for the low-carbon steel plate with different thicknesses in this paper. One critical thickness computational model for the protective plate of high-speed machine is established according to the damage characteristics of the thin plate to get relation among plate thickness and mass, shape and size and impact speed of impact object. The air cannon is used for impact test. The model accuracy is validated. This model can guide identification of the thickness of single-layer outer protective plate of a high-speed machine.
Zhang, Bing; Schmoyer, Denise; Kirov, Stefan; Snoddy, Jay
2004-01-01
Background Microarray and other high-throughput technologies are producing large sets of interesting genes that are difficult to analyze directly. Bioinformatics tools are needed to interpret the functional information in the gene sets. Results We have created a web-based tool for data analysis and data visualization for sets of genes called GOTree Machine (GOTM). This tool was originally intended to analyze sets of co-regulated genes identified from microarray analysis but is adaptable for use with other gene sets from other high-throughput analyses. GOTree Machine generates a GOTree, a tree-like structure to navigate the Gene Ontology Directed Acyclic Graph for input gene sets. This system provides user friendly data navigation and visualization. Statistical analysis helps users to identify the most important Gene Ontology categories for the input gene sets and suggests biological areas that warrant further study. GOTree Machine is available online at . Conclusion GOTree Machine has a broad application in functional genomic, proteomic and other high-throughput methods that generate large sets of interesting genes; its primary purpose is to help users sort for interesting patterns in gene sets. PMID:14975175
Advances in Machine Technology.
Clark, William R; Villa, Gianluca; Neri, Mauro; Ronco, Claudio
2018-01-01
Continuous renal replacement therapy (CRRT) machines have evolved into devices specifically designed for critically ill over the past 40 years. In this chapter, a brief history of this evolution is first provided, with emphasis on the manner in which changes have been made to address the specific needs of the critically ill patient with acute kidney injury. Subsequently, specific examples of technology developments for CRRT machines are discussed, including the user interface, pumps, pressure monitoring, safety features, and anticoagulation capabilities. © 2018 S. Karger AG, Basel.
Diamond turning of Si and Ge single crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blake, P.; Scattergood, R.O.
Single-point diamond turning studies have been completed on Si and Ge crystals. A new process model was developed for diamond turning which is based on a critical depth of cut for plastic flow-to-brittle fracture transitions. This concept, when combined with the actual machining geometry for single-point turning, predicts that {open_quotes}ductile{close_quotes} machining is a combined action of plasticity and fracture. Interrupted cutting experiments also provide a meant to directly measure the critical depth parameter for given machining conditions.
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir
2017-01-01
As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KLEM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radius basis neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir
2017-04-19
As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KLEM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radius basis neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems.
Speech emotion recognition methods: A literature review
NASA Astrophysics Data System (ADS)
Basharirad, Babak; Moradhaseli, Mohammadreza
2017-10-01
Recently, attention of the emotional speech signals research has been boosted in human machine interfaces due to availability of high computation capability. There are many systems proposed in the literature to identify the emotional state through speech. Selection of suitable feature sets, design of a proper classifications methods and prepare an appropriate dataset are the main key issues of speech emotion recognition systems. This paper critically analyzed the current available approaches of speech emotion recognition methods based on the three evaluating parameters (feature set, classification of features, accurately usage). In addition, this paper also evaluates the performance and limitations of available methods. Furthermore, it highlights the current promising direction for improvement of speech emotion recognition systems.
Aqueous cutting fluid for machining fissionable materials
Duerksen, Walter K.; Googin, John M.; Napier, Jr., Bradley
1984-01-01
The present invention is directed to a cutting fluid for machining fissionable material. The cutting fluid is formed of glycol, water and boron compound in an adequate concentration for effective neutron attenuation so as to inhibit criticality incidents during machining.
Engineered Surface Properties of Porous Tungsten from Cryogenic Machining
NASA Astrophysics Data System (ADS)
Schoop, Julius Malte
Porous tungsten is used to manufacture dispenser cathodes due to it refractory properties. Surface porosity is critical to functional performance of dispenser cathodes because it allows for an impregnated ceramic compound to migrate to the emitting surface, lowering its work function. Likewise, surface roughness is important because it is necessary to ensure uniform wetting of the molten impregnate during high temperature service. Current industry practice to achieve surface roughness and surface porosity requirements involves the use of a plastic infiltrant during machining. After machining, the infiltrant is baked and the cathode pellet is impregnated. In this context, cryogenic machining is investigated as a substitutionary process for the current plastic infiltration process. Along with significant reductions in cycle time and resource use, surface quality of cryogenically machined un-infiltrated (as-sintered) porous tungsten has been shown to significantly outperform dry machining. The present study is focused on examining the relationship between machining parameters and cooling condition on the as-machined surface integrity of porous tungsten. The effects of cryogenic pre-cooling, rake angle, cutting speed, depth of cut and feed are all taken into consideration with respect to machining-induced surface morphology. Cermet and Polycrystalline diamond (PCD) cutting tools are used to develop high performance cryogenic machining of porous tungsten. Dry and pre-heated machining were investigated as a means to allow for ductile mode machining, yet severe tool-wear and undesirable smearing limited the feasibility of these approaches. By using modified PCD cutting tools, high speed machining of porous tungsten at cutting speeds up to 400 m/min is achieved for the first time. Beyond a critical speed, brittle fracture and built-up edge are eliminated as the result of a brittle to ductile transition. A model of critical chip thickness ( hc ) effects based on cutting force, temperature and surface roughness data is developed and used to study the deformation mechanisms of porous tungsten under different machining conditions. It is found that when hmax = hc, ductile mode machining of otherwise highly brittle porous tungsten is possible. The value of hc is approximately the same as the average ligament size of the 80% density porous tungsten workpiece.
Orrù, Graziella; Pettersson-Yeo, William; Marquand, Andre F; Sartori, Giuseppe; Mechelli, Andrea
2012-04-01
Standard univariate analysis of neuroimaging data has revealed a host of neuroanatomical and functional differences between healthy individuals and patients suffering a wide range of neurological and psychiatric disorders. Significant only at group level however these findings have had limited clinical translation, and recent attention has turned toward alternative forms of analysis, including Support-Vector-Machine (SVM). A type of machine learning, SVM allows categorisation of an individual's previously unseen data into a predefined group using a classification algorithm, developed on a training data set. In recent years, SVM has been successfully applied in the context of disease diagnosis, transition prediction and treatment prognosis, using both structural and functional neuroimaging data. Here we provide a brief overview of the method and review those studies that applied it to the investigation of Alzheimer's disease, schizophrenia, major depression, bipolar disorder, presymptomatic Huntington's disease, Parkinson's disease and autistic spectrum disorder. We conclude by discussing the main theoretical and practical challenges associated with the implementation of this method into the clinic and possible future directions. Copyright © 2012 Elsevier Ltd. All rights reserved.
2016-01-01
Background As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. Objective To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. Methods A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. Results The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. Conclusions A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. PMID:27986644
Machine learning applications in genetics and genomics.
Libbrecht, Maxwell W; Noble, William Stafford
2015-06-01
The field of machine learning, which aims to develop computer algorithms that improve with experience, holds promise to enable computers to assist humans in the analysis of large, complex data sets. Here, we provide an overview of machine learning applications for the analysis of genome sequencing data sets, including the annotation of sequence elements and epigenetic, proteomic or metabolomic data. We present considerations and recurrent challenges in the application of supervised, semi-supervised and unsupervised machine learning methods, as well as of generative and discriminative modelling approaches. We provide general guidelines to assist in the selection of these machine learning methods and their practical application for the analysis of genetic and genomic data sets.
Janet, Jon Paul; Kulik, Heather J
2017-11-22
Machine learning (ML) of quantum mechanical properties shows promise for accelerating chemical discovery. For transition metal chemistry where accurate calculations are computationally costly and available training data sets are small, the molecular representation becomes a critical ingredient in ML model predictive accuracy. We introduce a series of revised autocorrelation functions (RACs) that encode relationships of the heuristic atomic properties (e.g., size, connectivity, and electronegativity) on a molecular graph. We alter the starting point, scope, and nature of the quantities evaluated in standard ACs to make these RACs amenable to inorganic chemistry. On an organic molecule set, we first demonstrate superior standard AC performance to other presently available topological descriptors for ML model training, with mean unsigned errors (MUEs) for atomization energies on set-aside test molecules as low as 6 kcal/mol. For inorganic chemistry, our RACs yield 1 kcal/mol ML MUEs on set-aside test molecules in spin-state splitting in comparison to 15-20× higher errors for feature sets that encode whole-molecule structural information. Systematic feature selection methods including univariate filtering, recursive feature elimination, and direct optimization (e.g., random forest and LASSO) are compared. Random-forest- or LASSO-selected subsets 4-5× smaller than the full RAC set produce sub- to 1 kcal/mol spin-splitting MUEs, with good transferability to metal-ligand bond length prediction (0.004-5 Å MUE) and redox potential on a smaller data set (0.2-0.3 eV MUE). Evaluation of feature selection results across property sets reveals the relative importance of local, electronic descriptors (e.g., electronegativity, atomic number) in spin-splitting and distal, steric effects in redox potential and bond lengths.
Classification of Variable Objects in Massive Sky Monitoring Surveys
NASA Astrophysics Data System (ADS)
Woźniak, Przemek; Wyrzykowski, Łukasz; Belokurov, Vasily
2012-03-01
The era of great sky surveys is upon us. Over the past decade we have seen rapid progress toward a continuous photometric record of the optical sky. Numerous sky surveys are discovering and monitoring variable objects by hundreds of thousands. Advances in detector, computing, and networking technology are driving applications of all shapes and sizes ranging from small all sky monitors, through networks of robotic telescopes of modest size, to big glass facilities equipped with giga-pixel CCD mosaics. The Large Synoptic Survey Telescope will be the first peta-scale astronomical survey [18]. It will expand the volume of the parameter space available to us by three orders of magnitude and explore the mutable heavens down to an unprecedented level of sensitivity. Proliferation of large, multidimensional astronomical data sets is stimulating the work on new methods and tools to handle the identification and classification challenge [3]. Given exponentially growing data rates, automated classification of variability types is quickly becoming a necessity. Taking humans out of the loop not only eliminates the subjective nature of visual classification, but is also an enabling factor for time-critical applications. Full automation is especially important for studies of explosive phenomena such as γ-ray bursts that require rapid follow-up observations before the event is over. While there is a general consensus that machine learning will provide a viable solution, the available algorithmic toolbox remains underutilized in astronomy by comparison with other fields such as genomics or market research. Part of the problem is the nature of astronomical data sets that tend to be dominated by a variety of irregularities. Not all algorithms can handle gracefully uneven time sampling, missing features, or sparsely populated high-dimensional spaces. More sophisticated algorithms and better tools available in standard software packages are required to facilitate the adoption of machine learning in astronomy. The goal of this chapter is to show a number of successful applications of state-of-the-art machine learning methodology to time-resolved astronomical data, illustrate what is possible today, and help identify areas for further research and development. After a brief comparison of the utility of various machine learning classifiers, the discussion focuses on support vector machines (SVM), neural nets, and self-organizing maps. Traditionally, to detect and classify transient variability astronomers used ad hoc scan statistics. These methods will remain important as feature extractors for input into generic machine learning algorithms. Experience shows that the performance of machine learning tools on astronomical data critically depends on the definition and quality of the input features, and that a considerable amount of preprocessing is required before standard algorithms can be applied. However, with continued investments of effort by a growing number of astro-informatics savvy computer scientists and astronomers the much-needed expertise and infrastructure are growing faster than ever.
Speed-Selector Guard For Machine Tool
NASA Technical Reports Server (NTRS)
Shakhshir, Roda J.; Valentine, Richard L.
1992-01-01
Simple guardplate prevents accidental reversal of direction of rotation or sudden change of speed of lathe, milling machine, or other machine tool. Custom-made for specific machine and control settings. Allows control lever to be placed at only one setting. Operator uses handle to slide guard to engage or disengage control lever. Protects personnel from injury and equipment from damage occurring if speed- or direction-control lever inadvertently placed in wrong position.
Parodi, Stefano; Manneschi, Chiara; Verda, Damiano; Ferrari, Enrico; Muselli, Marco
2018-03-01
This study evaluates the performance of a set of machine learning techniques in predicting the prognosis of Hodgkin's lymphoma using clinical factors and gene expression data. Analysed samples from 130 Hodgkin's lymphoma patients included a small set of clinical variables and more than 54,000 gene features. Machine learning classifiers included three black-box algorithms ( k-nearest neighbour, Artificial Neural Network, and Support Vector Machine) and two methods based on intelligible rules (Decision Tree and the innovative Logic Learning Machine method). Support Vector Machine clearly outperformed any of the other methods. Among the two rule-based algorithms, Logic Learning Machine performed better and identified a set of simple intelligible rules based on a combination of clinical variables and gene expressions. Decision Tree identified a non-coding gene ( XIST) involved in the early phases of X chromosome inactivation that was overexpressed in females and in non-relapsed patients. XIST expression might be responsible for the better prognosis of female Hodgkin's lymphoma patients.
Machining of Aircraft Titanium with Abrasive-Waterjets for Fatigue Critical Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, H. T.; Hovanski, Yuri; Dahl, Michael E.
2010-10-04
Laboratory tests were conducted to determine the fatigue performance of AWJ-machined aircraft titanium. Dog-bone specimens machined with AWJs were prepared and tested with and without sanding and dry-grit blasting with Al2O3 as secondary processes. The secondary processes were applied to remove the visual appearance of AWJ-generated striations and to clean up the garnet embedment. The fatigue performance of AWJ-machined specimens was compared with baseline specimens machined with CNC milling. Fatigue test results not only confirmed the findings of the aluminum dog-bone specimens but also further enhance the fatigue performance. In addition, titanium is known to be notoriously difficult to cutmore » with contact tools while AWJs cut it 34% faster than stainless steel. AWJ cutting and dry-grit blasting are shown to be a preferred combination for processing aircraft titanium that is fatigue critical.« less
Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer
2017-04-01
Recently, Artificial Intelligence (AI) has been used widely in medicine and health care sector. In machine learning, the classification or prediction is a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions for the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care have critically reviewed. Furthermore, the most famous machine learning methods have explained, and the confusion between a statistical approach and machine learning has clarified. A review of related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.
Machine characterization and benchmark performance prediction
NASA Technical Reports Server (NTRS)
Saavedra-Barrera, Rafael H.
1988-01-01
From runs of standard benchmarks or benchmark suites, it is not possible to characterize the machine nor to predict the run time of other benchmarks which have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray-X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Madelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone and Sieve of Erathostenes.
Cyber-Attack Methods, Why They Work on Us, and What to Do
NASA Technical Reports Server (NTRS)
Byrne, DJ
2015-01-01
Basic cyber-attack methods are well documented, and even automated with user-friendly GUIs (Graphical User Interfaces). Entire suites of attack tools are legal, conveniently packaged, and freely downloadable to anyone; more polished versions are sold with vendor support. Our team ran some of these against a selected set of projects within our organization to understand what the attacks do so that we can design and validate defenses against them. Some existing defenses were effective against the attacks, some less so. On average, every machine had twelve easily identifiable vulnerabilities, two of them "critical". Roughly 5% of passwords in use were easily crack-able. We identified a clear set of recommendations for each project, and some common patterns that emerged among them all.
A Senior Project-Based Multiphase Motor Drive System Development
ERIC Educational Resources Information Center
Abdel-Khalik, Ayman S.; Massoud, Ahmed M.; Ahmed, Shehab
2016-01-01
Adjustable-speed drives based on multiphase motors are of significant interest for safety-critical applications that necessitate wide fault-tolerant capabilities and high system reliability. Although multiphase machines are based on the same conceptual theory as three-phase machines, most undergraduate electrical machines and electric drives…
A multi-label learning based kernel automatic recommendation method for support vector machine.
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid on constructing new kernels and choosing suitable parameter values for a specific kernel function, but less on kernel selection. Furthermore, most of current kernel selection methods focus on seeking a best kernel with the highest classification accuracy via cross-validation, they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.
A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid on constructing new kernels and choosing suitable parameter values for a specific kernel function, but less on kernel selection. Furthermore, most of current kernel selection methods focus on seeking a best kernel with the highest classification accuracy via cross-validation, they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896
NASA Astrophysics Data System (ADS)
Rohmer, Jeremy; Idier, Deborah; Bulteau, Thomas; Paris, François
2016-04-01
From a risk management perspective, it can be of high interest to identify the critical set of offshore conditions that lead to inundation on key assets for the studied territory (e.g., assembly points, evacuation routes, hospitals, etc.). This inverse approach of risk assessment (Idier et al., NHESS, 2013) can be of primary importance either for the estimation of the coastal flood hazard return period or for constraining the early warning networks based on hydro-meteorological forecast or observations. However, full-process based models for coastal flooding simulation have very large computational time cost (typically of several hours), which often limits the analysis to a few scenarios. Recently, it has been shown that meta-modelling approaches can efficiently handle this difficulty (e.g., Rohmer & Idier, NHESS, 2012). Yet, the full-process based models are expected to present strong non-linearities (non-regularities) or shocks (discontinuities), i.e. dynamics controlled by thresholds. For instance, in case of coastal defense, the dynamics is characterized first by a linear behavior of the waterline position (increase with increasing offshore conditions), as long as there is no overtopping, and then by a very strong increase (as soon as the offshore conditions are energetic enough to lead to wave overtopping, and then overflow). Such behavior might make the training phase of the meta-model very tedious. In the present study, we propose to explore the feasibility of active learning techniques, aka semi-supervised machine learning, to track the set of critical conditions with a reduced number of long-running simulations. The basic idea relies on identifying the simulation scenarios which should both reduce the meta-model error and improve the prediction of the critical contour of interest. To overcome the afore-described difficulty related to non-regularity, we rely on Support Vector Machines, which have shown very high performance for structural reliability assessment. The developments are done on a cross-shore case, using the process-based SWASH model. The related computational time is 10 hours for a single run. The dynamic forcing conditions are parametrized by several factors (storm surge S, significant wave height Hs, dephasing between tide and surge, etc.). In particular, we validated the approach with respect to a reference set of 400 long-running simulations in the domain of (S ; Hs). Our tests showed that the tracking of the critical contour can be achieved with a reasonable number of long-running simulations of a few tens.
NASA Astrophysics Data System (ADS)
Mishonov, Todor M.; Varonov, Albert M.; Maksimovski, Dejan D.; Manolev, Stojan G.; Gourev, Vassil N.; Yordanov, Vasil G.
2017-03-01
An experimental set-up for electrostatic measurement of {\\varepsilon }0, separate magnetostatic measurement of {μ }0 and determination of the speed of light c=1/\\sqrt{{\\varepsilon }0{μ }0} according to Maxwell’s theory with percent accuracy is described. No forces are measured with the experimental set-up, therefore there is no need for a scale, and the experiment cost of less than £20 is mainly due to the batteries used. Multiplied 137 times, this experimental set-up was given at the Fourth Open International Experimental Physics Olympiad (EPO4) and a dozen high school students performed successful experiments. The experimental set-up actually contains two different pendula for electric and magnetic measurements. In the magnetic experiment the pendulum is constituted by a magnetic coil attracted to a fixed one. In the electrostatic pendulum when the distance between the plates becomes shorter than a critical value the suspended plate catastrophically sticks to the fixed one, while in the magnetic pendulum the same occurs when the current in the coils becomes greater than a certain critical value. The basic idea of the methodology is to use the loss of stability as a tool for the determination of fundamental constants.
2015-01-08
RATANA MEEKHAM, AN ELECTRICAL INTEGRATION TECHNICIAN FOR QUALIS CORP. OF HUNTSVILLE, ALABAMA, HELPS TEST AVIONICS -- COMPLEX VEHICLE SYSTEMS ENABLING NAVIGATION, COMMUNICATIONS AND OTHER FUNCTIONS CRITICAL TO HUMAN SPACEFLIGHT -- FOR THE SPACE LAUNCH SYSTEM PROGRAM AT NASA’S MARSHALL SPACE FLIGHT CENTER IN HUNTSVILLE, ALABAMA. HER WORK SUPPORTS THE NASA ENGINEERING & SCIENCE SERVICES AND SKILLS AUGMENTATION CONTRACT LED BY JACOBS ENGINEERING OF HUNTSVILLE. MEEKHAM WORKS FULL-TIME AT MARSHALL WHILE FINISHING HER ASSOCIATE'S DEGREE IN MACHINE TOOL TECHNOLOGY AT CALHOUN COMMUNITY COLLEGE IN DECATUR, ALABAMA. THE SPACE LAUNCH SYSTEM, NASA’S NEXT HEAVY-LIFT LAUNCH VEHICLE, IS THE WORLD’S MOST POWERFUL ROCKET, SET TO FLY ITS FIRST UNCREWED LUNAR ORBITAL MISSION IN 2018. ITS FIRST.
Luo, Wei; Phung, Dinh; Tran, Truyen; Gupta, Sunil; Rana, Santu; Karmakar, Chandan; Shilton, Alistair; Yearwood, John; Dimitrova, Nevenka; Ho, Tu Bao; Venkatesh, Svetha; Berk, Michael
2016-12-16
As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. ©Wei Luo, Dinh Phung, Truyen Tran, Sunil Gupta, Santu Rana, Chandan Karmakar, Alistair Shilton, John Yearwood, Nevenka Dimitrova, Tu Bao Ho, Svetha Venkatesh, Michael Berk. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 16.12.2016.
A machine learning approach for classification of anatomical coverage in CT
NASA Astrophysics Data System (ADS)
Wang, Xiaoyong; Lo, Pechin; Ramakrishna, Bharath; Goldin, Johnathan; Brown, Matthew
2016-03-01
Automatic classification of anatomical coverage of medical images is critical for big data mining and as a pre-processing step to automatically trigger specific computer aided diagnosis systems. The traditional way to identify scans through DICOM headers has various limitations due to manual entry of series descriptions and non-standardized naming conventions. In this study, we present a machine learning approach where multiple binary classifiers were used to classify different anatomical coverages of CT scans. A one-vs-rest strategy was applied. For a given training set, a template scan was selected from the positive samples and all other scans were registered to it. Each registered scan was then evenly split into k × k × k non-overlapping blocks and for each block the mean intensity was computed. This resulted in a 1 × k3 feature vector for each scan. The feature vectors were then used to train a SVM based classifier. In this feasibility study, four classifiers were built to identify anatomic coverages of brain, chest, abdomen-pelvis, and chest-abdomen-pelvis CT scans. Each classifier was trained and tested using a set of 300 scans from different subjects, composed of 150 positive samples and 150 negative samples. Area under the ROC curve (AUC) of the testing set was measured to evaluate the performance in a two-fold cross validation setting. Our results showed good classification performance with an average AUC of 0.96.
Single bus star connected reluctance drive and method
Fahimi, Babak; Shamsi, Pourya
2016-05-10
A system and methods for operating a switched reluctance machine includes a controller, an inverter connected to the controller and to the switched reluctance machine, a hysteresis control connected to the controller and to the inverter, a set of sensors connected to the switched reluctance machine and to the controller, the switched reluctance machine further including a set of phases the controller further comprising a processor and a memory connected to the processor, wherein the processor programmed to execute a control process and a generation process.
Application of Abrasive-Waterjets for Machining Fatigue-Critical Aircraft Aluminum Parts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, H T; Hovanski, Yuri; Dahl, Michael E
2010-08-19
Current specifications require AWJ-cut aluminum parts for fatigue critical aerospace structures to go through subsequent processing due to concerns of degradation in fatigue performance. The requirement of secondary process for AWJ-machined parts greatly negates the cost effectiveness of waterjet technology. Some cost savings are envisioned if it can be shown that AWJ net cut parts have comparable durability properties as those conventionally machined. To revisit and upgrade the specifications for AWJ machining of aircraft aluminum, “Dog-bone” specimens, with and without secondary processes, were prepared for independent fatigue tests at Boeing and Pacific Northwest National Laboratory (PNNL). Test results show thatmore » the fatigue life is proportional to quality levels of machined edges or inversely proportional to the surface roughness Ra . Even at highest quality level, the average fatigue life of AWJ-machined parts is about 30% shorter than those of conventionally machined counterparts. Between two secondary processes, dry-grit blasting with aluminum oxide abrasives until the striation is removed visually yields excellent result. It actually prolongs the fatigue life of parts at least three times higher than that achievable with conventional machining. Dry-grit blasting is relatively simple and inexpensive to administrate and, equally important, alleviates the concerns of garnet embedment.« less
Reliability Centred Maintenance (RCM) Analysis of Laser Machine in Filling Lithos at PT X
NASA Astrophysics Data System (ADS)
Suryono, M. A. E.; Rosyidi, C. N.
2018-03-01
PT. X used automated machines which work for sixteen hours per day. Therefore, the machines should be maintained to keep the availability of the machines. The aim of this research is to determine maintenance tasks according to the cause of component’s failure using Reliability Centred Maintenance (RCM) and determine the amount of optimal inspection frequency which must be performed to the machine at filling lithos process. In this research, RCM is used as an analysis tool to determine the critical component and find optimal inspection frequencies to maximize machine’s reliability. From the analysis, we found that the critical machine in filling lithos process is laser machine in Line 2. Then we proceed to determine the cause of machine’s failure. Lastube component has the highest Risk Priority Number (RPN) among other components such as power supply, lens, chiller, laser siren, encoder, conveyor, and mirror galvo. Most of the components have operational consequences and the others have hidden failure consequences and safety consequences. Time-directed life-renewal task, failure finding task, and servicing task can be used to overcome these consequences. The results of data analysis show that the inspection must be performed once a month for laser machine in the form of preventive maintenance to lowering the downtime.
Li, Linglong; Yang, Yaodong; Zhang, Dawei; ...
2018-03-30
Exploration of phase transitions and construction of associated phase diagrams are of fundamental importance for condensed matter physics and materials science alike, and remain the focus of extensive research for both theoretical and experimental studies. For the latter, comprehensive studies involving scattering, thermodynamics, and modeling are typically required. We present a new approach to data mining multiple realizations of collective dynamics, measured through piezoelectric relaxation studies, to identify the onset of a structural phase transition in nanometer-scale volumes, that is, the probed volume of an atomic force microscope tip. Machine learning is used to analyze the multidimensional data sets describingmore » relaxation to voltage and thermal stimuli, producing the temperature-bias phase diagram for a relaxor crystal without the need to measure (or know) the order parameter. The suitability of the approach to determine the phase diagram is shown with simulations based on a two-dimensional Ising model. Finally, these results indicate that machine learning approaches can be used to determine phase transitions in ferroelectrics, providing a general, statistically significant, and robust approach toward determining the presence of critical regimes and phase boundaries.« less
Recognizing sights, smells, and sounds with gnostic fields.
Kanan, Christopher
2013-01-01
Mammals rely on vision, audition, and olfaction to remotely sense stimuli in their environment. Determining how the mammalian brain uses this sensory information to recognize objects has been one of the major goals of psychology and neuroscience. Likewise, researchers in computer vision, machine audition, and machine olfaction have endeavored to discover good algorithms for stimulus classification. Almost 50 years ago, the neuroscientist Jerzy Konorski proposed a theoretical model in his final monograph in which competing sets of "gnostic" neurons sitting atop sensory processing hierarchies enabled stimuli to be robustly categorized, despite variations in their presentation. Much of what Konorski hypothesized has been remarkably accurate, and neurons with gnostic-like properties have been discovered in visual, aural, and olfactory brain regions. Surprisingly, there have not been any attempts to directly transform his theoretical model into a computational one. Here, I describe the first computational implementation of Konorski's theory. The model is not domain specific, and it surpasses the best machine learning algorithms on challenging image, music, and olfactory classification tasks, while also being simpler. My results suggest that criticisms of exemplar-based models of object recognition as being computationally intractable due to limited neural resources are unfounded.
Recognizing Sights, Smells, and Sounds with Gnostic Fields
Kanan, Christopher
2013-01-01
Mammals rely on vision, audition, and olfaction to remotely sense stimuli in their environment. Determining how the mammalian brain uses this sensory information to recognize objects has been one of the major goals of psychology and neuroscience. Likewise, researchers in computer vision, machine audition, and machine olfaction have endeavored to discover good algorithms for stimulus classification. Almost 50 years ago, the neuroscientist Jerzy Konorski proposed a theoretical model in his final monograph in which competing sets of “gnostic” neurons sitting atop sensory processing hierarchies enabled stimuli to be robustly categorized, despite variations in their presentation. Much of what Konorski hypothesized has been remarkably accurate, and neurons with gnostic-like properties have been discovered in visual, aural, and olfactory brain regions. Surprisingly, there have not been any attempts to directly transform his theoretical model into a computational one. Here, I describe the first computational implementation of Konorski's theory. The model is not domain specific, and it surpasses the best machine learning algorithms on challenging image, music, and olfactory classification tasks, while also being simpler. My results suggest that criticisms of exemplar-based models of object recognition as being computationally intractable due to limited neural resources are unfounded. PMID:23365648
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Linglong; Yang, Yaodong; Zhang, Dawei
Exploration of phase transitions and construction of associated phase diagrams are of fundamental importance for condensed matter physics and materials science alike, and remain the focus of extensive research for both theoretical and experimental studies. For the latter, comprehensive studies involving scattering, thermodynamics, and modeling are typically required. We present a new approach to data mining multiple realizations of collective dynamics, measured through piezoelectric relaxation studies, to identify the onset of a structural phase transition in nanometer-scale volumes, that is, the probed volume of an atomic force microscope tip. Machine learning is used to analyze the multidimensional data sets describingmore » relaxation to voltage and thermal stimuli, producing the temperature-bias phase diagram for a relaxor crystal without the need to measure (or know) the order parameter. The suitability of the approach to determine the phase diagram is shown with simulations based on a two-dimensional Ising model. Finally, these results indicate that machine learning approaches can be used to determine phase transitions in ferroelectrics, providing a general, statistically significant, and robust approach toward determining the presence of critical regimes and phase boundaries.« less
An active role for machine learning in drug development
Murphy, Robert F.
2014-01-01
Due to the complexity of biological systems, cutting-edge machine-learning methods will be critical for future drug development. In particular, machine-vision methods to extract detailed information from imaging assays and active-learning methods to guide experimentation will be required to overcome the dimensionality problem in drug development. PMID:21587249
NASA Astrophysics Data System (ADS)
Song, Z.; Guo, P.; Liu, Y.
2014-03-01
The influence of unbalanced magnetic pull (UMP) and hydraulic seal force on the vibration of large rotor-bearing systems is studied. The UMP caused by rotor eccentricity imposes important effects on rotating machinery, especially for large generators such as water turbine generator sets, because these machines operate above their first critical speed in some instances and are supported by oil film bearings. A magnetic stiffness matrix for studying the effects of the UMP is proposed. The magnetic stiffness matrix can be generated by decomposing the expression of air gap magnetic field energy. Two vibration models are constructed using the Lagrange equation. The difference between the two models lies in the boundary support condition: one has rigid support and the other has elastic bearing support. The influence of the magnetic stiffness and elastic support on the critical speed of the rotor is studied using Lyapunov nonlinear vibration stability theory. The vibration amplitude of the rotor is calculated, taking the magnetic stiffness and horizontal centrifugal force into account. The unbalanced hydraulic seal force is produced because of the asymmetry of seal clearance. This imbalance is one of the factors that causes self-excited vibration in rotating machinery, and is as important as the UMP for large water turbine generator sets. The rotor-bearing system is supported by an oil film journal bearing, whose characteristic also impose considerable influence on vibration. On the basis of the above-mentioned conditions, a three-dimensional finite element model of the rotating system that includes the oil film journal bearing is constructed. The effect of the UMP and unbalanced hydraulic seal force is considered in the construction, and studied in relation to the magnetic parameters, seal parameters, journal bearing stiffness, and outer diameter of the rotating machine critical speed. Conclusions may benefit the dynamic design and optimized operation of large rotating machinery.
Stirling machine operating experience
NASA Technical Reports Server (NTRS)
Ross, Brad; Dudenhoefer, James E.
1991-01-01
Numerous Stirling machines have been built and operated, but the operating experience of these machines is not well known. It is important to examine this operating experience in detail, because it largely substantiates the claim that Stirling machines are capable of reliable and lengthy lives. The amount of data that exists is impressive, considering that many of the machines that have been built are developmental machines intended to show proof of concept, and were not expected to operate for any lengthy period of time. Some Stirling machines (typically free-piston machines) achieve long life through non-contact bearings, while other Stirling machines (typically kinematic) have achieved long operating lives through regular seal and bearing replacements. In addition to engine and system testing, life testing of critical components is also considered.
Dynamic analysis and vibration testing of CFRP drive-line system used in heavy-duty machine tool
NASA Astrophysics Data System (ADS)
Yang, Mo; Gui, Lin; Hu, Yefa; Ding, Guoping; Song, Chunsheng
2018-03-01
Low critical rotary speed and large vibration in the metal drive-line system of heavy-duty machine tool affect the machining precision seriously. Replacing metal drive-line with the CFRP drive-line can effectively solve this problem. Based on the composite laminated theory and the transfer matrix method (TMM), this paper puts forward a modified TMM to analyze dynamic characteristics of CFRP drive-line system. With this modified TMM, the CFRP drive-line of a heavy vertical miller is analyzed. And the finite element modal analysis model of the shafting is established. The results of the modified TMM and finite element analysis (FEA) show that the modified TMM can effectively predict the critical rotary speed of CFRP drive-line. And the critical rotary speed of CFRP drive-line is 20% higher than that of the original metal drive-line. Then, the vibration of the CFRP and the metal drive-line were tested. The test results show that application of the CFRP drive shaft in the drive-line can effectively reduce the vibration of the heavy-duty machine tool.
Online learning control using adaptive critic designs with sparse kernel machines.
Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo
2013-05-01
In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
Tan, Chee-Heng; Teh, Ying-Wah
2013-08-01
The main obstacles in mass adoption of cloud computing for database operations in healthcare organization are the data security and privacy issues. In this paper, it is shown that IT services particularly in hardware performance evaluation in virtual machine can be accomplished effectively without IT personnel gaining access to actual data for diagnostic and remediation purposes. The proposed mechanisms utilized the hypothetical data from TPC-H benchmark, to achieve 2 objectives. First, the underlying hardware performance and consistency is monitored via a control system, which is constructed using TPC-H queries. Second, the mechanism to construct stress-testing scenario is envisaged in the host, using a single or combination of TPC-H queries, so that the resource threshold point can be verified, if the virtual machine is still capable of serving critical transactions at this constraining juncture. This threshold point uses server run queue size as input parameter, and it serves 2 purposes: It provides the boundary threshold to the control system, so that periodic learning of the synthetic data sets for performance evaluation does not reach the host's constraint level. Secondly, when the host undergoes hardware change, stress-testing scenarios are simulated in the host by loading up to this resource threshold level, for subsequent response time verification from real and critical transactions.
Machining of Aircraft Titanium with Abrasive-Waterjets for Fatigue Critical Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, H. T.; Hovanski, Yuri; Dahl, Michael E.
2012-02-01
Laboratory tests were conducted to determine the fatigue performance of abrasive-waterjet- (AWJ-) machined aircraft titanium. Dog-bone specimens machined with AWJs were prepared and tested with and without sanding and dry-grit blasting with Al2O3 as secondary processes. The secondary processes were applied to remove the visual appearance of AWJ-generated striations and to clean up the garnet embedment. The fatigue performance of AWJ-machined specimens was compared with baseline specimens machined with CNC milling. Fatigue test results of the titanium specimens not only confirmed our previous findings in aluminum dog-bone specimens but in comparison also further enhanced the fatigue performance of the titanium.more » In addition, titanium is known to be difficult to cut, particularly for thick parts, however AWJs cut the material 34% faster han stainless steel. AWJ cutting and dry-grit blasting are shown to be a preferred ombination for processing aircraft titanium that is fatigue critical.« less
Eitrich, T; Kless, A; Druska, C; Meyer, W; Grotendorst, J
2007-01-01
In this paper, we study the classifications of unbalanced data sets of drugs. As an example we chose a data set of 2D6 inhibitors of cytochrome P450. The human cytochrome P450 2D6 isoform plays a key role in the metabolism of many drugs in the preclinical drug discovery process. We have collected a data set from annotated public data and calculated physicochemical properties with chemoinformatics methods. On top of this data, we have built classifiers based on machine learning methods. Data sets with different class distributions lead to the effect that conventional machine learning methods are biased toward the larger class. To overcome this problem and to obtain sensitive but also accurate classifiers we combine machine learning and feature selection methods with techniques addressing the problem of unbalanced classification, such as oversampling and threshold moving. We have used our own implementation of a support vector machine algorithm as well as the maximum entropy method. Our feature selection is based on the unsupervised McCabe method. The classification results from our test set are compared structurally with compounds from the training set. We show that the applied algorithms enable the effective high throughput in silico classification of potential drug candidates.
Peng, Jiangjun; Leung, Yee; Leung, Kwong-Sak; Wong, Man-Hon; Lu, Gang; Ballester, Pedro J.
2018-01-01
It has recently been claimed that the outstanding performance of machine-learning scoring functions (SFs) is exclusively due to the presence of training complexes with highly similar proteins to those in the test set. Here, we revisit this question using 24 similarity-based training sets, a widely used test set, and four SFs. Three of these SFs employ machine learning instead of the classical linear regression approach of the fourth SF (X-Score which has the best test set performance out of 16 classical SFs). We have found that random forest (RF)-based RF-Score-v3 outperforms X-Score even when 68% of the most similar proteins are removed from the training set. In addition, unlike X-Score, RF-Score-v3 is able to keep learning with an increasing training set size, becoming substantially more predictive than X-Score when the full 1105 complexes are used for training. These results show that machine-learning SFs owe a substantial part of their performance to training on complexes with dissimilar proteins to those in the test set, against what has been previously concluded using the same data. Given that a growing amount of structural and interaction data will be available from academic and industrial sources, this performance gap between machine-learning SFs and classical SFs is expected to enlarge in the future. PMID:29538331
Li, Hongjian; Peng, Jiangjun; Leung, Yee; Leung, Kwong-Sak; Wong, Man-Hon; Lu, Gang; Ballester, Pedro J
2018-03-14
It has recently been claimed that the outstanding performance of machine-learning scoring functions (SFs) is exclusively due to the presence of training complexes with highly similar proteins to those in the test set. Here, we revisit this question using 24 similarity-based training sets, a widely used test set, and four SFs. Three of these SFs employ machine learning instead of the classical linear regression approach of the fourth SF (X-Score which has the best test set performance out of 16 classical SFs). We have found that random forest (RF)-based RF-Score-v3 outperforms X-Score even when 68% of the most similar proteins are removed from the training set. In addition, unlike X-Score, RF-Score-v3 is able to keep learning with an increasing training set size, becoming substantially more predictive than X-Score when the full 1105 complexes are used for training. These results show that machine-learning SFs owe a substantial part of their performance to training on complexes with dissimilar proteins to those in the test set, against what has been previously concluded using the same data. Given that a growing amount of structural and interaction data will be available from academic and industrial sources, this performance gap between machine-learning SFs and classical SFs is expected to enlarge in the future.
Haque, M Muksitul; Holder, Lawrence B; Skinner, Michael K
2015-01-01
Environmentally induced epigenetic transgenerational inheritance of disease and phenotypic variation involves germline transmitted epimutations. The primary epimutations identified involve altered differential DNA methylation regions (DMRs). Different environmental toxicants have been shown to promote exposure (i.e., toxicant) specific signatures of germline epimutations. Analysis of genomic features associated with these epimutations identified low-density CpG regions (<3 CpG / 100bp) termed CpG deserts and a number of unique DNA sequence motifs. The rat genome was annotated for these and additional relevant features. The objective of the current study was to use a machine learning computational approach to predict all potential epimutations in the genome. A number of previously identified sperm epimutations were used as training sets. A novel machine learning approach using a sequential combination of Active Learning and Imbalance Class Learner analysis was developed. The transgenerational sperm epimutation analysis identified approximately 50K individual sites with a 1 kb mean size and 3,233 regions that had a minimum of three adjacent sites with a mean size of 3.5 kb. A select number of the most relevant genomic features were identified with the low density CpG deserts being a critical genomic feature of the features selected. A similar independent analysis with transgenerational somatic cell epimutation training sets identified a smaller number of 1,503 regions of genome-wide predicted sites and differences in genomic feature contributions. The predicted genome-wide germline (sperm) epimutations were found to be distinct from the predicted somatic cell epimutations. Validation of the genome-wide germline predicted sites used two recently identified transgenerational sperm epimutation signature sets from the pesticides dichlorodiphenyltrichloroethane (DDT) and methoxychlor (MXC) exposure lineage F3 generation. Analysis of this positive validation data set showed a 100% prediction accuracy for all the DDT-MXC sperm epimutations. Observations further elucidate the genomic features associated with transgenerational germline epimutations and identify a genome-wide set of potential epimutations that can be used to facilitate identification of epigenetic diagnostics for ancestral environmental exposures and disease susceptibility.
Nondimensional parameter for conformal grinding: combining machine and process parameters
NASA Astrophysics Data System (ADS)
Funkenbusch, Paul D.; Takahashi, Toshio; Gracewski, Sheryl M.; Ruckman, Jeffrey L.
1999-11-01
Conformal grinding of optical materials with CNC (Computer Numerical Control) machining equipment can be used to achieve precise control over complex part configurations. However complications can arise due to the need to fabricate complex geometrical shapes at reasonable production rates. For example high machine stiffness is essential, but the need to grind 'inside' small or highly concave surfaces may require use of tooling with less than ideal stiffness characteristics. If grinding generates loads sufficient for significant tool deflection, the programmed removal depth will not be achieved. Moreover since grinding load is a function of the volumetric removal rate the amount of load deflection can vary with location on the part, potentially producing complex figure errors. In addition to machine/tool stiffness and removal rate, load generation is a function of the process parameters. For example by reducing the feed rate of the tool into the part, both the load and resultant deflection/removal error can be decreased. However this must be balanced against the need for part through put. In this paper a simple model which permits combination of machine stiffness and process parameters into a single non-dimensional parameter is adapted for a conformal grinding geometry. Errors in removal can be minimized by maintaining this parameter above a critical value. Moreover, since the value of this parameter depends on the local part geometry, it can be used to optimize process settings during grinding. For example it may be used to guide adjustment of the feed rate as a function of location on the part to eliminate figure errors while minimizing the total grinding time required.
Research on criticality analysis method of CNC machine tools components under fault rate correlation
NASA Astrophysics Data System (ADS)
Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han
2018-02-01
In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on the fault mechanism analysis, the component fault relation is determined, and the adjacency matrix is introduced to describe it. Then, the fault structure relation is hierarchical by using the interpretive structure model (ISM). Assuming that the impact of the fault obeys the Markov process, the fault association matrix is described and transformed, and the Pagerank algorithm is used to determine the relative influence values, combined component fault rate under time correlation can obtain comprehensive fault rate. Based on the fault mode frequency and fault influence, the criticality of the components under the fault rate correlation is determined, and the key components are determined to provide the correct basis for equationting the reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
Using Machine Learning to Predict MCNP Bias
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grechanuk, Pavel Aleksandrovi
For many real-world applications in radiation transport where simulations are compared to experimental measurements, like in nuclear criticality safety, the bias (simulated - experimental k eff) in the calculation is an extremely important quantity used for code validation. The objective of this project is to accurately predict the bias of MCNP6 [1] criticality calculations using machine learning (ML) algorithms, with the intention of creating a tool that can complement the current nuclear criticality safety methods. In the latest release of MCNP6, the Whisper tool is available for criticality safety analysts and includes a large catalogue of experimental benchmarks, sensitivity profiles,more » and nuclear data covariance matrices. This data, coming from 1100+ benchmark cases, is used in this study of ML algorithms for criticality safety bias predictions.« less
Laser Machining of Melt Infiltrated Ceramic Matrix Composite
NASA Technical Reports Server (NTRS)
Jarmon, D. C.; Ojard, G.; Brewer, D.
2012-01-01
As interest grows in considering the use of ceramic matrix composites for critical components, the effects of different machining techniques, and the resulting machined surfaces, on strength need to be understood. This work presents the characterization of a Melt Infiltrated SiC/SiC composite material system machined by different methods. While a range of machining approaches were initially considered, only diamond grinding and laser machining were investigated on a series of tensile coupons. The coupons were tested for residual tensile strength, after a stressed steam exposure cycle. The data clearly differentiated the laser machined coupons as having better capability for the samples tested. These results, along with micro-structural characterization, will be presented.
ODISEES: A New Paradigm in Data Access
NASA Astrophysics Data System (ADS)
Huffer, E.; Little, M. M.; Kusterer, J.
2013-12-01
As part of its ongoing efforts to improve access to data, the Atmospheric Science Data Center has developed a high-precision Earth Science domain ontology (the 'ES Ontology') implemented in a graph database ('the Semantic Metadata Repository') that is used to store detailed, semantically-enhanced, parameter-level metadata for ASDC data products. The ES Ontology provides the semantic infrastructure needed to drive the ASDC's Ontology-Driven Interactive Search Environment for Earth Science ('ODISEES'), a data discovery and access tool, and will support additional data services such as analytics and visualization. The ES ontology is designed on the premise that naming conventions alone are not adequate to provide the information needed by prospective data consumers to assess the suitability of a given dataset for their research requirements; nor are current metadata conventions adequate to support seamless machine-to-machine interactions between file servers and end-user applications. Data consumers need information not only about what two data elements have in common, but also about how they are different. End-user applications need consistent, detailed metadata to support real-time data interoperability. The ES ontology is a highly precise, bottom-up, queriable model of the Earth Science domain that focuses on critical details about the measurable phenomena, instrument techniques, data processing methods, and data file structures. Earth Science parameters are described in detail in the ES Ontology and mapped to the corresponding variables that occur in ASDC datasets. Variables are in turn mapped to well-annotated representations of the datasets that they occur in, the instrument(s) used to create them, the instrument platforms, the processing methods, etc., creating a linked-data structure that allows both human and machine users to access a wealth of information critical to understanding and manipulating the data. The mappings are recorded in the Semantic Metadata Repository as RDF-triples. An off-the-shelf Ontology Development Environment and a custom Metadata Conversion Tool comprise a human-machine/machine-machine hybrid tool that partially automates the creation of metadata as RDF-triples by interfacing with existing metadata repositories and providing a user interface that solicits input from a human user, when needed. RDF-triples are pushed to the Ontology Development Environment, where a reasoning engine executes a series of inference rules whose antecedent conditions can be satisfied by the initial set of RDF-triples, thereby generating the additional detailed metadata that is missing in existing repositories. A SPARQL Endpoint, a web-based query service and a Graphical User Interface allow prospective data consumers - even those with no familiarity with NASA data products - to search the metadata repository to find and order data products that meet their exact specifications. A web-based API will provide an interface for machine-to-machine transactions.
Walking robot: A design project for undergraduate students
NASA Technical Reports Server (NTRS)
1990-01-01
The design and construction of the University of Maryland walking machine was completed during the 1989 to 1990 academic year. It was required that the machine be capable of completing a number of tasks including walking a straight line, turning to change direction, and manuevering over an obstacle such as a set of stairs. The machine consists of two sets of four telescoping legs that alternately support the entire structure. A gear box and crank arm assembly is connected to the leg sets to provide the power required for the translational motion of the machine. By retracting all eight legs, the robot comes to rest on a central Bigfoot support. Turning is accomplished by rotating this machine about this support. The machine can be controlled by using either a user-operated remote tether or the onboard computer for the execution of control commands. Absolute encoders are attached to all motors to provide the control computer with information regarding the status of the motors. Long and short range infrared sensors provide the computer with feedback information regarding the machine's position relative to a series of stripes and reflectors. These infrared sensors simulate how the robot might sense and gain information about the environment of Mars.
Impact of Machine Virtualization on Timing Precision for Performance-critical Tasks
NASA Astrophysics Data System (ADS)
Karpov, Kirill; Fedotova, Irina; Siemens, Eduard
2017-07-01
In this paper we present a measurement study to characterize the impact of hardware virtualization on basic software timing, as well as on precise sleep operations of an operating system. We investigated how timer hardware is shared among heavily CPU-, I/O- and Network-bound tasks on a virtual machine as well as on the host machine. VMware ESXi and QEMU/KVM have been chosen as commonly used examples of hypervisor- and host-based models. Based on statistical parameters of retrieved distributions, our results provide a very good estimation of timing behavior. It is essential for real-time and performance-critical applications such as image processing or real-time control.
2011-01-01
Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements. PMID:21798025
Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott
2011-07-28
Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements.
Machine learning: novel bioinformatics approaches for combating antimicrobial resistance.
Macesic, Nenad; Polubriaginof, Fernanda; Tatonetti, Nicholas P
2017-12-01
Antimicrobial resistance (AMR) is a threat to global health and new approaches to combating AMR are needed. Use of machine learning in addressing AMR is in its infancy but has made promising steps. We reviewed the current literature on the use of machine learning for studying bacterial AMR. The advent of large-scale data sets provided by next-generation sequencing and electronic health records make applying machine learning to the study and treatment of AMR possible. To date, it has been used for antimicrobial susceptibility genotype/phenotype prediction, development of AMR clinical decision rules, novel antimicrobial agent discovery and antimicrobial therapy optimization. Application of machine learning to studying AMR is feasible but remains limited. Implementation of machine learning in clinical settings faces barriers to uptake with concerns regarding model interpretability and data quality.Future applications of machine learning to AMR are likely to be laboratory-based, such as antimicrobial susceptibility phenotype prediction.
Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean
2017-12-04
Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many of pharmaceutical applications from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction for many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets that is applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole cell screens, individual proteins, physicochemical properties as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient and others. Based on ranked normalized scores for the metrics or data sets Deep Neural Networks (DNN) ranked higher than SVM, which in turn was ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar type plots indicates when models are inferior or perhaps over trained. These results also suggest the need for assessing deep learning further using multiple metrics with much larger scale comparisons, prospective testing as well as assessment of different fingerprints and DNN architectures beyond those used.
Multispectral image analysis for object recognition and classification
NASA Astrophysics Data System (ADS)
Viau, C. R.; Payeur, P.; Cretu, A.-M.
2016-05-01
Computer and machine vision applications are used in numerous fields to analyze static and dynamic imagery in order to assist or automate decision-making processes. Advancements in sensor technologies now make it possible to capture and visualize imagery at various wavelengths (or bands) of the electromagnetic spectrum. Multispectral imaging has countless applications in various fields including (but not limited to) security, defense, space, medical, manufacturing and archeology. The development of advanced algorithms to process and extract salient information from the imagery is a critical component of the overall system performance. The fundamental objective of this research project was to investigate the benefits of combining imagery from the visual and thermal bands of the electromagnetic spectrum to improve the recognition rates and accuracy of commonly found objects in an office setting. A multispectral dataset (visual and thermal) was captured and features from the visual and thermal images were extracted and used to train support vector machine (SVM) classifiers. The SVM's class prediction ability was evaluated separately on the visual, thermal and multispectral testing datasets.
Accurate prediction of X-ray pulse properties from a free-electron laser using machine learning
Sanchez-Gonzalez, A.; Micaelli, P.; Olivier, C.; ...
2017-06-05
Free-electron lasers providing ultra-short high-brightness pulses of X-ray radiation have great potential for a wide impact on science, and are a critical element for unravelling the structural dynamics of matter. To fully harness this potential, we must accurately know the X-ray properties: intensity, spectrum and temporal profile. Owing to the inherent fluctuations in free-electron lasers, this mandates a full characterization of the properties for each and every pulse. While diagnostics of these properties exist, they are often invasive and many cannot operate at a high-repetition rate. Here, we present a technique for circumventing this limitation. Employing a machine learning strategy,more » we can accurately predict X-ray properties for every shot using only parameters that are easily recorded at high-repetition rate, by training a model on a small set of fully diagnosed pulses. Lastly, this opens the door to fully realizing the promise of next-generation high-repetition rate X-ray lasers.« less
Accurate prediction of X-ray pulse properties from a free-electron laser using machine learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez-Gonzalez, A.; Micaelli, P.; Olivier, C.
Free-electron lasers providing ultra-short high-brightness pulses of X-ray radiation have great potential for a wide impact on science, and are a critical element for unravelling the structural dynamics of matter. To fully harness this potential, we must accurately know the X-ray properties: intensity, spectrum and temporal profile. Owing to the inherent fluctuations in free-electron lasers, this mandates a full characterization of the properties for each and every pulse. While diagnostics of these properties exist, they are often invasive and many cannot operate at a high-repetition rate. Here, we present a technique for circumventing this limitation. Employing a machine learning strategy,more » we can accurately predict X-ray properties for every shot using only parameters that are easily recorded at high-repetition rate, by training a model on a small set of fully diagnosed pulses. Lastly, this opens the door to fully realizing the promise of next-generation high-repetition rate X-ray lasers.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aliprantis, Dionysios; El-Sharkawi, Mohamed; Muljadi, Eduard
The main objective of this special issue is to collect and disseminate publications that highlight recent advances and breakthroughs in the area of renewable energy resources. The use of these resources for production of electricity is increasing rapidly worldwide. As of 2015, a majority of countries have set renewable electricity targets in the 10%-40% range to be achieved by 2020-2030, with a few notable exceptions aiming for 100% generation by renewables. We are experiencing a truly unprecedented transition away from fossil fuels, driven by environmental, energy security, and socio-economic factors.Electric machines can be found in a wide range of renewablemore » energy applications, such as wind turbines, hydropower and hydrokinetic systems, flywheel energy storage devices, and low-power energy harvesting systems. Hence, the design of reliable, efficient, cost-effective, and controllable electric machines is crucial in enabling even higher penetrations of renewable energy systems in the smart grid of the future. In addition, power electronic converter design and control is critical, as they provide essential controllability, flexibility, grid interface, and integration functions.« less
Legrain, Fleur; Carrete, Jesús; van Roekeghem, Ambroise; Madsen, Georg K H; Mingo, Natalio
2018-01-18
Machine learning (ML) is increasingly becoming a helpful tool in the search for novel functional compounds. Here we use classification via random forests to predict the stability of half-Heusler (HH) compounds, using only experimentally reported compounds as a training set. Cross-validation yields an excellent agreement between the fraction of compounds classified as stable and the actual fraction of truly stable compounds in the ICSD. The ML model is then employed to screen 71 178 different 1:1:1 compositions, yielding 481 likely stable candidates. The predicted stability of HH compounds from three previous high-throughput ab initio studies is critically analyzed from the perspective of the alternative ML approach. The incomplete consistency among the three separate ab initio studies and between them and the ML predictions suggests that additional factors beyond those considered by ab initio phase stability calculations might be determinant to the stability of the compounds. Such factors can include configurational entropies and quasiharmonic contributions.
National Machine Guarding Program: Part 2. Safety management in small metal fabrication enterprises.
Parker, David L; Yamin, Samuel C; Brosseau, Lisa M; Xi, Min; Gordon, Robert; Most, Ivan G; Stanley, Rodney
2015-11-01
Small manufacturing businesses often lack important safety programs. Many reasons have been set forth on why this has remained a persistent problem. The National Machine Guarding Program (NMGP) was a nationwide intervention conducted in partnership with two workers' compensation insurers. Insurance safety consultants collected baseline data in 221 business using a 33-question safety management audit. Audits were completed during an interview with the business owner or manager. Most measures of safety management improved with an increasing number of employees. This trend was particularly strong for lockout/tagout. However, size was only significant for businesses without a safety committee. Establishments with a safety committee scored higher (55% vs. 36%) on the safety management audit compared with those lacking a committee (P < 0.0001). Critical safety management programs were frequently absent. A safety committee appears to be a more important factor than business size in accounting for differences in outcome measures. © 2015 The Authors. American Journal of Industrial Medicine Published by Wiley Periodicals, Inc.
Machine Learning in the Presence of an Adversary: Attacking and Defending the SpamBayes Spam Filter
2008-05-20
Machine learning techniques are often used for decision making in security critical applications such as intrusion detection and spam filtering...filter. The defenses shown in this thesis are able to work against the attacks developed against SpamBayes and are sufficiently generic to be easily extended into other statistical machine learning algorithms.
The DoD Manufacturing Technology Program Strategic Plan: Delivering Defense Affordability
2009-03-01
58%) engineering time savings required for critical spares for the M2 Machine Gun , widely used by U.S. and NATO forces. 12 Report to Congress on...Machine Gun used by U.S. and NATO ground and sea forces. This 1930s-era legacy weapon system continues to experience critical spare parts shortages due...Missiles and the Mid-Range-Munition. Durable Gun Barrel Materials–Composite Overwrap Process. Future Combat Systems (FCS) could not meet weight and
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blackley, W.S.; Scattergood, R.O.
A new research initiative will be undertaken to investigate the critical cutting depth concepts for single point diamond turning of brittle, amorphous materials. Inorganic glasses and a brittle, thermoset polymer (organic glass) are the principal candidate materials. Interrupted cutting tests similar to those done in earlier research are Ge and Si crystals will be made to obtain critical depth values as a function of machining parameters. The results will provide systematic data with which to assess machining performance on glasses and amorphous materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Wenjian; Singh, Rajiv R. P.; Scalettar, Richard T.
Here, we apply unsupervised machine learning techniques, mainly principal component analysis (PCA), to compare and contrast the phase behavior and phase transitions in several classical spin models - the square and triangular-lattice Ising models, the Blume-Capel model, a highly degenerate biquadratic-exchange spin-one Ising (BSI) model, and the 2D XY model, and examine critically what machine learning is teaching us. We find that quantified principal components from PCA not only allow exploration of different phases and symmetry-breaking, but can distinguish phase transition types and locate critical points. We show that the corresponding weight vectors have a clear physical interpretation, which ismore » particularly interesting in the frustrated models such as the triangular antiferromagnet, where they can point to incipient orders. Unlike the other well-studied models, the properties of the BSI model are less well known. Using both PCA and conventional Monte Carlo analysis, we demonstrate that the BSI model shows an absence of phase transition and macroscopic ground-state degeneracy. The failure to capture the 'charge' correlations (vorticity) in the BSI model (XY model) from raw spin configurations points to some of the limitations of PCA. Finally, we employ a nonlinear unsupervised machine learning procedure, the 'antoencoder method', and demonstrate that it too can be trained to capture phase transitions and critical points.« less
Hu, Wenjian; Singh, Rajiv R. P.; Scalettar, Richard T.
2017-06-19
Here, we apply unsupervised machine learning techniques, mainly principal component analysis (PCA), to compare and contrast the phase behavior and phase transitions in several classical spin models - the square and triangular-lattice Ising models, the Blume-Capel model, a highly degenerate biquadratic-exchange spin-one Ising (BSI) model, and the 2D XY model, and examine critically what machine learning is teaching us. We find that quantified principal components from PCA not only allow exploration of different phases and symmetry-breaking, but can distinguish phase transition types and locate critical points. We show that the corresponding weight vectors have a clear physical interpretation, which ismore » particularly interesting in the frustrated models such as the triangular antiferromagnet, where they can point to incipient orders. Unlike the other well-studied models, the properties of the BSI model are less well known. Using both PCA and conventional Monte Carlo analysis, we demonstrate that the BSI model shows an absence of phase transition and macroscopic ground-state degeneracy. The failure to capture the 'charge' correlations (vorticity) in the BSI model (XY model) from raw spin configurations points to some of the limitations of PCA. Finally, we employ a nonlinear unsupervised machine learning procedure, the 'antoencoder method', and demonstrate that it too can be trained to capture phase transitions and critical points.« less
NASA Astrophysics Data System (ADS)
Hu, Wenjian; Singh, Rajiv R. P.; Scalettar, Richard T.
2017-06-01
We apply unsupervised machine learning techniques, mainly principal component analysis (PCA), to compare and contrast the phase behavior and phase transitions in several classical spin models—the square- and triangular-lattice Ising models, the Blume-Capel model, a highly degenerate biquadratic-exchange spin-1 Ising (BSI) model, and the two-dimensional X Y model—and we examine critically what machine learning is teaching us. We find that quantified principal components from PCA not only allow the exploration of different phases and symmetry-breaking, but they can distinguish phase-transition types and locate critical points. We show that the corresponding weight vectors have a clear physical interpretation, which is particularly interesting in the frustrated models such as the triangular antiferromagnet, where they can point to incipient orders. Unlike the other well-studied models, the properties of the BSI model are less well known. Using both PCA and conventional Monte Carlo analysis, we demonstrate that the BSI model shows an absence of phase transition and macroscopic ground-state degeneracy. The failure to capture the "charge" correlations (vorticity) in the BSI model (X Y model) from raw spin configurations points to some of the limitations of PCA. Finally, we employ a nonlinear unsupervised machine learning procedure, the "autoencoder method," and we demonstrate that it too can be trained to capture phase transitions and critical points.
T & I--Machine Shop. Kit No. 83. Instructor's Manual [and] Student Learning Activity Guide.
ERIC Educational Resources Information Center
White, Jim
An instructor's manual and student activity guide on the machine shop are provided in this set of prevocational education materials which focuses on the vocational area of trade and industry. (This set of materials is one of ninety-two prevocational education sets arranged around a cluster of seven vocational offerings: agriculture, home…
Hard-Rock Stability Analysis for Span Design in Entry-Type Excavations with Learning Classifiers
García-Gonzalo, Esperanza; Fernández-Muñiz, Zulima; García Nieto, Paulino José; Bernardo Sánchez, Antonio; Menéndez Fernández, Marta
2016-01-01
The mining industry relies heavily on empirical analysis for design and prediction. An empirical design method, called the critical span graph, was developed specifically for rock stability analysis in entry-type excavations, based on an extensive case-history database of cut and fill mining in Canada. This empirical span design chart plots the critical span against rock mass rating for the observed case histories and has been accepted by many mining operations for the initial span design of cut and fill stopes. Different types of analysis have been used to classify the observed cases into stable, potentially unstable and unstable groups. The main purpose of this paper is to present a new method for defining rock stability areas of the critical span graph, which applies machine learning classifiers (support vector machine and extreme learning machine). The results show a reasonable correlation with previous guidelines. These machine learning methods are good tools for developing empirical methods, since they make no assumptions about the regression function. With this software, it is easy to add new field observations to a previous database, improving prediction output with the addition of data that consider the local conditions for each mine. PMID:28773653
Hard-Rock Stability Analysis for Span Design in Entry-Type Excavations with Learning Classifiers.
García-Gonzalo, Esperanza; Fernández-Muñiz, Zulima; García Nieto, Paulino José; Bernardo Sánchez, Antonio; Menéndez Fernández, Marta
2016-06-29
The mining industry relies heavily on empirical analysis for design and prediction. An empirical design method, called the critical span graph, was developed specifically for rock stability analysis in entry-type excavations, based on an extensive case-history database of cut and fill mining in Canada. This empirical span design chart plots the critical span against rock mass rating for the observed case histories and has been accepted by many mining operations for the initial span design of cut and fill stopes. Different types of analysis have been used to classify the observed cases into stable, potentially unstable and unstable groups. The main purpose of this paper is to present a new method for defining rock stability areas of the critical span graph, which applies machine learning classifiers (support vector machine and extreme learning machine). The results show a reasonable correlation with previous guidelines. These machine learning methods are good tools for developing empirical methods, since they make no assumptions about the regression function. With this software, it is easy to add new field observations to a previous database, improving prediction output with the addition of data that consider the local conditions for each mine.
Interpreting support vector machine models for multivariate group wise analysis in neuroimaging
Gaonkar, Bilwaj; Shinohara, Russell T; Davatzikos, Christos
2015-01-01
Machine learning based classification algorithms like support vector machines (SVMs) have shown great promise for turning a high dimensional neuroimaging data into clinically useful decision criteria. However, tracing imaging based patterns that contribute significantly to classifier decisions remains an open problem. This is an issue of critical importance in imaging studies seeking to determine which anatomical or physiological imaging features contribute to the classifier’s decision, thereby allowing users to critically evaluate the findings of such machine learning methods and to understand disease mechanisms. The majority of published work addresses the question of statistical inference for support vector classification using permutation tests based on SVM weight vectors. Such permutation testing ignores the SVM margin, which is critical in SVM theory. In this work we emphasize the use of a statistic that explicitly accounts for the SVM margin and show that the null distributions associated with this statistic are asymptotically normal. Further, our experiments show that this statistic is a lot less conservative as compared to weight based permutation tests and yet specific enough to tease out multivariate patterns in the data. Thus, we can better understand the multivariate patterns that the SVM uses for neuroimaging based classification. PMID:26210913
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Kuan, Chihping; Zhang, YI
1991-01-01
A numerical method is developed for the minimization of deviations of real tooth surfaces from the theoretical ones. The deviations are caused by errors of manufacturing, errors of installment of machine-tool settings and distortion of surfaces by heat-treatment. The deviations are determined by coordinate measurements of gear tooth surfaces. The minimization of deviations is based on the proper correction of initially applied machine-tool settings. The contents of accomplished research project cover the following topics: (1) Descriptions of the principle of coordinate measurements of gear tooth surfaces; (2) Deviation of theoretical tooth surfaces (with examples of surfaces of hypoid gears and references for spiral bevel gears); (3) Determination of the reference point and the grid; (4) Determination of the deviations of real tooth surfaces at the points of the grid; and (5) Determination of required corrections of machine-tool settings for minimization of deviations. The procedure for minimization of deviations is based on numerical solution of an overdetermined system of n linear equations in m unknowns (m much less than n ), where n is the number of points of measurements and m is the number of parameters of applied machine-tool settings to be corrected. The developed approach is illustrated with numerical examples.
Interpreting linear support vector machine models with heat map molecule coloring
2011-01-01
Background Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides a strong performance, the interpretability of a machine learning model is a desired property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines showed to have a convincing performance on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity. Results We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach assists to determine the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor. Conclusions In combination with heat map coloring, linear support vector machine models can help to guide the modification of a compound in later stages of drug discovery. Particularly substructures identified as important by our method might be a starting point for optimization of a lead compound. The heat map coloring should be considered as complementary to structure based modeling approaches. As such, it helps to get a better understanding of the binding mode of an inhibitor. PMID:21439031
ChargeOut! : discounted cash flow compared with traditional machine-rate analysis
Ted Bilek
2008-01-01
ChargeOut!, a discounted cash-flow methodology in spreadsheet format for analyzing machine costs, is compared with traditional machine-rate methodologies. Four machine-rate models are compared and a common data set representative of logging skiddersâ costs is used to illustrate the differences between ChargeOut! and the machine-rate methods. The study found that the...
Articulated, Performance-Based Instruction Objectives Guide for Machine Shop Technology.
ERIC Educational Resources Information Center
Henderson, William Edward, Jr., Ed.
This articulation guide contains 21 units of instruction for two years of machine shop. The objectives of the program are to provide the student with the basic terminology and fundamental knowledge and skills in machining (year 1) and to teach him/her to set up and operate machine tools and make or repair metal parts, tools, and machines (year 2).…
29 CFR 570.62 - Occupations involved in the operation of bakery machines (Order 11).
Code of Federal Regulations, 2013 CFR
2013-07-01
..., or cleaning any horizontal or vertical dough mixer; batter mixer; bread dividing, rounding, or molding machine; dough brake; dough sheeter; combination bread slicing and wrapping machine; or cake cutting band saw. (2) The occupation of setting up or adjusting a cookie or cracker machine. (b...
29 CFR 570.62 - Occupations involved in the operation of bakery machines (Order 11).
Code of Federal Regulations, 2014 CFR
2014-07-01
..., or cleaning any horizontal or vertical dough mixer; batter mixer; bread dividing, rounding, or molding machine; dough brake; dough sheeter; combination bread slicing and wrapping machine; or cake cutting band saw. (2) The occupation of setting up or adjusting a cookie or cracker machine. (b...
29 CFR 570.62 - Occupations involved in the operation of bakery machines (Order 11).
Code of Federal Regulations, 2011 CFR
2011-07-01
..., or cleaning any horizontal or vertical dough mixer; batter mixer; bread dividing, rounding, or molding machine; dough brake; dough sheeter; combination bread slicing and wrapping machine; or cake cutting band saw. (2) The occupation of setting up or adjusting a cookie or cracker machine. (b...
29 CFR 570.62 - Occupations involved in the operation of bakery machines (Order 11).
Code of Federal Regulations, 2012 CFR
2012-07-01
..., or cleaning any horizontal or vertical dough mixer; batter mixer; bread dividing, rounding, or molding machine; dough brake; dough sheeter; combination bread slicing and wrapping machine; or cake cutting band saw. (2) The occupation of setting up or adjusting a cookie or cracker machine. (b...
Brown, Raymond J.
1977-01-01
The present invention relates to a tool setting device for use with numerically controlled machine tools, such as lathes and milling machines. A reference position of the machine tool relative to the workpiece along both the X and Y axes is utilized by the control circuit for driving the tool through its program. This reference position is determined for both axes by displacing a single linear variable displacement transducer (LVDT) with the machine tool through a T-shaped pivotal bar. The use of the T-shaped bar allows the cutting tool to be moved sequentially in the X or Y direction for indicating the actual position of the machine tool relative to the predetermined desired position in the numerical control circuit by using a single LVDT.
Ge, Tian; Nichols, Thomas E.; Ghosh, Debashis; Mormino, Elizabeth C.
2015-01-01
Measurements derived from neuroimaging data can serve as markers of disease and/or healthy development, are largely heritable, and have been increasingly utilized as (intermediate) phenotypes in genetic association studies. To date, imaging genetic studies have mostly focused on discovering isolated genetic effects, typically ignoring potential interactions with non-genetic variables such as disease risk factors, environmental exposures, and epigenetic markers. However, identifying significant interaction effects is critical for revealing the true relationship between genetic and phenotypic variables, and shedding light on disease mechanisms. In this paper, we present a general kernel machine based method for detecting effects of interaction between multidimensional variable sets. This method can model the joint and epistatic effect of a collection of single nucleotide polymorphisms (SNPs), accommodate multiple factors that potentially moderate genetic influences, and test for nonlinear interactions between sets of variables in a flexible framework. As a demonstration of application, we applied the method to data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) to detect the effects of the interactions between candidate Alzheimer's disease (AD) risk genes and a collection of cardiovascular disease (CVD) risk factors, on hippocampal volume measurements derived from structural brain magnetic resonance imaging (MRI) scans. Our method identified that two genes, CR1 and EPHA1, demonstrate significant interactions with CVD risk factors on hippocampal volume, suggesting that CR1 and EPHA1 may play a role in influencing AD-related neurodegeneration in the presence of CVD risks. PMID:25600633
Howitzer Ammunition System Procurement (HASP).
1991-07-01
machine tools , etc.) * Most critical part of base to reassemble. IPP * Industry to plan round-specific...beyond allowed tolerances. - Conducting tolerance studies and funding machining studies at sul’on "’actors. " Facility development was controlled by the...Manufacturing Balimoy Mfg. of Venice, Inc. Action Manufacturing Co. Lanson Industries Inc. Hercules Aerospace Company CIMA Machine & Tool Co., Inc. Talley Defense Systems Tracor Aerospace Inc. BMY E49030APPBMAC
Computer Aided Process Planning of Machined Metal Parts
1984-09-01
the manufac- turer to accentuate the positive to assist marketing . Machine usage costs and facility loadings are frequently critical. For example...Variant systems currently on the market include Multiplan (TM of OIR, Inc.), CY-Miplan (TM of Computervision), PICAPP (TM of PICAPP, Inc.) and CSD...Multiproduct, Multistage Manufacturing Systems, Journal of Engineering for Industry, ASME, August 1977. Hitomi, K. and I. Ham, Product Mix and Machine Loading
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vanchurin, Vitaly, E-mail: vvanchur@d.umn.edu
We initiate a formal study of logical inferences in context of the measure problem in cosmology or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape and CM machines take CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly,more » CS machines take CO's Turing number as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal that halts in finite time and immortal that runs forever. In context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.« less
Cosmic logic: a computational model
NASA Astrophysics Data System (ADS)
Vanchurin, Vitaly
2016-02-01
We initiate a formal study of logical inferences in context of the measure problem in cosmology or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape and CM machines take CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO's Turing number as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal that halts in finite time and immortal that runs forever. In context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.
Teleoperator Human Factors Study
NASA Technical Reports Server (NTRS)
1986-01-01
An investigation of the spectrum of space teleoperation activities likely in the 1985 to 1995 decade focused on the resolution of critical human engineering issues and characterization of the technology effect on performance of remote human operators. The study began with the identification and documentation of a set of representative reference teleoperator tasks. For each task, technology, development, and design options, issues, and alternatives that bear on human operator performance were defined and categorized. A literature survey identified existing studies of man/machine issues. For each teleoperations category, an assessment was made of the state of knowledge on a scale from adequate to void. The tests, experiments, and analyses necessary to provide the missing elements of knowledge were then defined. A limited set of tests were actually performed, including operator selection, baseline task definition, control mode study, lighting study, camera study, and preliminary time delay study.
An overview of rotating machine systems with high-temperature bulk superconductors
NASA Astrophysics Data System (ADS)
Zhou, Difan; Izumi, Mitsuru; Miki, Motohiro; Felder, Brice; Ida, Tetsuya; Kitano, Masahiro
2012-10-01
The paper contains a review of recent advancements in rotating machines with bulk high-temperature superconductors (HTS). The high critical current density of bulk HTS enables us to design rotating machines with a compact configuration in a practical scheme. The development of an axial-gap-type trapped flux synchronous rotating machine together with the systematic research works at the Tokyo University of Marine Science and Technology since 2001 are briefly introduced. Developments in bulk HTS rotating machines in other research groups are also summarized. The key issues of bulk HTS machines, including material progress of bulk HTS, in situ magnetization, and cooling together with AC loss at low-temperature operation are discussed.
Turning a blind eye: the mobilization of radiology services in resource-poor regions
2010-01-01
While primary care, obstetrical, and surgical services have started to expand in the world's poorest regions, there is only sparse literature on the essential support systems that are required to make these operations function. Diagnostic imaging is critical to effective rural healthcare delivery, yet it has been severely neglected by the academic, public, and private sectors. Currently, a large portion of the world's population lacks access to any form of diagnostic imaging. In this paper we argue that two primary imaging modalities--diagnostic ultrasound and X-Ray--are ideal for rural healthcare services and should be scaled-up in a rapid and standardized manner. Such machines, if designed for resource-poor settings, should a) be robust in harsh environmental conditions, b) function reliably in environments with unstable electricity, c) minimize radiation dangers to staff and patients, d) be operable by non-specialist providers, and e) produce high-quality images required for accurate diagnosis. Few manufacturers are producing ultrasound and X-Ray machines that meet the specifications needed for rural healthcare delivery in resource-poor regions. A coordinated effort is required to create demand sufficient for manufacturers to produce the desired machines and to ensure that the programs operating them are safe, effective, and financially feasible. PMID:20946643
Turning a blind eye: the mobilization of radiology services in resource-poor regions.
Maru, Duncan Smith-Rohrberg; Schwarz, Ryan; Jason, Andrews; Basu, Sanjay; Sharma, Aditya; Moore, Christopher
2010-10-14
While primary care, obstetrical, and surgical services have started to expand in the world's poorest regions, there is only sparse literature on the essential support systems that are required to make these operations function. Diagnostic imaging is critical to effective rural healthcare delivery, yet it has been severely neglected by the academic, public, and private sectors. Currently, a large portion of the world's population lacks access to any form of diagnostic imaging. In this paper we argue that two primary imaging modalities--diagnostic ultrasound and X-Ray--are ideal for rural healthcare services and should be scaled-up in a rapid and standardized manner. Such machines, if designed for resource-poor settings, should a) be robust in harsh environmental conditions, b) function reliably in environments with unstable electricity, c) minimize radiation dangers to staff and patients, d) be operable by non-specialist providers, and e) produce high-quality images required for accurate diagnosis. Few manufacturers are producing ultrasound and X-Ray machines that meet the specifications needed for rural healthcare delivery in resource-poor regions. A coordinated effort is required to create demand sufficient for manufacturers to produce the desired machines and to ensure that the programs operating them are safe, effective, and financially feasible.
NASA Astrophysics Data System (ADS)
Poley, Jack; Dines, Michael
2011-04-01
Wind turbines are frequently located in remote, hard-to-reach locations, making it difficult to apply traditional oil analysis sampling of the machine's critical gearset at timely intervals. Metal detection sensors are excellent candidates for sensors designed to monitor machine condition in vivo. Remotely sited components, such as wind turbines, therefore, can be comfortably monitored from a distance. Online sensor technology has come of age with products now capable of identifying onset of wear in time to avoid or mitigate failure. Online oil analysis is now viable, and can be integrated with onsite testing to vet sensor alarms, as well as traditional oil analysis, as furnished by offsite laboratories. Controlled laboratory research data were gathered from tests conducted on a typical wind turbine gearbox, wherein total ferrous particle measurement and metallic particle counting were employed and monitored. The results were then compared with a physical inspection for wear experienced by the gearset. The efficacy of results discussed herein strongly suggests the viability of metallic wear debris sensors in today's wind turbine gearsets, as correlation between sensor data and machine trauma were very good. By extension, similar components and settings would also seem amenable to wear particle sensor monitoring. To our knowledge no experiments such as described herein, have previously been conducted and published.
Classification of LIDAR Data for Generating a High-Precision Roadway Map
NASA Astrophysics Data System (ADS)
Jeong, J.; Lee, I.
2016-06-01
Generating of a highly precise map grows up with development of autonomous driving vehicles. The highly precise map includes a precision of centimetres level unlike an existing commercial map with the precision of meters level. It is important to understand road environments and make a decision for autonomous driving since a robust localization is one of the critical challenges for the autonomous driving car. The one of source data is from a Lidar because it provides highly dense point cloud data with three dimensional position, intensities and ranges from the sensor to target. In this paper, we focus on how to segment point cloud data from a Lidar on a vehicle and classify objects on the road for the highly precise map. In particular, we propose the combination with a feature descriptor and a classification algorithm in machine learning. Objects can be distinguish by geometrical features based on a surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm in machine learning are used. Final step is to evaluate accuracies of obtained results by comparing them to reference data The results show sufficient accuracy and it will be utilized to generate a highly precise road map.
Translations from Kommunist, Number 13, September 1978
1978-10-30
programmed machine tool here is merely a component of a more complex reprogrammable technological system. This includes the robot machine tools with...sufficient possibilities for changing technological operations and processes and automated technological lines. 52 The reprogrammable automated sets will...simulate the possibilities of such sets. A new technological level will be developed in industry related to reprogrammable automated sets, their design
Precision machining of optical surfaces with subaperture correction technologies MRF and IBF
NASA Astrophysics Data System (ADS)
Schmelzer, Olaf; Feldkamp, Roman
2015-10-01
Precision optical elements are used in a wide range of technical instrumentations. Many optical systems e.g. semiconductor inspection modules, laser heads for laser material processing or high end movie cameras, contain precision optics even aspherical or freeform surfaces. Critical parameters for such systems are wavefront error, image field curvature or scattered light. Following these demands the lens parameters are also critical concerning power and RMSi of the surface form error and micro roughness. How can we reach these requirements? The emphasis of this discussion is set on the application of subaperture correction technologies in the fabrication of high-end aspheres and free-forms. The presentation focuses on the technology chain necessary for the production of high-precision aspherical optical components and the characterization of the applied subaperture finishing tools MRF (magneto-rheological finishing) and IBF (ion beam figuring). These technologies open up the possibility of improving the performance of optical systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beaver, Justin M; Borges, Raymond Charles; Buckner, Mark A
Critical infrastructure Supervisory Control and Data Acquisition (SCADA) systems were designed to operate on closed, proprietary networks where a malicious insider posed the greatest threat potential. The centralization of control and the movement towards open systems and standards has improved the efficiency of industrial control, but has also exposed legacy SCADA systems to security threats that they were not designed to mitigate. This work explores the viability of machine learning methods in detecting the new threat scenarios of command and data injection. Similar to network intrusion detection systems in the cyber security domain, the command and control communications in amore » critical infrastructure setting are monitored, and vetted against examples of benign and malicious command traffic, in order to identify potential attack events. Multiple learning methods are evaluated using a dataset of Remote Terminal Unit communications, which included both normal operations and instances of command and data injection attack scenarios.« less
Methods for consistent forewarning of critical events across multiple data channels
Hively, Lee M.
2006-11-21
This invention teaches further method improvements to forewarn of critical events via phase-space dissimilarity analysis of data from biomedical equipment, mechanical devices, and other physical processes. One improvement involves conversion of time-serial data into equiprobable symbols. A second improvement is a method to maximize the channel-consistent total-true rate of forewarning from a plurality of data channels over multiple data sets from the same patient or process. This total-true rate requires resolution of the forewarning indications into true positives, true negatives, false positives and false negatives. A third improvement is the use of various objective functions, as derived from the phase-space dissimilarity measures, to give the best forewarning indication. A fourth improvement uses various search strategies over the phase-space analysis parameters to maximize said objective functions. A fifth improvement shows the usefulness of the method for various biomedical and machine applications.
New method for measuring the laser-induced damage threshold of optical thin film
NASA Astrophysics Data System (ADS)
Su, Jun-hong; Wang, Hong; Xi, Ying-xue
2012-10-01
The laser-induced damage threshold (LIDT) of thin film means that the thin film can withstand a maximum intensity of laser radiation. The film will be damaged when the irradiation under high laser intensity is greater than the value of LIDT. In this paper, an experimental platform with measurement operator interfaces and control procedures in the VB circumstance is built according to ISO11254-1. In order to obtain more accurate results than that with manual measurement, in the software system, a hardware device can be controlled by control widget on the operator interfaces. According to the sample characteristic, critical parameters of the LIDT measurement system such as spot diameter, damage threshold region, and critical damage pixel number are set up on the man-machine conversation interface, which could realize intelligent measurements of the LIDT. According to experimental data, the LIDT is obtained by fitting damage curve automatically.
Periodical capacity setting methods for make-to-order multi-machine production systems
Altendorfer, Klaus; Hübl, Alexander; Jodlbauer, Herbert
2014-01-01
The paper presents different periodical capacity setting methods for make-to-order, multi-machine production systems with stochastic customer required lead times and stochastic processing times to improve service level and tardiness. These methods are developed as decision support when capacity flexibility exists, such as, a certain range of possible working hours a week for example. The methods differ in the amount of information used whereby all are based on the cumulated capacity demand at each machine. In a simulation study the methods’ impact on service level and tardiness is compared to a constant provided capacity for a single and a multi-machine setting. It is shown that the tested capacity setting methods can lead to an increase in service level and a decrease in average tardiness in comparison to a constant provided capacity. The methods using information on processing time and customer required lead time distribution perform best. The results found in this paper can help practitioners to make efficient use of their flexible capacity. PMID:27226649
Evaluation of an Integrated Multi-Task Machine Learning System with Humans in the Loop
2007-01-01
machine learning components natural language processing, and optimization...was examined with a test explicitly developed to measure the impact of integrated machine learning when used by a human user in a real world setting...study revealed that integrated machine learning does produce a positive impact on overall performance. This paper also discusses how specific machine learning components contributed to human-system
University of Maryland walking robot: A design project for undergraduate students
NASA Technical Reports Server (NTRS)
Olsen, Bob; Bielec, Jim; Hartsig, Dave; Oliva, Mani; Grotheer, Phil; Hekmat, Morad; Russell, David; Tavakoli, Hossein; Young, Gary; Nave, Tom
1990-01-01
The design and construction required that the walking robot machine be capable of completing a number of tasks including walking in a straight line, turning to change direction, and maneuvering over an obstable such as a set of stairs. The machine consists of two sets of four telescoping legs that alternately support the entire structure. A gear-box and crank-arm assembly is connected to the leg sets to provide the power required for the translational motion of the machine. By retracting all eight legs, the robot comes to rest on a central Bigfoot support. Turning is accomplished by rotating the machine about this support. The machine can be controlled by using either a user operated remote tether or the on-board computer for the execution of control commands. Absolute encoders are attached to all motors (leg, main drive, and Bigfoot) to provide the control computer with information regarding the status of the motors (up-down motion, forward or reverse rotation). Long and short range infrared sensors provide the computer with feedback information regarding the machine's relative position to a series of stripes and reflectors. These infrared sensors simulate how the robot might sense and gain information about the environment of Mars.
Code of Federal Regulations, 2010 CFR
2010-01-01
... washing and drycleaning procedures can safely be used on a product: (1) Machine washing in hot water; (2) Machine drying at a high setting; (3) Ironing at a hot setting; (4) Bleaching with all commercially... National Archives and Records Administration (NARA). For information on the availability of this material...
Code of Federal Regulations, 2011 CFR
2011-01-01
... washing and drycleaning procedures can safely be used on a product: (1) Machine washing in hot water; (2) Machine drying at a high setting; (3) Ironing at a hot setting; (4) Bleaching with all commercially... National Archives and Records Administration (NARA). For information on the availability of this material...
Lawrence, Sally; Boyle, Maria; Craypo, Lisa; Samuels, Sarah
2009-06-01
Little has been done to ensure that the foods sold within health care facilities promote healthy lifestyles. Policies to improve school nutrition environments can serve as models for health care organizations. This study was designed to assess the healthfulness of foods sold in health care facility vending machines as well as how health care organizations are using policies to create healthy food environments. Food and beverage assessments were conducted in 19 California health care facilities that serve children in the Healthy Eating, Active Communities sites. Items sold in vending machines were inventoried at each facility and interviews conducted for information on vending policies. Analyses examined the types of products sold and the healthfulness of these products. Ninety-six vending machines were observed in 15 (79%) of the facilities. Hospitals averaged 9.3 vending machines per facility compared with 3 vending machines per health department and 1.4 per clinic. Sodas comprised the greatest percentage of all beverages offered for sale: 30% in hospital vending machines and 38% in clinic vending machines. Water (20%) was the most prevalent in health departments. Candy comprised the greatest percentage of all foods offered in vending machines: 31% in clinics, 24% in hospitals, and 20% in health department facilities. Across all facilities, 75% of beverages and 81% of foods sold in vending machines did not adhere to the California school nutrition standards (Senate Bill 12). Nine (47%) of the health care facilities had adopted, or were in the process of adopting, policies that set nutrition standards for vending machines. According to the California school nutrition standards, the majority of items found in the vending machines in participating health care facilities were unhealthy. Consumption of sweetened beverages and high-energy-density foods has been linked to increased prevalence of obesity. Some health care facilities are developing policies that set nutrition standards for vending machines. These policies could be effective in increasing access to healthy foods and beverages in institutional settings.
Progress in machine consciousness.
Gamez, David
2008-09-01
This paper is a review of the work that has been carried out on machine consciousness. A clear overview of this diverse field is achieved by breaking machine consciousness down into four different areas, which are used to understand its aims, discuss its relationship with other subjects and outline the work that has been carried out so far. The criticisms that have been made against machine consciousness are also covered, along with its potential benefits, and the work that has been done on analysing systems for signs of consciousness. Some of the social and ethical issues raised by machine consciousness are examined at the end of the paper.
NASA Technical Reports Server (NTRS)
Malone, T. B.; Micocci, A.
1975-01-01
The alternate methods of conducting a man-machine interface evaluation are classified as static and dynamic, and are evaluated. A dynamic evaluation tool is presented to provide for a determination of the effectiveness of the man-machine interface in terms of the sequence of operations (task and task sequences) and in terms of the physical characteristics of the interface. This dynamic checklist approach is recommended for shuttle and shuttle payload man-machine interface evaluations based on reduced preparation time, reduced data, and increased sensitivity of critical problems.
Splendidly blended: a machine learning set up for CDU control
NASA Astrophysics Data System (ADS)
Utzny, Clemens
2017-06-01
As the concepts of machine learning and artificial intelligence continue to grow in importance in the context of internet related applications it is still in its infancy when it comes to process control within the semiconductor industry. Especially the branch of mask manufacturing presents a challenge to the concepts of machine learning since the business process intrinsically induces pronounced product variability on the background of small plate numbers. In this paper we present the architectural set up of a machine learning algorithm which successfully deals with the demands and pitfalls of mask manufacturing. A detailed motivation of this basic set up followed by an analysis of its statistical properties is given. The machine learning set up for mask manufacturing involves two learning steps: an initial step which identifies and classifies the basic global CD patterns of a process. These results form the basis for the extraction of an optimized training set via balanced sampling. A second learning step uses this training set to obtain the local as well as global CD relationships induced by the manufacturing process. Using two production motivated examples we show how this approach is flexible and powerful enough to deal with the exacting demands of mask manufacturing. In one example we show how dedicated covariates can be used in conjunction with increased spatial resolution of the CD map model in order to deal with pathological CD effects at the mask boundary. The other example shows how the model set up enables strategies for dealing tool specific CD signature differences. In this case the balanced sampling enables a process control scheme which allows usage of the full tool park within the specified tight tolerance budget. Overall, this paper shows that the current rapid developments off the machine learning algorithms can be successfully used within the context of semiconductor manufacturing.
Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Li, Li-Ping; Huang, De-Shuang; Yan, Gui-Ying; Nie, Ru; Huang, Yu-An
2017-04-04
Identification of protein-protein interactions (PPIs) is of critical importance for deciphering the underlying mechanisms of almost all biological processes of cell and providing great insight into the study of human disease. Although much effort has been devoted to identifying PPIs from various organisms, existing high-throughput biological techniques are time-consuming, expensive, and have high false positive and negative results. Thus it is highly urgent to develop in silico methods to predict PPIs efficiently and accurately in this post genomic era. In this article, we report a novel computational model combining our newly developed discriminative vector machine classifier (DVM) and an improved Weber local descriptor (IWLD) for the prediction of PPIs. Two components, differential excitation and orientation, are exploited to build evolutionary features for each protein sequence. The main characteristics of the proposed method lies in introducing an effective feature descriptor IWLD which can capture highly discriminative evolutionary information from position-specific scoring matrixes (PSSM) of protein data, and employing the powerful and robust DVM classifier. When applying the proposed method to Yeast and H. pylori data sets, we obtained excellent prediction accuracies as high as 96.52% and 91.80%, respectively, which are significantly better than the previous methods. Extensive experiments were then performed for predicting cross-species PPIs and the predictive results were also pretty promising. To further validate the performance of the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier on Human data set. The experimental results obtained indicate that our method is highly effective for PPIs prediction and can be taken as a supplementary tool for future proteomics research.
Multiparticle Solutions in 2+1 Gravity and Time Machines
NASA Astrophysics Data System (ADS)
Steif, Alan R.
Multiparticle solutions for sources moving at the speed of light and corresponding to superpositions of single-particle plane-wave solutions are constructed in 2+1 gravity. It is shown that the two-particle spacetimes admit closed timelike curves provided the center-of-momentum energy exceeds a certain critical value. This occurs, however, at the cost of unphysical boundary conditions which are analogous to those affecting Gott’s time machine. As the energy exceeds the critical value, the closed timelike curves first occur at spatial infinity, then migrate inward as the energy is further increased. The total mass of the system also becomes imaginary for particle energies greater than the critical value.
Unbounded orbits of a swinging Atwood's machine
NASA Astrophysics Data System (ADS)
Tufillaro, N.; Nunes, A.; Casasayas, J.
1988-12-01
The motion of a swinging Atwood's machine is examined when the orbits are unbounded. Expressions for the asymptotic behavior of the orbits are derived that exhibit either an infinite number of oscillations or no oscillations, depending only on a critical value of the mass ratio.
Positive-unlabeled learning for disease gene identification
Yang, Peng; Li, Xiao-Li; Mei, Jian-Ping; Kwoh, Chee-Keong; Ng, See-Kiong
2012-01-01
Background: Identifying disease genes from human genome is an important but challenging task in biomedical research. Machine learning methods can be applied to discover new disease genes based on the known ones. Existing machine learning methods typically use the known disease genes as the positive training set P and the unknown genes as the negative training set N (non-disease gene set does not exist) to build classifiers to identify new disease genes from the unknown genes. However, such kind of classifiers is actually built from a noisy negative set N as there can be unknown disease genes in N itself. As a result, the classifiers do not perform as well as they could be. Result: Instead of treating the unknown genes as negative examples in N, we treat them as an unlabeled set U. We design a novel positive-unlabeled (PU) learning algorithm PUDI (PU learning for disease gene identification) to build a classifier using P and U. We first partition U into four sets, namely, reliable negative set RN, likely positive set LP, likely negative set LN and weak negative set WN. The weighted support vector machines are then used to build a multi-level classifier based on the four training sets and positive training set P to identify disease genes. Our experimental results demonstrate that our proposed PUDI algorithm outperformed the existing methods significantly. Conclusion: The proposed PUDI algorithm is able to identify disease genes more accurately by treating the unknown data more appropriately as unlabeled set U instead of negative set N. Given that many machine learning problems in biomedical research do involve positive and unlabeled data instead of negative data, it is possible that the machine learning methods for these problems can be further improved by adopting PU learning methods, as we have done here for disease gene identification. Availability and implementation: The executable program and data are available at http://www1.i2r.a-star.edu.sg/∼xlli/PUDI/PUDI.html. Contact: xlli@i2r.a-star.edu.sg or yang0293@e.ntu.edu.sg Supplementary information: Supplementary Data are available at Bioinformatics online. PMID:22923290
The Single Needle Lockstitch Machine. [Setting Zippers.] Module 8.
ERIC Educational Resources Information Center
South Carolina State Dept. of Education, Columbia. Office of Vocational Education.
This module on setting zippers, one in a series on the single needle lockstitch sewing machine for student self-study, contains five sections. Each section includes the following parts: an introduction, directions, an objective, learning activities, student information, student self-check, check-out activities, and an instructor's final checklist.…
Lise, Stefano; Archambeau, Cedric; Pontil, Massimiliano; Jones, David T
2009-10-30
Alanine scanning mutagenesis is a powerful experimental methodology for investigating the structural and energetic characteristics of protein complexes. Individual amino-acids are systematically mutated to alanine and changes in free energy of binding (DeltaDeltaG) measured. Several experiments have shown that protein-protein interactions are critically dependent on just a few residues ("hot spots") at the interface. Hot spots make a dominant contribution to the free energy of binding and if mutated they can disrupt the interaction. As mutagenesis studies require significant experimental efforts, there is a need for accurate and reliable computational methods. Such methods would also add to our understanding of the determinants of affinity and specificity in protein-protein recognition. We present a novel computational strategy to identify hot spot residues, given the structure of a complex. We consider the basic energetic terms that contribute to hot spot interactions, i.e. van der Waals potentials, solvation energy, hydrogen bonds and Coulomb electrostatics. We treat them as input features and use machine learning algorithms such as Support Vector Machines and Gaussian Processes to optimally combine and integrate them, based on a set of training examples of alanine mutations. We show that our approach is effective in predicting hot spots and it compares favourably to other available methods. In particular we find the best performances using Transductive Support Vector Machines, a semi-supervised learning scheme. When hot spots are defined as those residues for which DeltaDeltaG >or= 2 kcal/mol, our method achieves a precision and a recall respectively of 56% and 65%. We have developed an hybrid scheme in which energy terms are used as input features of machine learning models. This strategy combines the strengths of machine learning and energy-based methods. Although so far these two types of approaches have mainly been applied separately to biomolecular problems, the results of our investigation indicate that there are substantial benefits to be gained by their integration.
Safety of stationary grinding machines - impact resistance of work zone enclosures.
Mewes, Detlef; Adler, Christian
2017-09-01
Guards on machine tools are intended to protect persons from being injured by parts ejected with high kinetic energy from the work zone of the machine. Stationary grinding machines are a typical example. Generally such machines are provided with abrasive product guards closely enveloping the grinding wheel. However, many machining tasks do not allow the use of abrasive product guards. In such cases, the work zone enclosure has to be dimensioned so that, in case of failure, grinding wheel fragments remain inside the machine's working zone. To obtain data for the dimensioning of work zone enclosures on stationary grinding machines, which must be operated without an abrasive product guard, burst tests were conducted with vitrified grinding wheels. The studies show that, contrary to widely held opinion, narrower grinding wheels can be more critical concerning the impact resistance than wider wheels although their fragment energy is smaller.
Engelhardt, Alexander; Kanawade, Rajesh; Knipfer, Christian; Schmid, Matthias; Stelzle, Florian; Adler, Werner
2014-07-16
In the field of oral and maxillofacial surgery, newly developed laser scalpels have multiple advantages over traditional metal scalpels. However, they lack haptic feedback. This is dangerous near e.g. nerve tissue, which has to be preserved during surgery. One solution to this problem is to train an algorithm that analyzes the reflected light spectra during surgery and can classify these spectra into different tissue types, in order to ultimately send a warning or temporarily switch off the laser when critical tissue is about to be ablated. Various machine learning algorithms are available for this task, but a detailed analysis is needed to assess the most appropriate algorithm. In this study, a small data set is used to simulate many larger data sets according to a multivariate Gaussian distribution. Various machine learning algorithms are then trained and evaluated on these data sets. The algorithms' performance is subsequently evaluated and compared by averaged confusion matrices and ultimately by boxplots of misclassification rates. The results are validated on the smaller, experimental data set. Most classifiers have a median misclassification rate below 0.25 in the simulated data. The most notable performance was observed for the Penalized Discriminant Analysis, with a misclassifiaction rate of 0.00 in the simulated data, and an average misclassification rate of 0.02 in a 10-fold cross validation on the original data. The results suggest a Penalized Discriminant Analysis is the most promising approach, most probably because it considers the functional, correlated nature of the reflectance spectra.The results of this study improve the accuracy of real-time tissue discrimination and are an essential step towards improving the safety of oral laser surgery.
Integrated human-machine intelligence in space systems.
Boy, G A
1992-07-01
This paper presents an artificial intelligence approach to integrated human-machine intelligence in space systems. It discusses the motivations for Intelligent Assistant Systems in both nominal and abnormal situations. The problem of constructing procedures is shown to be a very critical issue. In particular, keeping procedural experience in both design and operation is critical. We suggest what artificial intelligence can offer in this direction. Some crucial problems induced by this approach are discussed in detail. Finally, we analyze the various roles that would be shared by both astronauts, ground operators, and the intelligent assistant system.
Machine learning of molecular properties: Locality and active learning
NASA Astrophysics Data System (ADS)
Gubaev, Konstantin; Podryabinkin, Evgeny V.; Shapeev, Alexander V.
2018-06-01
In recent years, the machine learning techniques have shown great potent1ial in various problems from a multitude of disciplines, including materials design and drug discovery. The high computational speed on the one hand and the accuracy comparable to that of density functional theory on another hand make machine learning algorithms efficient for high-throughput screening through chemical and configurational space. However, the machine learning algorithms available in the literature require large training datasets to reach the chemical accuracy and also show large errors for the so-called outliers—the out-of-sample molecules, not well-represented in the training set. In the present paper, we propose a new machine learning algorithm for predicting molecular properties that addresses these two issues: it is based on a local model of interatomic interactions providing high accuracy when trained on relatively small training sets and an active learning algorithm of optimally choosing the training set that significantly reduces the errors for the outliers. We compare our model to the other state-of-the-art algorithms from the literature on the widely used benchmark tests.
Novel nonlinear knowledge-based mean force potentials based on machine learning.
Dong, Qiwen; Zhou, Shuigeng
2011-01-01
The prediction of 3D structures of proteins from amino acid sequences is one of the most challenging problems in molecular biology. An essential task for solving this problem with coarse-grained models is to deduce effective interaction potentials. The development and evaluation of new energy functions is critical to accurately modeling the properties of biological macromolecules. Knowledge-based mean force potentials are derived from statistical analysis of proteins of known structures. Current knowledge-based potentials are almost in the form of weighted linear sum of interaction pairs. In this study, a class of novel nonlinear knowledge-based mean force potentials is presented. The potential parameters are obtained by nonlinear classifiers, instead of relative frequencies of interaction pairs against a reference state or linear classifiers. The support vector machine is used to derive the potential parameters on data sets that contain both native structures and decoy structures. Five knowledge-based mean force Boltzmann-based or linear potentials are introduced and their corresponding nonlinear potentials are implemented. They are the DIH potential (single-body residue-level Boltzmann-based potential), the DFIRE-SCM potential (two-body residue-level Boltzmann-based potential), the FS potential (two-body atom-level Boltzmann-based potential), the HR potential (two-body residue-level linear potential), and the T32S3 potential (two-body atom-level linear potential). Experiments are performed on well-established decoy sets, including the LKF data set, the CASP7 data set, and the Decoys “R”Us data set. The evaluation metrics include the energy Z score and the ability of each potential to discriminate native structures from a set of decoy structures. Experimental results show that all nonlinear potentials significantly outperform the corresponding Boltzmann-based or linear potentials, and the proposed discriminative framework is effective in developing knowledge-based mean force potentials. The nonlinear potentials can be widely used for ab initio protein structure prediction, model quality assessment, protein docking, and other challenging problems in computational biology.
Owen, Whitney H.
1980-01-01
A polyphase rotary induction machine for use as a motor or generator utilizing a single rotor assembly having two series connected sets of rotor windings, a first stator winding disposed around the first rotor winding and means for controlling the current induced in one set of the rotor windings compared to the current induced in the other set of the rotor windings. The rotor windings may be wound rotor windings or squirrel cage windings.
NASA Astrophysics Data System (ADS)
Debra, Daniel B.; Hesselink, Lambertus; Binford, Thomas
1990-05-01
There are a number of fields that require or can use to advantage very high precision in machining. For example, further development of high energy lasers and x ray astronomy depend critically on the manufacture of light weight reflecting metal optical components. To fabricate these optical components with machine tools they will be made of metal with mirror quality surface finish. By mirror quality surface finish, it is meant that the dimensions tolerances on the order of 0.02 microns and surface roughness of 0.07. These accuracy targets fall in the category of ultra precision machining. They cannot be achieved by a simple extension of conventional machining processes and techniques. They require single crystal diamond tools, special attention to vibration isolation, special isolation of machine metrology, and on line correction of imperfection in the motion of the machine carriages on their way.
The Efficacy of Machine Learning Programs for Navy Manpower Analysis
1993-03-01
This thesis investigated the efficacy of two machine learning programs for Navy manpower analysis. Two machine learning programs, AIM and IXL, were...to generate models from the two commercial machine learning programs. Using a held out sub-set of the data the capabilities of the three models were...partial effects. The author recommended further investigation of AIM’s capabilities, and testing in an operational environment.... Machine learning , AIM, IXL.
Boxwala, Aziz A; Kim, Jihoon; Grillo, Janice M; Ohno-Machado, Lucila
2011-01-01
To determine whether statistical and machine-learning methods, when applied to electronic health record (EHR) access data, could help identify suspicious (ie, potentially inappropriate) access to EHRs. From EHR access logs and other organizational data collected over a 2-month period, the authors extracted 26 features likely to be useful in detecting suspicious accesses. Selected events were marked as either suspicious or appropriate by privacy officers, and served as the gold standard set for model evaluation. The authors trained logistic regression (LR) and support vector machine (SVM) models on 10-fold cross-validation sets of 1291 labeled events. The authors evaluated the sensitivity of final models on an external set of 58 events that were identified as truly inappropriate and investigated independently from this study using standard operating procedures. The area under the receiver operating characteristic curve of the models on the whole data set of 1291 events was 0.91 for LR, and 0.95 for SVM. The sensitivity of the baseline model on this set was 0.8. When the final models were evaluated on the set of 58 investigated events, all of which were determined as truly inappropriate, the sensitivity was 0 for the baseline method, 0.76 for LR, and 0.79 for SVM. The LR and SVM models may not generalize because of interinstitutional differences in organizational structures, applications, and workflows. Nevertheless, our approach for constructing the models using statistical and machine-learning techniques can be generalized. An important limitation is the relatively small sample used for the training set due to the effort required for its construction. The results suggest that statistical and machine-learning methods can play an important role in helping privacy officers detect suspicious accesses to EHRs.
Kim, Jihoon; Grillo, Janice M; Ohno-Machado, Lucila
2011-01-01
Objective To determine whether statistical and machine-learning methods, when applied to electronic health record (EHR) access data, could help identify suspicious (ie, potentially inappropriate) access to EHRs. Methods From EHR access logs and other organizational data collected over a 2-month period, the authors extracted 26 features likely to be useful in detecting suspicious accesses. Selected events were marked as either suspicious or appropriate by privacy officers, and served as the gold standard set for model evaluation. The authors trained logistic regression (LR) and support vector machine (SVM) models on 10-fold cross-validation sets of 1291 labeled events. The authors evaluated the sensitivity of final models on an external set of 58 events that were identified as truly inappropriate and investigated independently from this study using standard operating procedures. Results The area under the receiver operating characteristic curve of the models on the whole data set of 1291 events was 0.91 for LR, and 0.95 for SVM. The sensitivity of the baseline model on this set was 0.8. When the final models were evaluated on the set of 58 investigated events, all of which were determined as truly inappropriate, the sensitivity was 0 for the baseline method, 0.76 for LR, and 0.79 for SVM. Limitations The LR and SVM models may not generalize because of interinstitutional differences in organizational structures, applications, and workflows. Nevertheless, our approach for constructing the models using statistical and machine-learning techniques can be generalized. An important limitation is the relatively small sample used for the training set due to the effort required for its construction. Conclusion The results suggest that statistical and machine-learning methods can play an important role in helping privacy officers detect suspicious accesses to EHRs. PMID:21672912
Code of Federal Regulations, 2010 CFR
2010-07-01
... vending facilities, including vending machines, on property controlled by the Department of the Treasury... States. Treasury bureaus shall ensure that the collection and distribution of vending machine income from vending machines on Treasury-controlled property shall be in compliance with the regulations set forth in...
Wade, Matthew; Isom, Ryan; Georgescu, Dan; Olson, Randall J
2007-06-01
To determine the efficacy of the Cruise Control surge-limiting device (Staar Surgical) with phacoemulsification machines known to have high levels of surge. John A. Moran Eye Center Clinical Laboratories. In an in vitro study, postocclusion anterior chamber depth changes were measured in fresh phakic human eye-bank eyes using the Alcon Legacy and Bausch & Lomb Millennium venturi machines in conjunction with the Staar Cruise Control device. Both machines were tested with 19-gauge non-Aspiration Bypass System tips at high-surge settings (500 mm Hg vacuum pressure, 75 cm bottle height, 40 mL/min flow rate for the Legacy) and low-surge settings (400 mm Hg vacuum pressure, 125 cm bottle height, 40 mL/min flow rate for the Legacy). Adjusted parameters of flow, vacuum, and irrigation were used based on previous studies to create identical conditions for each device tested. The effect of the Cruise Control device on aspiration rates was also tested with both machines at the low-surge settings. At the high setting with the addition of Cruise Control, surge decreased significantly with the Legacy but was too large to measure with the Millennium venturi. At the low setting with the addition of Cruise Control, surge decreased significantly with both machines. Surge with the Millennium decreased from more than 1.0 mm to a mean of 0.21 mm +/- 0.02 (SD) (P<.0001). Surge with the Legacy decreased from a mean of 0.09 +/- 0.02 mm to 0.05 +/- 0 mm, a 42.9% decrease (P<.0001). The Millennium had the highest surge and aspiration rate before Cruise Control and the greatest percentage decrease in the surge and aspiration rates as a result of the addition of Cruise Control. In the Legacy machine, the Cruise Control device had a statistically and clinically significant effect. Cruise Control had a large effect on fluidics as well as surge amplitude with the Millennium machine. The greater the flow or greater the initial surge, the greater the impact of the Cruise Control device.
Automatic Earthquake Detection by Active Learning
NASA Astrophysics Data System (ADS)
Bergen, K.; Beroza, G. C.
2017-12-01
In recent years, advances in machine learning have transformed fields such as image recognition, natural language processing and recommender systems. Many of these performance gains have relied on the availability of large, labeled data sets to train high-accuracy models; labeled data sets are those for which each sample includes a target class label, such as waveforms tagged as either earthquakes or noise. Earthquake seismologists are increasingly leveraging machine learning and data mining techniques to detect and analyze weak earthquake signals in large seismic data sets. One of the challenges in applying machine learning to seismic data sets is the limited labeled data problem; learning algorithms need to be given examples of earthquake waveforms, but the number of known events, taken from earthquake catalogs, may be insufficient to build an accurate detector. Furthermore, earthquake catalogs are known to be incomplete, resulting in training data that may be biased towards larger events and contain inaccurate labels. This challenge is compounded by the class imbalance problem; the events of interest, earthquakes, are infrequent relative to noise in continuous data sets, and many learning algorithms perform poorly on rare classes. In this work, we investigate the use of active learning for automatic earthquake detection. Active learning is a type of semi-supervised machine learning that uses a human-in-the-loop approach to strategically supplement a small initial training set. The learning algorithm incorporates domain expertise through interaction between a human expert and the algorithm, with the algorithm actively posing queries to the user to improve detection performance. We demonstrate the potential of active machine learning to improve earthquake detection performance with limited available training data.
Reverse engineering of machine-tool settings with modified roll for spiral bevel pinions
NASA Astrophysics Data System (ADS)
Liu, Guanglei; Chang, Kai; Liu, Zeliang
2013-05-01
Although a great deal of research has been dedicated to the synthesis of spiral bevel gears, little related to reverse engineering can be found. An approach is proposed to reverse the machine-tool settings of the pinion of a spiral bevel gear drive on the basis of the blank and tooth surface data obtained by a coordinate measuring machine(CMM). Real tooth contact analysis(RTCA) is performed to preliminary ascertain the contact pattern, the motion curve, as well as the position of the mean contact point. And then the tangent to the contact path and the motion curve are interpolated in the sense of the least square method to extract the initial values of the bias angle and the higher order coefficients(HOC) in modified roll motion. A trial tooth surface is generated by machine-tool settings derived from the local synthesis relating to the initial meshing performances and modified roll motion. An optimization objective is formed which equals the tooth surface deviation between the real tooth surface and the trial tooth surface. The design variables are the parameters describing the meshing performances at the mean contact point in addition to the HOC. When the objective is optimized within an arbitrarily given convergence tolerance, the machine-tool settings together with the HOC are obtained. The proposed approach is verified by a spiral bevel pinion used in the accessory gear box of an aviation engine. The trial tooth surfaces approach to the real tooth surface on the whole in the example. The results show that the convergent tooth surface deviation for the concave side on the average is less than 0.5 μm, and is less than 1.3 μm for the convex side. The biggest tooth surface deviation is 6.7 μm which is located at the corner of the grid on the convex side. Those nodes with relative bigger tooth surface deviations are all located at the boundary of the grid. An approach is proposed to figure out the machine-tool settings of a spiral bevel pinion by way of reverse engineering without having known the theoretical tooth surfaces and the corresponding machine-tool settings.
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
Quantum Machine Learning over Infinite Dimensions
Lau, Hoi-Kwan; Pooser, Raphael; Siopsis, George; ...
2017-02-21
Machine learning is a fascinating and exciting eld within computer science. Recently, this ex- citement has been transferred to the quantum information realm. Currently, all proposals for the quantum version of machine learning utilize the nite-dimensional substrate of discrete variables. Here we generalize quantum machine learning to the more complex, but still remarkably practi- cal, in nite-dimensional systems. We present the critical subroutines of quantum machine learning algorithms for an all-photonic continuous-variable quantum computer that achieve an exponential speedup compared to their equivalent classical counterparts. Finally, we also map out an experi- mental implementation which can be used as amore » blueprint for future photonic demonstrations.« less
Quantum Machine Learning over Infinite Dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lau, Hoi-Kwan; Pooser, Raphael; Siopsis, George
Machine learning is a fascinating and exciting eld within computer science. Recently, this ex- citement has been transferred to the quantum information realm. Currently, all proposals for the quantum version of machine learning utilize the nite-dimensional substrate of discrete variables. Here we generalize quantum machine learning to the more complex, but still remarkably practi- cal, in nite-dimensional systems. We present the critical subroutines of quantum machine learning algorithms for an all-photonic continuous-variable quantum computer that achieve an exponential speedup compared to their equivalent classical counterparts. Finally, we also map out an experi- mental implementation which can be used as amore » blueprint for future photonic demonstrations.« less
Chatter active control in a lathe machine using magnetostrictive actuator
NASA Astrophysics Data System (ADS)
Nosouhi, R.; Behbahani, S.
2011-01-01
This paper analyzes the chatter phenomena in lathe machines. Chatter is one of the main causes of inaccuracy, reduction of life cycle of the machine and tool wear in machine tools. This phenomenon limits the depth of cut as a function of the cutting speed, which consequently reduces the material removal rate and machining efficiency. Chatter control is therefore important since it increases the stability region in machining and increases the critical depth of cut in machining case. To control the chatter in lathe machines, a magnetostrictive actuator is used. The materials with magnetostriction properties are kind of smart materials of which their length changes as a result of applying an exterior magnetic field, which make them suitable for control applications. It is assumed that the actuator applies the proper force exactly at the point where the machining force is applied on the tool. In this paper the chatter stability lobes is excelled as a result of applying a PID controller on the magnetostrictive actuator equipped-tool in turning.
Learning Simple Machines through Cross-Age Collaborations
ERIC Educational Resources Information Center
Lancor, Rachael; Schiebel, Amy
2008-01-01
In this project, introductory college physics students (noneducation majors) were asked to teach simple machines to a class of second graders. This nontraditional activity proved to be a successful way to encourage college students to think critically about physics and how it applied to their everyday lives. The noneducation majors benefited by…
Chunk Alignment for Corpus-Based Machine Translation
ERIC Educational Resources Information Center
Kim, Jae Dong
2011-01-01
Since sub-sentential alignment is critically important to the translation quality of an Example-Based Machine Translation (EBMT) system, which operates by finding and combining phrase-level matches against the training examples, we developed a new alignment algorithm for the purpose of improving the EBMT system's performance. This new…
Morrow, S A; Bates, P E
1987-01-01
This study examined the effectiveness of three sets of school-based instructional materials and community training on acquisition and generalization of a community laundry skill by nine students with severe handicaps. School-based instruction involved artificial materials (pictures), simulated materials (cardboard replica of a community washing machine), and natural materials (modified home model washing machine). Generalization assessments were conducted at two different community laundromats, on two machines represented fully by the school-based instructional materials and two machines not represented fully by these materials. After three phases of school-based instruction, the students were provided ten community training trials in one laundromat setting and a final assessment was conducted in both the trained and untrained community settings. A multiple probe design across students was used to evaluate the effectiveness of the three types of school instruction and community training. After systematic training, most of the students increased their laundry performance with all three sets of school-based materials; however, generalization of these acquired skills was limited in the two community settings. Direct training in one of the community settings resulted in more efficient acquisition of the laundry skills and enhanced generalization to the untrained laundromat setting for most of the students. Results of this study are discussed in regard to the issue of school versus community-based instruction and recommendations are made for future research in this area.
Efstathiou, Jason A; Heunis, Magda; Karumekayi, Talkmore; Makufa, Remigio; Bvochora-Nsingo, Memory; Gierga, David P; Suneja, Gita; Grover, Surbhi; Kasese, Joseph; Mmalane, Mompati; Moffat, Howard; von Paleske, Alexander; Makhema, Joseph; Dryden-Peterson, Scott
2016-01-01
There is a global cancer crisis, and it is disproportionately affecting resource-constrained settings, especially in low- and middle-income countries (LMICs). Radiotherapy is a critical and cost-effective component of a comprehensive cancer control plan that offers the potential for cure, control, and palliation of disease in greater than 50% of patients with cancer. Globally, LMICs do not have adequate access to quality radiation therapy and this gap is particularly pronounced in sub-Saharan Africa. Although there are numerous challenges in implementing a radiation therapy program in a low-resource setting, providing more equitable global access to radiotherapy is a responsibility and investment worth prioritizing. We outline a systems approach and a series of key questions to direct strategy toward establishing quality radiation services in LMICs, and highlight the story of private-public investment in Botswana from the late 1990s to the present. After assessing the need and defining the value of radiation, we explore core investments required, barriers that need to be overcome, and assets that can be leveraged to establish a radiation program. Considerations addressed include infrastructure; machine choice; quality assurance and patient safety; acquisition, development, and retention of human capital; governmental engagement; public-private partnerships; international collaborations; and the need to critically evaluate the program to foster further growth and sustainability. © 2015 by American Society of Clinical Oncology.
Machine learning and data science in soft materials engineering
NASA Astrophysics Data System (ADS)
Ferguson, Andrew L.
2018-01-01
In many branches of materials science it is now routine to generate data sets of such large size and dimensionality that conventional methods of analysis fail. Paradigms and tools from data science and machine learning can provide scalable approaches to identify and extract trends and patterns within voluminous data sets, perform guided traversals of high-dimensional phase spaces, and furnish data-driven strategies for inverse materials design. This topical review provides an accessible introduction to machine learning tools in the context of soft and biological materials by ‘de-jargonizing’ data science terminology, presenting a taxonomy of machine learning techniques, and surveying the mathematical underpinnings and software implementations of popular tools, including principal component analysis, independent component analysis, diffusion maps, support vector machines, and relative entropy. We present illustrative examples of machine learning applications in soft matter, including inverse design of self-assembling materials, nonlinear learning of protein folding landscapes, high-throughput antimicrobial peptide design, and data-driven materials design engines. We close with an outlook on the challenges and opportunities for the field.
Machine learning and data science in soft materials engineering.
Ferguson, Andrew L
2018-01-31
In many branches of materials science it is now routine to generate data sets of such large size and dimensionality that conventional methods of analysis fail. Paradigms and tools from data science and machine learning can provide scalable approaches to identify and extract trends and patterns within voluminous data sets, perform guided traversals of high-dimensional phase spaces, and furnish data-driven strategies for inverse materials design. This topical review provides an accessible introduction to machine learning tools in the context of soft and biological materials by 'de-jargonizing' data science terminology, presenting a taxonomy of machine learning techniques, and surveying the mathematical underpinnings and software implementations of popular tools, including principal component analysis, independent component analysis, diffusion maps, support vector machines, and relative entropy. We present illustrative examples of machine learning applications in soft matter, including inverse design of self-assembling materials, nonlinear learning of protein folding landscapes, high-throughput antimicrobial peptide design, and data-driven materials design engines. We close with an outlook on the challenges and opportunities for the field.
Ge, Tian; Nichols, Thomas E; Ghosh, Debashis; Mormino, Elizabeth C; Smoller, Jordan W; Sabuncu, Mert R
2015-04-01
Measurements derived from neuroimaging data can serve as markers of disease and/or healthy development, are largely heritable, and have been increasingly utilized as (intermediate) phenotypes in genetic association studies. To date, imaging genetic studies have mostly focused on discovering isolated genetic effects, typically ignoring potential interactions with non-genetic variables such as disease risk factors, environmental exposures, and epigenetic markers. However, identifying significant interaction effects is critical for revealing the true relationship between genetic and phenotypic variables, and shedding light on disease mechanisms. In this paper, we present a general kernel machine based method for detecting effects of the interaction between multidimensional variable sets. This method can model the joint and epistatic effect of a collection of single nucleotide polymorphisms (SNPs), accommodate multiple factors that potentially moderate genetic influences, and test for nonlinear interactions between sets of variables in a flexible framework. As a demonstration of application, we applied the method to the data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) to detect the effects of the interactions between candidate Alzheimer's disease (AD) risk genes and a collection of cardiovascular disease (CVD) risk factors, on hippocampal volume measurements derived from structural brain magnetic resonance imaging (MRI) scans. Our method identified that two genes, CR1 and EPHA1, demonstrate significant interactions with CVD risk factors on hippocampal volume, suggesting that CR1 and EPHA1 may play a role in influencing AD-related neurodegeneration in the presence of CVD risks. Copyright © 2015 Elsevier Inc. All rights reserved.
Management Perspectives Pertaining to Root Cause Analyses of Nunn-McCurdy Breaches. Volume 4
2013-01-01
the FY2012 NDAA, the Army revised its initial budget request, allocating money from the purchase of new M2 .50 caliber machine guns to the...Quick-change machine gun barrel Explosive reactive armor Linear demolition charge system Full width, surface mine ploughs On-board vehicle power...Quantity Oversight of ACAT II Programs 45 for a restart to the program citing a “critical shortage of serviceable machine guns for our Soldiers who
National machine guarding program: Part 2. Safety management in small metal fabrication enterprises
Yamin, Samuel C.; Brosseau, Lisa M.; Xi, Min; Gordon, Robert; Most, Ivan G.; Stanley, Rodney
2015-01-01
Background Small manufacturing businesses often lack important safety programs. Many reasons have been set forth on why this has remained a persistent problem. Methods The National Machine Guarding Program (NMGP) was a nationwide intervention conducted in partnership with two workers' compensation insurers. Insurance safety consultants collected baseline data in 221 business using a 33‐question safety management audit. Audits were completed during an interview with the business owner or manager. Results Most measures of safety management improved with an increasing number of employees. This trend was particularly strong for lockout/tagout. However, size was only significant for businesses without a safety committee. Establishments with a safety committee scored higher (55% vs. 36%) on the safety management audit compared with those lacking a committee (P < 0.0001). Conclusions Critical safety management programs were frequently absent. A safety committee appears to be a more important factor than business size in accounting for differences in outcome measures. Am. J. Ind. Med. 58:1184–1193, 2015. © 2015 The Authors. American Journal of Industrial Medicine Published by Wiley Periodicals, Inc. PMID:26345591
NASA Astrophysics Data System (ADS)
Nguyen, Minh Q.; Allebach, Jan P.
2015-01-01
In our previous work1 , we presented a block-based technique to analyze printed page uniformity both visually and metrically. The features learned from the models were then employed in a Support Vector Machine (SVM) framework to classify the pages into one of the two categories of acceptable and unacceptable quality. In this paper, we introduce a set of tools for machine learning in the assessment of printed page uniformity. This work is primarily targeted to the printing industry, specifically the ubiquitous laser, electrophotographic printer. We use features that are well-correlated with the rankings of expert observers to develop a novel machine learning framework that allows one to achieve the minimum "false alarm" rate, subject to a chosen "miss" rate. Surprisingly, most of the research that has been conducted on machine learning does not consider this framework. During the process of developing a new product, test engineers will print hundreds of test pages, which can be scanned and then analyzed by an autonomous algorithm. Among these pages, most may be of acceptable quality. The objective is to find the ones that are not. These will provide critically important information to systems designers, regarding issues that need to be addressed in improving the printer design. A "miss" is defined to be a page that is not of acceptable quality to an expert observer that the prediction algorithm declares to be a "pass". Misses are a serious problem, since they represent problems that will not be seen by the systems designers. On the other hand, "false alarms" correspond to pages that an expert observer would declare to be of acceptable quality, but which are flagged by the prediction algorithm as "fails". In a typical printer testing and development scenario, such pages would be examined by an expert, and found to be of acceptable quality after all. "False alarm" pages result in extra pages to be examined by expert observers, which increases labor cost. But "false alarms" are not nearly as catastrophic as "misses", which represent potentially serious problems that are never seen by the systems developers. This scenario motivates us to develop a machine learning framework that will achieve the minimum "false alarm" rate subject to a specified "miss" rate. In order to construct such a set of receiver operating characteristic2 (ROC) curves, we examine various tools for the prediction, ranging from an exhaustive search over the space of the nonlinear discriminants to a Cost-Sentitive SVM3 framework. We then compare the curves gained from those methods. Our work shows promise for applying a standard framework to obtain a full ROC curve when it comes to tackling other machine learning problems in industry.
Plant MicroRNA Prediction by Supervised Machine Learning Using C5.0 Decision Trees.
Williams, Philip H; Eyles, Rod; Weiller, Georg
2012-01-01
MicroRNAs (miRNAs) are nonprotein coding RNAs between 20 and 22 nucleotides long that attenuate protein production. Different types of sequence data are being investigated for novel miRNAs, including genomic and transcriptomic sequences. A variety of machine learning methods have successfully predicted miRNA precursors, mature miRNAs, and other nonprotein coding sequences. MirTools, mirDeep2, and miRanalyzer require "read count" to be included with the input sequences, which restricts their use to deep-sequencing data. Our aim was to train a predictor using a cross-section of different species to accurately predict miRNAs outside the training set. We wanted a system that did not require read-count for prediction and could therefore be applied to short sequences extracted from genomic, EST, or RNA-seq sources. A miRNA-predictive decision-tree model has been developed by supervised machine learning. It only requires that the corresponding genome or transcriptome is available within a sequence window that includes the precursor candidate so that the required sequence features can be collected. Some of the most critical features for training the predictor are the miRNA:miRNA(∗) duplex energy and the number of mismatches in the duplex. We present a cross-species plant miRNA predictor with 84.08% sensitivity and 98.53% specificity based on rigorous testing by leave-one-out validation.
The influence of negative training set size on machine learning-based virtual screening.
Kurczab, Rafał; Smusz, Sabina; Bojarski, Andrzej J
2014-01-01
The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. The impact of this rather neglected aspect of machine learning methods application was examined for sets containing a fixed number of positive and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluating parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of dynamics of those variations let us recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening.
The influence of negative training set size on machine learning-based virtual screening
2014-01-01
Background The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. Results The impact of this rather neglected aspect of machine learning methods application was examined for sets containing a fixed number of positive and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluating parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of dynamics of those variations let us recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. Conclusions In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening. PMID:24976867
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
This document is intended to help education and training institutions deliver the Machine Tool Advanced Skills Technology (MAST) curriculum to a variety of individuals and organizations. MAST consists of industry-specific skill standards and model curricula for 15 occupational specialty areas within the U.S. machine tool and metals-related…
Design and Development of an Engineering Prototype Compact X-Ray Scanner (FMS 5000)
1989-03-31
machined by "wire-EDM" (electro discharge machining ). Three different slice thicknesses can be selected from the scan menu. The set of slice thicknesses...circuit. This type of circuit is used whenever more than ten kilowatts of power are needed by a machine . For example, lathes and milling machines in a... machine shop usually use this type of input power. A three- phase circuit delivers power more efficiently than a single-phase circuit because three
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
This document is intended to help education and training institutions deliver the Machine Tool Advanced Skills Technology (MAST) curriculum to a variety of individuals and organizations. MAST consists of industry-specific skill standards and model curricula for 15 occupational specialty areas within the U.S. machine tool and metals-related…
Analysis of precision and accuracy in a simple model of machine learning
NASA Astrophysics Data System (ADS)
Lee, Julian
2017-12-01
Machine learning is a procedure where a model for the world is constructed from a training set of examples. It is important that the model should capture relevant features of the training set, and at the same time make correct prediction for examples not included in the training set. I consider the polynomial regression, the simplest method of learning, and analyze the accuracy and precision for different levels of the model complexity.
ERIC Educational Resources Information Center
Air Univ., Gunter AFS, Ala. Extension Course Inst.
This four-volume student text is designed for use by Air Force personnel enrolled in a self-study extension course for machinists. Covered in the individual volumes are machine shop fundamentals, metallurgy and advanced machine work, advanced machine work, and tool design and shop management. Each volume in the set contains a series of lessons,…
Modelling machine ensembles with discrete event dynamical system theory
NASA Technical Reports Server (NTRS)
Hunter, Dan
1990-01-01
Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks for a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory , are described by the following: a set of system and transition states, an event alphabet that portrays actions that takes a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is presented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or implementing a feedback DEDS controller (closed-loop control).
Li, Yang; Yang, Jianyi
2017-04-24
The prediction of protein-ligand binding affinity has recently been improved remarkably by machine-learning-based scoring functions. For example, using a set of simple descriptors representing the atomic distance counts, the RF-Score improves the Pearson correlation coefficient to about 0.8 on the core set of the PDBbind 2007 database, which is significantly higher than the performance of any conventional scoring function on the same benchmark. A few studies have been made to discuss the performance of machine-learning-based methods, but the reason for this improvement remains unclear. In this study, by systemically controlling the structural and sequence similarity between the training and test proteins of the PDBbind benchmark, we demonstrate that protein structural and sequence similarity makes a significant impact on machine-learning-based methods. After removal of training proteins that are highly similar to the test proteins identified by structure alignment and sequence alignment, machine-learning-based methods trained on the new training sets do not outperform the conventional scoring functions any more. On the contrary, the performance of conventional functions like X-Score is relatively stable no matter what training data are used to fit the weights of its energy terms.
Predicting a small molecule-kinase interaction map: A machine learning approach
2011-01-01
Background We present a machine learning approach to the problem of protein ligand interaction prediction. We focus on a set of binding data obtained from 113 different protein kinases and 20 inhibitors. It was attained through ATP site-dependent binding competition assays and constitutes the first available dataset of this kind. We extract information about the investigated molecules from various data sources to obtain an informative set of features. Results A Support Vector Machine (SVM) as well as a decision tree algorithm (C5/See5) is used to learn models based on the available features which in turn can be used for the classification of new kinase-inhibitor pair test instances. We evaluate our approach using different feature sets and parameter settings for the employed classifiers. Moreover, the paper introduces a new way of evaluating predictions in such a setting, where different amounts of information about the binding partners can be assumed to be available for training. Results on an external test set are also provided. Conclusions In most of the cases, the presented approach clearly outperforms the baseline methods used for comparison. Experimental results indicate that the applied machine learning methods are able to detect a signal in the data and predict binding affinity to some extent. For SVMs, the binding prediction can be improved significantly by using features that describe the active site of a kinase. For C5, besides diversity in the feature set, alignment scores of conserved regions turned out to be very useful. PMID:21708012
Prioritizing individual genetic variants after kernel machine testing using variable selection.
He, Qianchuan; Cai, Tianxi; Liu, Yang; Zhao, Ni; Harmon, Quaker E; Almli, Lynn M; Binder, Elisabeth B; Engel, Stephanie M; Ressler, Kerry J; Conneely, Karen N; Lin, Xihong; Wu, Michael C
2016-12-01
Kernel machine learning methods, such as the SNP-set kernel association test (SKAT), have been widely used to test associations between traits and genetic polymorphisms. In contrast to traditional single-SNP analysis methods, these methods are designed to examine the joint effect of a set of related SNPs (such as a group of SNPs within a gene or a pathway) and are able to identify sets of SNPs that are associated with the trait of interest. However, as with many multi-SNP testing approaches, kernel machine testing can draw conclusion only at the SNP-set level, and does not directly inform on which one(s) of the identified SNP set is actually driving the associations. A recently proposed procedure, KerNel Iterative Feature Extraction (KNIFE), provides a general framework for incorporating variable selection into kernel machine methods. In this article, we focus on quantitative traits and relatively common SNPs, and adapt the KNIFE procedure to genetic association studies and propose an approach to identify driver SNPs after the application of SKAT to gene set analysis. Our approach accommodates several kernels that are widely used in SNP analysis, such as the linear kernel and the Identity by State (IBS) kernel. The proposed approach provides practically useful utilities to prioritize SNPs, and fills the gap between SNP set analysis and biological functional studies. Both simulation studies and real data application are used to demonstrate the proposed approach. © 2016 WILEY PERIODICALS, INC.
NASA Astrophysics Data System (ADS)
Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.
2018-03-01
Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to classify and rank binding affinities. Using simplified data sets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified data sets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems.
Anarchism & Educational Policy Studies; A Marxist View of Joel Spring's "The Sorting Machine."
ERIC Educational Resources Information Center
Berlowitz, Marvin J.
A critical analysis and interpretation of "The Sorting Machine" by Joel H. Spring is presented. The book, which uses a historical revisionist approach to trace the development and impact of the corporate-government-foundation network on the ideological orientation of the American educational system, makes its greatest contribution by…
Visual feedback system to reduce errors while operating roof bolting machines
Steiner, Lisa J.; Burgess-Limerick, Robin; Eiter, Brianna; Porter, William; Matty, Tim
2015-01-01
Problem Operators of roof bolting machines in underground coal mines do so in confined spaces and in very close proximity to the moving equipment. Errors in the operation of these machines can have serious consequences, and the design of the equipment interface has a critical role in reducing the probability of such errors. Methods An experiment was conducted to explore coding and directional compatibility on actual roof bolting equipment and to determine the feasibility of a visual feedback system to alert operators of critical movements and to also alert other workers in close proximity to the equipment to the pending movement of the machine. The quantitative results of the study confirmed the potential for both selection errors and direction errors to be made, particularly during training. Results Subjective data confirmed a potential benefit of providing visual feedback of the intended operations and movements of the equipment. Impact This research may influence the design of these and other similar control systems to provide evidence for the use of warning systems to improve operator situational awareness. PMID:23398703
High productivity machining of holes in Inconel 718 with SiAlON tools
NASA Astrophysics Data System (ADS)
Agirreurreta, Aitor Arruti; Pelegay, Jose Angel; Arrazola, Pedro Jose; Ørskov, Klaus Bonde
2016-10-01
Inconel 718 is often employed in aerospace engines and power generation turbines. Numerous researches have proven the enhanced productivity when turning with ceramic tools compared to carbide ones, however there is considerably less information with regard to milling. Moreover, no knowledge has been published about machining holes with this type of tools. Additional research on different machining techniques, like for instance circular ramping, is critical to expand the productivity improvements that ceramics can offer. In this a 3D model of the machining and a number of experiments with SiAlON round inserts have been carried out in order to evaluate the effect of the cutting speed and pitch on the tool wear and chip generation. The results of this analysis show that three different types of chips are generated and also that there are three potential wear zones. Top slice wear is identified as the most critical wear type followed by the notch wear as a secondary wear mechanism. Flank wear and adhesion are also found in most of the tests.
Critical Technology Assessment of Five Axis Simultaneous Control Machine Tools
2009-07-01
assessment, BIS specifically examined: • The application of Export Control Classification Numbers ( ECCN ) 2B001.b.2 and 2B001.c.2 controls and related...availability of certain five axis simultaneous control mills, mill/turns, and machining centers controlled by ECCN 2B001.b.2 (but not grinders controlled by... ECCN 2B001.c.2) exists to China and Taiwan, which both have an indigenous capability to produce five axis simultaneous control machine tools with
Intelligent image processing for machine safety
NASA Astrophysics Data System (ADS)
Harvey, Dennis N.
1994-10-01
This paper describes the use of intelligent image processing as a machine guarding technology. One or more color, linear array cameras are positioned to view the critical region(s) around a machine tool or other piece of manufacturing equipment. The image data is processed to provide indicators of conditions dangerous to the equipment via color content, shape content, and motion content. The data from these analyses is then sent to a threat evaluator. The purpose of the evaluator is to determine if a potentially machine-damaging condition exists based on the analyses of color, shape, and motion, and on `knowledge' of the specific environment of the machine. The threat evaluator employs fuzzy logic as a means of dealing with uncertainty in the vision data.
Winding Schemes for Wide Constant Power Range of Double Stator Transverse Flux Machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Husain, Tausif; Hassan, Iftekhar; Sozer, Yilmaz
2015-05-01
Different ring winding schemes for double sided transverse flux machines are investigated in this paper for wide speed operation. The windings under investigation are based on two inverters used in parallel. At higher power applications this arrangement improves the drive efficiency. The new winding structure through manipulation of the end connection splits individual sets into two and connects the partitioned turns from individual stator sets in series. This configuration offers the flexibility of torque profiling and a greater flux weakening region. At low speeds and low torque only one winding set is capable of providing the required torque thus providingmore » greater fault tolerance. At higher speeds one set is dedicated to torque production and the other for flux control. The proposed method improves the machine efficiency and allows better flux weakening which is desirable for traction applications.« less
Application of Elements of TPM Strategy for Operation Analysis of Mining Machine
NASA Astrophysics Data System (ADS)
Brodny, Jaroslaw; Tutak, Magdalena
2017-12-01
Total Productive Maintenance (TPM) strategy includes group of activities and actions in order to maintenance machines in failure-free state and without breakdowns thanks to tending limitation of failures, non-planned shutdowns, lacks and non-planned service of machines. These actions are ordered to increase effectiveness of utilization of possessed devices and machines in company. Very significant element of this strategy is connection of technical actions with changes in their perception by employees. Whereas fundamental aim of introduction this strategy is improvement of economic efficiency of enterprise. Increasing competition and necessity of reduction of production costs causes that also mining enterprises are forced to introduce this strategy. In the paper examples of use of OEE model for quantitative evaluation of selected mining devices were presented. OEE model is quantitative tool of TPM strategy and can be the base for further works connected with its introduction. OEE indicator is the product of three components which include availability and performance of the studied machine and the quality of the obtained product. The paper presents the results of the effectiveness analysis of the use of a set of mining machines included in the longwall system, which is the first and most important link in the technological line of coal production. The set of analyzed machines included the longwall shearer, armored face conveyor and cruscher. From a reliability point of view, the analyzed set of machines is a system that is characterized by the serial structure. The analysis was based on data recorded by the industrial automation system used in the mines. This method of data acquisition ensured their high credibility and a full time synchronization. Conclusions from the research and analyses should be used to reduce breakdowns, failures and unplanned downtime, increase performance and improve production quality.
Machine learning in genetics and genomics
Libbrecht, Maxwell W.; Noble, William Stafford
2016-01-01
The field of machine learning promises to enable computers to assist humans in making sense of large, complex data sets. In this review, we outline some of the main applications of machine learning to genetic and genomic data. In the process, we identify some recurrent challenges associated with this type of analysis and provide general guidelines to assist in the practical application of machine learning to real genetic and genomic data. PMID:25948244
NASA Astrophysics Data System (ADS)
Rybczyński, Józef
2011-02-01
This paper presents the results of computer simulation of bearing misalignment defects in a power turbogenerator. This malfunction is typical for great multi-rotor and multi-bearing rotating machines and very common in power turbo-sets. Necessary calculations were carried out by the computer code system MESWIR, developed and used at the IFFM in Gdansk for calculating dynamics of rotors supported on oil bearings. The results are presented in the form of a set of journal and bush trajectories of all turbo-set bearings. Our analysis focuses on the vibrational effects of displacing the two most vulnerable machine bearings in horizontal and vertical directions by the maximum acceptable range calculated with regard to bearing vibration criterion. This assumption required preliminary assessment of the maximum values for the permissible bearing dislocations. We show the relations between the attributes of the particular bearing trajectories and the bearing displacements in relation to their base design position. The shape and dimensions of bearing trajectories are interpreted based on the theory of hydrodynamic lubrication of oil bearings. It was shown that the relative journal trajectories and absolute bush trajectories carry much important information about the dynamic state of the machine, indicating also the way in which bearings are loaded. Therefore, trajectories can be a source of information about the position and direction of bearing misalignments. This article indicates the potential of using trajectory patterns for diagnosing misalignment defects in rotating machines and suggests including sets of trajectory patterns to the knowledge base of a machine diagnostic system.
Scheirer, Walter J; de Rezende Rocha, Anderson; Sapkota, Archana; Boult, Terrance E
2013-07-01
To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of "closed set" recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is "open set" recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel "1-vs-set machine," which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks.
Pricing and Promotion Effects on Low-Fat Vending Snack Purchases: The CHIPS Study.
ERIC Educational Resources Information Center
French, Simone A.; Jeffery, Robert W.; Story, Mary; Breitlow, Kyle K.; Baxter, Judith S.; Hannan, Peter; Snyder, M. Patricia
2001-01-01
Examined the effects of pricing and promotion strategies on purchases of low-fat snacks from vending machines set up at secondary schools and worksites in Minnesota. Analysis of sales data indicated that reducing relative prices on low-fat snacks was very effective in promoting lower-fat snack purchases from vending machines in both settings. (SM)
A Suggested Set of Job and Task Sheets for Machine Shop Training.
ERIC Educational Resources Information Center
Texas A and M Univ., College Station. Vocational Instructional Services.
This set of job and task sheets consists of three multi-part jobs that are adaptable for use in regular vocational industrial education programs for training machinists and machine shop operators. After completing the sheets included in this volume, students should be able to construct a planer jack, a radius cutter, and a surface gage. Each job…
The effect of CNC and manual laser machining on electrical resistance of HDPE/MWCNT composite
NASA Astrophysics Data System (ADS)
Mohammadi, Fatemeh; Farshbaf Zinati, Reza; Fattahi, A. M.
2018-05-01
In this study, electrical conductivity of high-density polyethylene (HDPE)/multi-walled carbon nanotube (MWCNT) composite was investigated after laser machining. To this end, produced using plastic injection process, nano-composite samples were laser machined with various combinations of input parameters such as feed rate (35, 45, and 55 mm/min), feed angle with injection flow direction (0°, 45°, and 90°), and MWCNT content (0.5, 1, and 1.5 wt%). The angle between laser feed and injected flow direction was set via either of two different methods: CNC programming and manual setting. The results showed that the parameters of angle between laser line and melt flow direction and feed rate were both found to have statistically significance and physical impacts on electrical resistance of the samples in manual setting. Also, maximum conductivity was seen when the angle between laser line and melt flow direction was set to 90° in manual setting, and maximum conductivity was seen at feed rate of 55 mm/min in both of CNC programming and manual setting.
NASA Astrophysics Data System (ADS)
Giangrande, S. E.; WANG, D.; Hardin, J. C.; Mitchell, J.
2017-12-01
As part of the 2 year Department of Energy Atmospheric Radiation Measurement (ARM) Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5) campaign, the ARM Mobile Facility (AMF) collected a unique set of observations in a region of strong climatic significance near Manacapuru, Brazil. An important example for the beneficial observational record obtained by ARM during this campaign was that of the Radar Wind Profiler (RWP). This dataset has been previously documented for providing critical convective cloud vertical air velocity retrievals and precipitation properties (e.g., calibrated reflectivity factor Z, rainfall rates) under a wide variety of atmospheric conditions. Vertical air motion estimates to within deep convective cores such as those available from this RWP system have been previously identified as critical constraints for ongoing global climate modeling activities and deep convective cloud process studies. As an extended deployment within this `green ocean' region, the RWP site and collocated AMF surface gauge instrumentation experienced a unique hybrid of tropical and continental precipitation conditions, including multiple wet and dry season precipitation regimes, convective and organized stratiform storm dynamics and contributions to rainfall accumulation, pristine aerosol conditions of the locale, as well as the effects of the Manaus, Brazil, mega city pollution plume. For hydrological applications and potential ARM products, machine learning methods developed using this dataset are explored to demonstrate advantages in geophysical retrievals when compared to traditional methods. Emphasis is on performance improvements when providing additional information on storm structure and regime or echo type classifications. Since deep convective cloud dynamic insights (core updraft/downdraft properties) are difficult to obtain directly by conventional radars that also observe radar reflectivity factor profiles similar to RWP systems, we also consider possible machine learning applications to inform on (statistical) proxy convective relationships between observed convective core dynamics and radar microphysical properties that are otherwise not easily related by clear physical process paths using existing radar networks.
Automated inspection and precision grinding of spiral bevel gears
NASA Technical Reports Server (NTRS)
Frint, Harold
1987-01-01
The results are presented of a four phase MM&T program to define, develop, and evaluate an improved inspection system for spiral bevel gears. The improved method utilizes a multi-axis coordinate measuring machine which maps the working flank of the tooth and compares it to nominal reference values stored in the machine's computer. A unique feature of the system is that corrective grinding machine settings can be automatically calculated and printed out when necessary to correct an errant tooth profile. This new method eliminates most of the subjective decision making involved in the present method, which compares contact patterns obtained when the gear set is run under light load in a rolling test machine. It produces a higher quality gear with significant inspection time and cost savings.
Enhanced automated spiral bevel gear inspection
NASA Technical Reports Server (NTRS)
Frint, Harold K.; Glasow, Warren
1992-01-01
Presented here are the results of a manufacturing and technology program to define, develop, and evaluate an enhanced inspection system for spiral bevel gears. The method uses a multi-axis coordinate measuring machine which maps the working surface of the tooth and compares it with nominal reference values stored in the machine's computer. The enhanced technique features a means for automatically calculating corrective grinding machine settings, involving both first and second order changes, to control the tooth profile to within specified tolerance limits. This enhanced method eliminates the subjective decision making involved in the tooth patterning method, still in use today, which compares contract patterns obtained when the gear is set to run under light load in a rolling test machine. It produces a higher quality gear with significant inspection time and cost savings.
Machine Shop. Module 8: CNC (Computerized Numerical Control). Instructor's Guide.
ERIC Educational Resources Information Center
Crosswhite, Dwight
This document consists of materials for a five-unit course on the following topics: (1) safety guidelines; (2) coordinates and dimensions; (3) numerical control math; (4) programming for numerical control machines; and (5) setting and operating the numerical control machine. The instructor's guide begins with a list of competencies covered in the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iblisdir, S.; Gisin, N.; Acin, A.
We investigate the optimal distribution of quantum information over multipartite systems in asymmetric settings. We introduce cloning transformations that take N identical replicas of a pure state in any dimension as input and yield a collection of clones with nonidentical fidelities. As an example, if the clones are partitioned into a set of M{sub A} clones with fidelity F{sup A} and another set of M{sub B} clones with fidelity F{sup B}, the trade-off between these fidelities is analyzed, and particular cases of optimal N{yields}M{sub A}+M{sub B} cloning machines are exhibited. We also present an optimal 1{yields}1+1+1 cloning machine, which ismore » an example of a tripartite fully asymmetric cloner. Finally, it is shown how these cloning machines can be optically realized.« less
Machine learning for neuroimaging with scikit-learn.
Abraham, Alexandre; Pedregosa, Fabian; Eickenberg, Michael; Gervais, Philippe; Mueller, Andreas; Kossaifi, Jean; Gramfort, Alexandre; Thirion, Bertrand; Varoquaux, Gaël
2014-01-01
Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g., multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g., resting state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain.
Machine learning for neuroimaging with scikit-learn
Abraham, Alexandre; Pedregosa, Fabian; Eickenberg, Michael; Gervais, Philippe; Mueller, Andreas; Kossaifi, Jean; Gramfort, Alexandre; Thirion, Bertrand; Varoquaux, Gaël
2014-01-01
Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g., multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g., resting state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain. PMID:24600388
Maker Cultures and the Prospects for Technological Action.
Nascimento, Susana; Pólvora, Alexandre
2018-06-01
Supported by easier and cheaper access to tools and expanding communities, maker cultures are pointing towards the ideas of (almost) everyone designing, creating, producing and distributing renewed, new and improved products, machines, things or artefacts. A careful analysis of the assumptions and challenges of maker cultures emphasizes the relevance of what may be called technological action, that is, active and critical interventions regarding the purposes and applications of technologies within ordinary lives, thus countering the deterministic trends of current directions of technology. In such transformative potential, we will explore a set of elements what is and could be technological action through snapshots of maker cultures based on the empirical research conducted in three particular contexts: the Fab Lab Network, Maker Media core outputs and initiatives such as Maker Faires, and the Open Source Hardware Association (OSHWA). Elements such as control and empowerment through material engagement, openness and sharing, and social, cultural, political and ethical values of the common good in topics such as diversity, sustainability and transparency, are critically analysed.
Machining of bone: Analysis of cutting force and surface roughness by turning process.
Noordin, M Y; Jiawkok, N; Ndaruhadi, P Y M W; Kurniawan, D
2015-11-01
There are millions of orthopedic surgeries and dental implantation procedures performed every year globally. Most of them involve machining of bones and cartilage. However, theoretical and analytical study on bone machining is lagging behind its practice and implementation. This study views bone machining as a machining process with bovine bone as the workpiece material. Turning process which makes the basis of the actually used drilling process was experimented. The focus is on evaluating the effects of three machining parameters, that is, cutting speed, feed, and depth of cut, to machining responses, that is, cutting forces and surface roughness resulted by the turning process. Response surface methodology was used to quantify the relation between the machining parameters and the machining responses. The turning process was done at various cutting speeds (29-156 m/min), depths of cut (0.03 -0.37 mm), and feeds (0.023-0.11 mm/rev). Empirical models of the resulted cutting force and surface roughness as the functions of cutting speed, depth of cut, and feed were developed. Observation using the developed empirical models found that within the range of machining parameters evaluated, the most influential machining parameter to the cutting force is depth of cut, followed by feed and cutting speed. The lowest cutting force was obtained at the lowest cutting speed, lowest depth of cut, and highest feed setting. For surface roughness, feed is the most significant machining condition, followed by cutting speed, and with depth of cut showed no effect. The finest surface finish was obtained at the lowest cutting speed and feed setting. © IMechE 2015.
Pocket-sized versus standard ultrasound machines in abdominal imaging.
Tse, K H; Luk, W H; Lam, M C
2014-06-01
The pocket-sized ultrasound machine has emerged as an invaluable tool for quick assessment in emergency and general practice settings. It is suitable for instant and quick assessment in cardiac imaging. However, its applicability in the imaging of other body parts has yet to be established. In this pictorial review, we compared the performance of the pocketsized ultrasound machine against the standard ultrasound machine for its image quality in common abdominal pathology.
Comparing and Validating Machine Learning Models for Mycobacterium tuberculosis Drug Discovery.
Lane, Thomas; Russo, Daniel P; Zorn, Kimberley M; Clark, Alex M; Korotcov, Alexandru; Tkachenko, Valery; Reynolds, Robert C; Perryman, Alexander L; Freundlich, Joel S; Ekins, Sean
2018-04-26
Tuberculosis is a global health dilemma. In 2016, the WHO reported 10.4 million incidences and 1.7 million deaths. The need to develop new treatments for those infected with Mycobacterium tuberculosis ( Mtb) has led to many large-scale phenotypic screens and many thousands of new active compounds identified in vitro. However, with limited funding, efforts to discover new active molecules against Mtb needs to be more efficient. Several computational machine learning approaches have been shown to have good enrichment and hit rates. We have curated small molecule Mtb data and developed new models with a total of 18,886 molecules with activity cutoffs of 10 μM, 1 μM, and 100 nM. These data sets were used to evaluate different machine learning methods (including deep learning) and metrics and to generate predictions for additional molecules published in 2017. One Mtb model, a combined in vitro and in vivo data Bayesian model at a 100 nM activity yielded the following metrics for 5-fold cross validation: accuracy = 0.88, precision = 0.22, recall = 0.91, specificity = 0.88, kappa = 0.31, and MCC = 0.41. We have also curated an evaluation set ( n = 153 compounds) published in 2017, and when used to test our model, it showed the comparable statistics (accuracy = 0.83, precision = 0.27, recall = 1.00, specificity = 0.81, kappa = 0.36, and MCC = 0.47). We have also compared these models with additional machine learning algorithms showing Bayesian machine learning models constructed with literature Mtb data generated by different laboratories generally were equivalent to or outperformed deep neural networks with external test sets. Finally, we have also compared our training and test sets to show they were suitably diverse and different in order to represent useful evaluation sets. Such Mtb machine learning models could help prioritize compounds for testing in vitro and in vivo.
Wang, Yong; Chen, Xiang-Mei; Cai, Guang-Yan; Li, Wen-Ge; Zhang, Ai-Hua; Hao, Li-Rong; Shi, Ming; Wang, Rong; Jiang, Hong-Li; Luo, Hui-Min; Zhang, Dong; Sun, Xue-Feng
2017-08-02
To evaluate the in vivo and in vitro performance of a China-made dialysis machine (SWS-4000). This was a multi-center prospective controlled study consisting of both long-term in vitro evaluations and cross-over in vivo tests in 132 patients. The China-made SWS-4000 dialysis machine was compared with a German-made dialysis machine (Fresenius 4008) with regard to Kt/V values, URR values, and dialysis-related adverse reactions in patients on maintenance hemodialysis, as well as the ultrafiltration rate, the concentration of electrolytes in the proportioned dialysate, the rate of heparin injection, the flow rate of the blood pump, and the rate of malfunction. The Kt/V and URR values at the 1st and 4th weeks of dialysis as well as the incidence of adverse effects did not differ between the two groups in cross-over in vivo tests (P > 0.05). There were no significant differences between the two groups in the error values of the ultrafiltration rate, the rate of heparin injection or the concentrations of electrolytes in the proportioned dialysate at different time points under different parameter settings. At weeks 2 and 24, with the flow rate of the blood pump set at 300 mL/min, the actual error of the SWS-4000 dialysis machine was significantly higher than that of the Fresenius 4008 dialysis machine (P < 0.05), but there was no significant difference at other time points or under other settings (P > 0.05). The malfunction rate was higher in the SWS-4000 group than in the Fresenius 4008 group (P < 0.05). The in vivo performance of the SWS-4000 dialysis machine is roughly comparable to that of the Fresenius 4008 dialysis machine; however, the malfunction rate of the former is higher than that of the latter in in vitro tests. The stability and long-term accuracy of the SWS-4000 dialysis machine remain to be improved.
NASA Astrophysics Data System (ADS)
Mia, Mozammel; Al Bashir, Mahmood; Dhar, Nikhil Ranjan
2016-10-01
Hard turning is increasingly employed in machining, lately, to replace time-consuming conventional turning followed by grinding process. An excessive amount of tool wear in hard turning is one of the main hurdles to be overcome. Many researchers have developed tool wear model, but most of them developed it for a particular work-tool-environment combination. No aggregate model is developed that can be used to predict the amount of principal flank wear for specific machining time. An empirical model of principal flank wear (VB) has been developed for the different hardness of workpiece (HRC40, HRC48 and HRC56) while turning by coated carbide insert with different configurations (SNMM and SNMG) under both dry and high pressure coolant conditions. Unlike other developed model, this model includes the use of dummy variables along with the base empirical equation to entail the effect of any changes in the input conditions on the response. The base empirical equation for principal flank wear is formulated adopting the Exponential Associate Function using the experimental results. The coefficient of dummy variable reflects the shifting of the response from one set of machining condition to another set of machining condition which is determined by simple linear regression. The independent cutting parameters (speed, rate, depth of cut) are kept constant while formulating and analyzing this model. The developed model is validated with different sets of machining responses in turning hardened medium carbon steel by coated carbide inserts. For any particular set, the model can be used to predict the amount of principal flank wear for specific machining time. Since the predicted results exhibit good resemblance with experimental data and the average percentage error is <10 %, this model can be used to predict the principal flank wear for stated conditions.
1988-05-01
Shearing Machines WR/MMI DG 3446 Forging Machinery and Hammers WR/MMI DG 3447 Wire and Metal Ribbon Forming Machines WR/MMI DG 3448 Riveting Machines ...R/MN1I DG 3449 Miscellaneous Secondary Metal Forming & Cutting WR/MMI DG Machinery 3450 Machine Tools, Portable WR/MMI DG 3455 Cutting Tools for...Secondary Metalworking Machinery WR/MMI DG WR 3465 Production Jigs, Fixtures and Templates WR/MMI DG WR 3470 Machine Shop Sets, Kits, and Outfits WR/MMI DG
ERIC Educational Resources Information Center
Johnson, Christopher W.
1996-01-01
The development of safety-critical systems (aircraft cockpits and reactor control rooms) is qualitatively different from that of other interactive systems. These differences impose burdens on design teams that must ensure the development of human-machine interfaces. Analyzes strengths and weaknesses of formal methods for the design of user…
Cherry, Colin; Zhu, Xiaodan; Martin, Joel; de Bruijn, Berry
2013-01-01
An analysis of the timing of events is critical for a deeper understanding of the course of events within a patient record. The 2012 i2b2 NLP challenge focused on the extraction of temporal relationships between concepts within textual hospital discharge summaries. The team from the National Research Council Canada (NRC) submitted three system runs to the second track of the challenge: typifying the time-relationship between pre-annotated entities. The NRC system was designed around four specialist modules containing statistical machine learning classifiers. Each specialist targeted distinct sets of relationships: local relationships, 'sectime'-type relationships, non-local overlap-type relationships, and non-local causal relationships. The best NRC submission achieved a precision of 0.7499, a recall of 0.6431, and an F1 score of 0.6924, resulting in a statistical tie for first place. Post hoc improvements led to a precision of 0.7537, a recall of 0.6455, and an F1 score of 0.6954, giving the highest scores reported on this task to date. Methods for general relation extraction extended well to temporal relations, and gave top-ranked state-of-the-art results. Careful ordering of predictions within result sets proved critical to this success.
Application of TRIZ approach to machine vibration condition monitoring problems
NASA Astrophysics Data System (ADS)
Cempel, Czesław
2013-12-01
Up to now machine condition monitoring has not been seriously approached by TRIZ1TRIZ= Russian acronym for Inventive Problem Solving System, created by G. Altshuller ca 50 years ago. users, and the knowledge of TRIZ methodology has not been applied there intensively. However, there are some introductory papers of present author posted on Diagnostic Congress in Cracow (Cempel, in press [11]), and Diagnostyka Journal as well. But it seems to be further need to make such approach from different sides in order to see, if some new knowledge and technology will emerge. In doing this we need at first to define the ideal final result (IFR) of our innovation problem. As a next we need a set of parameters to describe the problems of system condition monitoring (CM) in terms of TRIZ language and set of inventive principles possible to apply, on the way to IFR. This means we should present the machine CM problem by means of contradiction and contradiction matrix. When specifying the problem parameters and inventive principles, one should use analogy and metaphorical thinking, which by definition is not exact but fuzzy, and leads sometimes to unexpected results and outcomes. The paper undertakes this important problem again and brings some new insight into system and machine CM problems. This may mean for example the minimal dimensionality of TRIZ engineering parameter set for the description of machine CM problems, and the set of most useful inventive principles applied to given engineering parameter and contradictions of TRIZ.
NASA Astrophysics Data System (ADS)
Amallynda, I.; Santosa, B.
2017-11-01
This paper proposes a new generalization of the distributed parallel machine and assembly scheduling problem (DPMASP) with eligibility constraints referred to as the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints. Within this generalization, we assume that there are a set non-identical factories or production lines, each one with a set unrelated parallel machine with different speeds in processing them disposed to a single assembly machine in series. A set of different products that are manufactured through an assembly program of a set of components (jobs) according to the requested demand. Each product requires several kinds of jobs with different sizes. Beside that we also consider to the multi-objective problem (MOP) of minimizing mean flow time and the number of tardy products simultaneously. This is known to be NP-Hard problem, is important to practice, as the former criterions to reflect the customer's demand and manufacturer's perspective. This is a realistic and complex problem with wide range of possible solutions, we propose four simple heuristics and two metaheuristics to solve it. Various parameters of the proposed metaheuristic algorithms are discussed and calibrated by means of Taguchi technique. All proposed algorithms are tested by Matlab software. Our computational experiments indicate that the proposed problem and fourth proposed algorithms are able to be implemented and can be used to solve moderately-sized instances, and giving efficient solutions, which are close to optimum in most cases.
Information Processing Research.
1988-05-01
concentrated mainly on the Hitech chess machine, which achieves its success from parallelism in the right places. Hitech has now reached a National rating...includes local user workstations, a set of central server workstations each acting as a host for a Warp machine, and a few Warp multiprocessors. The... successful completion. A quorum for an operation is any such set of sites. Neces- sary and sufficient constraints on quorum intersections are derived
Defect Genome of Cubic Perovskites for Fuel Cell Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balachandran, Janakiraman; Lin, Lianshan; Anchell, Jonathan S.
Heterogeneities such as point defects, inherent to material systems, can profoundly influence material functionalities critical for numerous energy applications. This influence in principle can be identified and quantified through development of large defect data sets which we call the defect genome, employing high-throughput ab initio calculations. However, high-throughput screening of material models with point defects dramatically increases the computational complexity and chemical search space, creating major impediments toward developing a defect genome. In this paper, we overcome these impediments by employing computationally tractable ab initio models driven by highly scalable workflows, to study formation and interaction of various point defectsmore » (e.g., O vacancies, H interstitials, and Y substitutional dopant), in over 80 cubic perovskites, for potential proton-conducting ceramic fuel cell (PCFC) applications. The resulting defect data sets identify several promising perovskite compounds that can exhibit high proton conductivity. Furthermore, the data sets also enable us to identify and explain, insightful and novel correlations among defect energies, material identities, and defect-induced local structural distortions. Finally, such defect data sets and resultant correlations are necessary to build statistical machine learning models, which are required to accelerate discovery of new materials.« less
Defect Genome of Cubic Perovskites for Fuel Cell Applications
Balachandran, Janakiraman; Lin, Lianshan; Anchell, Jonathan S.; ...
2017-10-10
Heterogeneities such as point defects, inherent to material systems, can profoundly influence material functionalities critical for numerous energy applications. This influence in principle can be identified and quantified through development of large defect data sets which we call the defect genome, employing high-throughput ab initio calculations. However, high-throughput screening of material models with point defects dramatically increases the computational complexity and chemical search space, creating major impediments toward developing a defect genome. In this paper, we overcome these impediments by employing computationally tractable ab initio models driven by highly scalable workflows, to study formation and interaction of various point defectsmore » (e.g., O vacancies, H interstitials, and Y substitutional dopant), in over 80 cubic perovskites, for potential proton-conducting ceramic fuel cell (PCFC) applications. The resulting defect data sets identify several promising perovskite compounds that can exhibit high proton conductivity. Furthermore, the data sets also enable us to identify and explain, insightful and novel correlations among defect energies, material identities, and defect-induced local structural distortions. Finally, such defect data sets and resultant correlations are necessary to build statistical machine learning models, which are required to accelerate discovery of new materials.« less
An Investigation of Data Privacy and Utility Using Machine Learning as a Gauge
ERIC Educational Resources Information Center
Mivule, Kato
2014-01-01
The purpose of this investigation is to study and pursue a user-defined approach in preserving data privacy while maintaining an acceptable level of data utility using machine learning classification techniques as a gauge in the generation of synthetic data sets. This dissertation will deal with data privacy, data utility, machine learning…
Computer Programmed Milling Machine Operations. High-Technology Training Module.
ERIC Educational Resources Information Center
Leonard, Dennis
This learning module for a high school metals and manufacturing course is designed to introduce the concept of computer-assisted machining (CAM). Through it, students learn how to set up and put data into the controller to machine a part. They also become familiar with computer-aided manufacturing and learn the advantages of computer numerical…
Machine Shop. Module 1: Machine Shop Orientation and Math. Instructor's Guide.
ERIC Educational Resources Information Center
Curtis, Donna; Nobles, Jack
This document consists of materials for a six-unit course on employment in the machine shop setting, safety, basic math skills, geometric figures and forms, math applications, and right triangles. The instructor's guide begins with a list of competencies covered in the module, descriptions of the materials included, an explanation of how to use…
Experimental Investigation – Magnetic Assisted Electro Discharge Machining
NASA Astrophysics Data System (ADS)
Kesava Reddy, Chirra; Manzoor Hussain, M.; Satyanarayana, S.; Krishna, M. V. S. Murali
2018-04-01
Emerging technology needs advanced machined parts with high strength and temperature resistance, high fatigue life at low production cost with good surface quality to fit into various industrial applications. Electro discharge machine is one of the extensively used machines to manufacture advanced machined parts which cannot be machined by other traditional machine with high precision and accuracy. Machining of DIN 17350-1.2080 (High Carbon High Chromium steel), using electro discharge machining has been discussed in this paper. In the present investigation an effort is made to use permanent magnet at various positions near the spark zone to improve surface quality of the machined surface. Taguchi methodology is used to obtain optimal choice for each machining parameter such as peak current, pulse duration, gap voltage and Servo reference voltage etc. Process parameters have significant influence on machining characteristics and surface finish. Improvement in surface finish is observed when process parameters are set at optimum condition under the influence of magnetic field at various positions.
The critical evaluation of stellar data
NASA Technical Reports Server (NTRS)
Underhill, A. B.; Mead, J. M.; Nagy, T. A.
1977-01-01
The paper discusses the importance of evaluating a catalog of stellar data, whether it is an old catalog being made available in machine-readable form, or a new catalog written expressly in machine-readable form, and discusses some principles to be followed in the evaluation of such data. A procedure to be followed when checking out an astronomical catalog on magnetic tape is described. A cross index system which relates the different identification numbers of a star or other astronomical object as they appear in different catalogs in machine-readable form is described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarthy, J.M.
The theory and methodology of design of general-purpose machines that may be controlled by a computer to perform all the tasks of a set of special-purpose machines is the focus of modern machine design research. These seventeen contributions chronicle recent activity in the analysis and design of robot manipulators that are the prototype of these general-purpose machines. They focus particularly on kinematics, the geometry of rigid-body motion, which is an integral part of machine design theory. The challenges to kinematics researchers presented by general-purpose machines such as the manipulator are leading to new perspectives in the design and control ofmore » simpler machines with two, three, and more degrees of freedom. Researchers are rethinking the uses of gear trains, planar mechanisms, adjustable mechanisms, and computer controlled actuators in the design of modern machines.« less
Calculations of safe collimator settings and β* at the CERN Large Hadron Collider
NASA Astrophysics Data System (ADS)
Bruce, R.; Assmann, R. W.; Redaelli, S.
2015-06-01
The first run of the Large Hadron Collider (LHC) at CERN was very successful and resulted in important physics discoveries. One way of increasing the luminosity in a collider, which gave a very significant contribution to the LHC performance in the first run and can be used even if the beam intensity cannot be increased, is to decrease the transverse beam size at the interaction points by reducing the optical function β*. However, when doing so, the beam becomes larger in the final focusing system, which could expose its aperture to beam losses. For the LHC, which is designed to store beams with a total energy of 362 MJ, this is critical, since the loss of even a small fraction of the beam could cause a magnet quench or even damage. Therefore, the machine aperture has to be protected by the collimation system. The settings of the collimators constrain the maximum beam size that can be tolerated and therefore impose a lower limit on β*. In this paper, we present calculations to determine safe collimator settings and the resulting limit on β*, based on available aperture and operational stability of the machine. Our model was used to determine the LHC configurations in 2011 and 2012 and it was found that β* could be decreased significantly compared to the conservative model used in 2010. The gain in luminosity resulting from the decreased margins between collimators was more than a factor 2, and a further contribution from the use of realistic aperture estimates based on measurements was almost as large. This has played an essential role in the rapid and successful accumulation of experimental data in the LHC.
Modal identification of spindle-tool unit in high-speed machining
NASA Astrophysics Data System (ADS)
Gagnol, Vincent; Le, Thien-Phu; Ray, Pascal
2011-10-01
The accurate knowledge of high-speed motorised spindle dynamic behaviour during machining is important in order to ensure the reliability of machine tools in service and the quality of machined parts. More specifically, the prediction of stable cutting regions, which is a critical requirement for high-speed milling operations, requires the accurate estimation of tool/holder/spindle set dynamic modal parameters. These estimations are generally obtained through Frequency Response Function (FRF) measurements of the non-rotating spindle. However, significant changes in modal parameters are expected to occur during operation, due to high-speed spindle rotation. The spindle's modal variations are highlighted through an integrated finite element model of the dynamic high-speed spindle-bearing system, taking into account rotor dynamics effects. The dependency of dynamic behaviour on speed range is then investigated and determined with accuracy. The objective of the proposed paper is to validate these numerical results through an experiment-based approach. Hence, an experimental setup is elaborated to measure rotating tool vibration during the machining operation in order to determine the spindle's modal frequency variation with respect to spindle speed in an industrial environment. The identification of natural frequencies of the spindle under rotating conditions is challenging, due to the low number of sensors and the presence of many harmonics in the measured signals. In order to overcome these issues and to extract the characteristics of the system, the spindle modes are determined through a 3-step procedure. First, spindle modes are highlighted using the Frequency Domain Decomposition (FDD) technique, with a new formulation at the considered rotating speed. These extracted modes are then analysed through the value of their respective damping ratios in order to separate the harmonics component from structural spindle natural frequencies. Finally, the stochastic properties of the modes are also investigated by considering the probability density of the retained modes. Results show a good correlation between numerical and experiment-based identified frequencies. The identified spindle-tool modal properties during machining allow the numerical model to be considered as representative of the real dynamic properties of the system.
Three-point compound sine plate offers cost and weight savings
NASA Technical Reports Server (NTRS)
Barras, A. P.
1972-01-01
Work piece adjustment fixture reduces size, weight and set-up complexity of alignment platforms used in metal blank machining. Design benefits designers and manufacturers of machine tools and measuring equipment.
Comparison of Machine Learning Methods for the Arterial Hypertension Diagnostics
Belo, David; Gamboa, Hugo
2017-01-01
The paper presents results of machine learning approach accuracy applied analysis of cardiac activity. The study evaluates the diagnostics possibilities of the arterial hypertension by means of the short-term heart rate variability signals. Two groups were studied: 30 relatively healthy volunteers and 40 patients suffering from the arterial hypertension of II-III degree. The following machine learning approaches were studied: linear and quadratic discriminant analysis, k-nearest neighbors, support vector machine with radial basis, decision trees, and naive Bayes classifier. Moreover, in the study, different methods of feature extraction are analyzed: statistical, spectral, wavelet, and multifractal. All in all, 53 features were investigated. Investigation results show that discriminant analysis achieves the highest classification accuracy. The suggested approach of noncorrelated feature set search achieved higher results than data set based on the principal components. PMID:28831239
B-machine polarimeter: A telescope to measure the polarization of the cosmic microwave background
NASA Astrophysics Data System (ADS)
Williams, Brian Dean
The B-Machine Telescope is the culmination of several years of development, construction, characterization and observation. The telescope is a departure from standard polarization chopping of correlation receivers to a half wave plate technique. Typical polarimeters use a correlation receiver to chop the polarization signal to overcome the 1/f noise inherent in HEMT amplifiers. B-Machine uses a room temperature half wave plate technology to chop between polarization states and measure the polarization signature of the CMB. The telescope has a demodulated 1/f knee of 5 mHz and an average sensitivity of 1.6 mK s . This document examines the construction, characterization, observation of astronomical sources, and data set analysis of B-Machine. Preliminary power spectra and sky maps with large sky coverage for the first year data set are included.
Nishimoto, Atsuko; Kawakami, Michiyuki; Fujiwara, Toshiyuki; Hiramoto, Miho; Honaga, Kaoru; Abe, Kaoru; Mizuno, Katsuhiro; Ushiba, Junichi; Liu, Meigen
2018-01-10
Brain-machine interface training was developed for upper-extremity rehabilitation for patients with severe hemiparesis. Its clinical application, however, has been limited because of its lack of feasibility in real-world rehabilitation settings. We developed a new compact task-specific brain-machine interface system that enables task-specific training, including reach-and-grasp tasks, and studied its clinical feasibility and effectiveness for upper-extremity motor paralysis in patients with stroke. Prospective beforeâ€"after study. Twenty-six patients with severe chronic hemiparetic stroke. Participants were trained with the brain-machine interface system to pick up and release pegs during 40-min sessions and 40 min of standard occupational therapy per day for 10 days. Fugl-Meyer upper-extremity motor (FMA) and Motor Activity Log-14 amount of use (MAL-AOU) scores were assessed before and after the intervention. To test its feasibility, 4 occupational therapists who operated the system for the first time assessed it with the Quebec User Evaluation of Satisfaction with assistive Technology (QUEST) 2.0. FMA and MAL-AOU scores improved significantly after brain-machine interface training, with the effect sizes being medium and large, respectively (p<0.01, d=0.55; p<0.01, d=0.88). QUEST effectiveness and safety scores showed feasibility and satisfaction in the clinical setting. Our newly developed compact brain-machine interface system is feasible for use in real-world clinical settings.
The Body-Machine Interface: A new perspective on an old theme
Casadio, Maura; Ranganathan, Rajiv; Mussa-Ivaldi, Ferdinando A.
2012-01-01
Body-machine interfaces establish a way to interact with a variety of devices, allowing their users to extend the limits of their performance. Recent advances in this field, ranging from computer-interfaces to bionic limbs, have had important consequences for people with movement disorders. In this article, we provide an overview of the basic concepts underlying the body-machine interface with special emphasis on their use for rehabilitation and for operating assistive devices. We outline the steps involved in building such an interface and we highlight the critical role of body-machine interfaces in addressing theoretical issues in motor control as well as their utility in movement rehabilitation. PMID:23237465
Subcutaneous ICD screening with the Boston Scientific ZOOM programmer versus a 12-lead ECG machine.
Chang, Shu C; Patton, Kristen K; Robinson, Melissa R; Poole, Jeanne E; Prutkin, Jordan M
2018-02-24
The subcutaneous implantable cardioverter-defibrillator (S-ICD) requires preimplant screening to ensure appropriate sensing and reduce risk of inappropriate shocks. Screening can be performed using either an ICD programmer or a 12-lead electrocardiogram (ECG) machine. It is unclear whether differences in signal filtering and digital sampling change the screening success rate. Subjects were recruited if they had a transvenous single-lead ICD without pacing requirements or were candidates for a new ICD. Screening was performed using both a Boston Scientific ZOOM programmer (Marlborough, MA, USA) and General Electric MAC 5000 ECG machine (Fairfield, CT, USA). A pass was defined as having at least one lead that fit within the screening template in both supine and sitting positions. A total of 69 subjects were included and 27 sets of ECG leads had differing screening results between the two machines (7%). Of these sets, 22 (81%) passed using the ECG machine but failed using the programmer and five (19%) passed using the ECG machine but failed using the programmer (P < 0.001). Four subjects (6%) passed screening using the ECG machine but failed using the programmer. No subject passed screening with the programmer but failed with the ECG machine. There can be occasional disagreement in S-ICD patient screening between an ICD programmer and ECG machine, all of whom passed with the ECG machine but failed using the programmer. On a per lead basis, the ECG machine passes more subjects. It is unknown what the inappropriate shock rate would be if an S-ICD was implanted. Clinical judgment should be used in borderline cases. © 2018 Wiley Periodicals, Inc.
Topic categorisation of statements in suicide notes with integrated rules and machine learning.
Kovačević, Aleksandar; Dehghan, Azad; Keane, John A; Nenadic, Goran
2012-01-01
We describe and evaluate an automated approach used as part of the i2b2 2011 challenge to identify and categorise statements in suicide notes into one of 15 topics, including Love, Guilt, Thankfulness, Hopelessness and Instructions. The approach combines a set of lexico-syntactic rules with a set of models derived by machine learning from a training dataset. The machine learning models rely on named entities, lexical, lexico-semantic and presentation features, as well as the rules that are applicable to a given statement. On a testing set of 300 suicide notes, the approach showed the overall best micro F-measure of up to 53.36%. The best precision achieved was 67.17% when only rules are used, whereas best recall of 50.57% was with integrated rules and machine learning. While some topics (eg, Sorrow, Anger, Blame) prove challenging, the performance for relatively frequent (eg, Love) and well-scoped categories (eg, Thankfulness) was comparatively higher (precision between 68% and 79%), suggesting that automated text mining approaches can be effective in topic categorisation of suicide notes.
Micro-optical fabrication by ultraprecision diamond machining and precision molding
NASA Astrophysics Data System (ADS)
Li, Hui; Li, Likai; Naples, Neil J.; Roblee, Jeffrey W.; Yi, Allen Y.
2017-06-01
Ultraprecision diamond machining and high volume molding for affordable high precision high performance optical elements are becoming a viable process in optical industry for low cost high quality microoptical component manufacturing. In this process, first high precision microoptical molds are fabricated using ultraprecision single point diamond machining followed by high volume production methods such as compression or injection molding. In the last two decades, there have been steady improvements in ultraprecision machine design and performance, particularly with the introduction of both slow tool and fast tool servo. Today optical molds, including freeform surfaces and microlens arrays, are routinely diamond machined to final finish without post machining polishing. For consumers, compression molding or injection molding provide efficient and high quality optics at extremely low cost. In this paper, first ultraprecision machine design and machining processes such as slow tool and fast too servo are described then both compression molding and injection molding of polymer optics are discussed. To implement precision optical manufacturing by molding, numerical modeling can be included in the future as a critical part of the manufacturing process to ensure high product quality.
Development of testing machine for tunnel inspection using multi-rotor UAV
NASA Astrophysics Data System (ADS)
Iwamoto, Tatsuya; Enaka, Tomoya; Tada, Keijirou
2017-05-01
Many concrete structures are deteriorating to dangerous levels throughout Japan. These concrete structures need to be inspected regularly to be sure that they are safe enough to be used. The inspection method for these concrete structures is typically the impact acoustic method. In the impact acoustic method, the worker taps the surface of the concrete with a hammer. Thus, it is necessary to set up scaffolding to access tunnel walls for inspection. Alternatively, aerial work platforms can be used. However, setting up scaffolding and aerial work platforms is not economical with regard to time or money. Therefore, we developed a testing machine using a multirotor UAV for tunnel inspection. This test machine flies by a plurality of rotors, and it is pushed along a concrete wall and moved by using rubber crawlers. The impact acoustic method is used in this testing machine. This testing machine has a hammer to make an impact, and a microphone to acquire the impact sound. The impact sound is converted into an electrical signal and is wirelessly transmitted to the computer. At the same time, the position of the testing machine is measured by image processing using a camera. The weight and dimensions of the testing machine are approximately 1.25 kg and 500 mm by 500 mm by 250 mm, respectively.
Quantitative assessment of the enamel machinability in tooth preparation with dental diamond burs.
Song, Xiao-Fei; Jin, Chen-Xin; Yin, Ling
2015-01-01
Enamel cutting using dental handpieces is a critical process in tooth preparation for dental restorations and treatment but the machinability of enamel is poorly understood. This paper reports on the first quantitative assessment of the enamel machinability using computer-assisted numerical control, high-speed data acquisition, and force sensing systems. The enamel machinability in terms of cutting forces, force ratio, cutting torque, cutting speed and specific cutting energy were characterized in relation to enamel surface orientation, specific material removal rate and diamond bur grit size. The results show that enamel surface orientation, specific material removal rate and diamond bur grit size critically affected the enamel cutting capability. Cutting buccal/lingual surfaces resulted in significantly higher tangential and normal forces, torques and specific energy (p<0.05) but lower cutting speeds than occlusal surfaces (p<0.05). Increasing material removal rate for high cutting efficiencies using coarse burs yielded remarkable rises in cutting forces and torque (p<0.05) but significant reductions in cutting speed and specific cutting energy (p<0.05). In particular, great variations in cutting forces, torques and specific energy were observed at the specific material removal rate of 3mm(3)/min/mm using coarse burs, indicating the cutting limit. This work provides fundamental data and the scientific understanding of the enamel machinability for clinical dental practice. Copyright © 2014 Elsevier Ltd. All rights reserved.
Causal inference in economics and marketing.
Varian, Hal R
2016-07-05
This is an elementary introduction to causal inference in economics written for readers familiar with machine learning methods. The critical step in any causal analysis is estimating the counterfactual-a prediction of what would have happened in the absence of the treatment. The powerful techniques used in machine learning may be useful for developing better estimates of the counterfactual, potentially improving causal inference.
ERIC Educational Resources Information Center
Zhang, Mo; Chen, Jing; Ruan, Chunyi
2016-01-01
Successful detection of unusual responses is critical for using machine scoring in the assessment context. This study evaluated the utility of approaches to detecting unusual responses in automated essay scoring. Two research questions were pursued. One question concerned the performance of various prescreening advisory flags, and the other…
Causal inference in economics and marketing
Varian, Hal R.
2016-01-01
This is an elementary introduction to causal inference in economics written for readers familiar with machine learning methods. The critical step in any causal analysis is estimating the counterfactual—a prediction of what would have happened in the absence of the treatment. The powerful techniques used in machine learning may be useful for developing better estimates of the counterfactual, potentially improving causal inference. PMID:27382144
NASA Astrophysics Data System (ADS)
Omega, Dousmaris; Andika, Aditya
2017-12-01
This paper discusses the results of a research conducted on the production process of an Indonesian pharmaceutical company. The company is experiencing low performance in the Overall Equipment Effectiveness (OEE) metric. The OEE of the company machines are below world class standard. The machine that has the lowest OEE is the filler machine. Through observation and analysis, it is found that the cleaning process of the filler machine consumes significant amount of time. The long duration of the cleaning process happens because there is no structured division of jobs between cleaning operators, differences in operators’ ability, and operators’ inability in utilizing available cleaning equipment. The company needs to improve the cleaning process. Therefore, Critical Path Method (CPM) analysis is conducted to find out what activities are critical in order to shorten and simplify the cleaning process in the division of tasks. Afterwards, The Maynard Operation and Sequence Technique (MOST) method is used to reduce ineffective movement and specify the cleaning process standard time. From CPM and MOST, it is obtained the shortest time of the cleaning process is 1 hour 28 minutes and the standard time is 1 hour 38.826 minutes.
Machine-Learning Algorithms to Code Public Health Spending Accounts
Leider, Jonathon P.; Resnick, Beth A.; Alfonso, Y. Natalia; Bishai, David
2017-01-01
Objectives: Government public health expenditure data sets require time- and labor-intensive manipulation to summarize results that public health policy makers can use. Our objective was to compare the performances of machine-learning algorithms with manual classification of public health expenditures to determine if machines could provide a faster, cheaper alternative to manual classification. Methods: We used machine-learning algorithms to replicate the process of manually classifying state public health expenditures, using the standardized public health spending categories from the Foundational Public Health Services model and a large data set from the US Census Bureau. We obtained a data set of 1.9 million individual expenditure items from 2000 to 2013. We collapsed these data into 147 280 summary expenditure records, and we followed a standardized method of manually classifying each expenditure record as public health, maybe public health, or not public health. We then trained 9 machine-learning algorithms to replicate the manual process. We calculated recall, precision, and coverage rates to measure the performance of individual and ensembled algorithms. Results: Compared with manual classification, the machine-learning random forests algorithm produced 84% recall and 91% precision. With algorithm ensembling, we achieved our target criterion of 90% recall by using a consensus ensemble of ≥6 algorithms while still retaining 93% coverage, leaving only 7% of the summary expenditure records unclassified. Conclusions: Machine learning can be a time- and cost-saving tool for estimating public health spending in the United States. It can be used with standardized public health spending categories based on the Foundational Public Health Services model to help parse public health expenditure information from other types of health-related spending, provide data that are more comparable across public health organizations, and evaluate the impact of evidence-based public health resource allocation. PMID:28363034
Machine-Learning Algorithms to Code Public Health Spending Accounts.
Brady, Eoghan S; Leider, Jonathon P; Resnick, Beth A; Alfonso, Y Natalia; Bishai, David
Government public health expenditure data sets require time- and labor-intensive manipulation to summarize results that public health policy makers can use. Our objective was to compare the performances of machine-learning algorithms with manual classification of public health expenditures to determine if machines could provide a faster, cheaper alternative to manual classification. We used machine-learning algorithms to replicate the process of manually classifying state public health expenditures, using the standardized public health spending categories from the Foundational Public Health Services model and a large data set from the US Census Bureau. We obtained a data set of 1.9 million individual expenditure items from 2000 to 2013. We collapsed these data into 147 280 summary expenditure records, and we followed a standardized method of manually classifying each expenditure record as public health, maybe public health, or not public health. We then trained 9 machine-learning algorithms to replicate the manual process. We calculated recall, precision, and coverage rates to measure the performance of individual and ensembled algorithms. Compared with manual classification, the machine-learning random forests algorithm produced 84% recall and 91% precision. With algorithm ensembling, we achieved our target criterion of 90% recall by using a consensus ensemble of ≥6 algorithms while still retaining 93% coverage, leaving only 7% of the summary expenditure records unclassified. Machine learning can be a time- and cost-saving tool for estimating public health spending in the United States. It can be used with standardized public health spending categories based on the Foundational Public Health Services model to help parse public health expenditure information from other types of health-related spending, provide data that are more comparable across public health organizations, and evaluate the impact of evidence-based public health resource allocation.
An experimental result of estimating an application volume by machine learning techniques.
Hasegawa, Tatsuhito; Koshino, Makoto; Kimura, Haruhiko
2015-01-01
In this study, we improved the usability of smartphones by automating a user's operations. We developed an intelligent system using machine learning techniques that periodically detects a user's context on a smartphone. We selected the Android operating system because it has the largest market share and highest flexibility of its development environment. In this paper, we describe an application that automatically adjusts application volume. Adjusting the volume can be easily forgotten because users need to push the volume buttons to alter the volume depending on the given situation. Therefore, we developed an application that automatically adjusts the volume based on learned user settings. Application volume can be set differently from ringtone volume on Android devices, and these volume settings are associated with each specific application including games. Our application records a user's location, the volume setting, the foreground application name and other such attributes as learning data, thereby estimating whether the volume should be adjusted using machine learning techniques via Weka.
Electrical contact tool set station
Byers, M.E.
1988-02-22
An apparatus is provided for the precise setting to zero of electrically conductive cutting tools used in the machining of work pieces. An electrically conductive cylindrical pin, tapered at one end to a small flat, rests in a vee-shaped channel in a base so that its longitudinal axis is parallel to the longitudinal axis of the machine's spindle. Electronic apparatus is connected between the cylindrical pin and the electrically conductive cutting tool to produce a detectable signal when contact between tool and pin is made. The axes of the machine are set to zero by contact between the cutting tool and the sides, end or top of the cylindrical pin. Upon contact, an electrical circuit is completed, and the detectable signal is produced. The tool can then be set to zero for that axis. Should the tool contact the cylindrical pin with too much force, the cylindrical pin would be harmlessly dislodged from the vee-shaped channel, preventing damage either to the cutting tool or the cylindrical pin. 5 figs.
Spin dynamics modeling in the AGS based on a stepwise ray-tracing method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dutheil, Yann
The AGS provides a polarized proton beam to RHIC. The beam is accelerated in the AGS from Gγ= 4.5 to Gγ = 45.5 and the polarization transmission is critical to the RHIC spin program. In the recent years, various systems were implemented to improve the AGS polarization transmission. These upgrades include the double partial snakes configuration and the tune jumps system. However, 100% polarization transmission through the AGS acceleration cycle is not yet reached. The current efficiency of the polarization transmission is estimated to be around 85% in typical running conditions. Understanding the sources of depolarization in the AGS ismore » critical to improve the AGS polarized proton performances. The complexity of beam and spin dynamics, which is in part due to the specialized Siberian snake magnets, drove a strong interest for original methods of simulations. For that, the Zgoubi code, capable of direct particle and spin tracking through field maps, was here used to model the AGS. A model of the AGS using the Zgoubi code was developed and interfaced with the current system through a simple command: the AgsFromSnapRampCmd. Interfacing with the machine control system allows for fast modelization using actual machine parameters. Those developments allowed the model to realistically reproduce the optics of the AGS along the acceleration ramp. Additional developments on the Zgoubi code, as well as on post-processing and pre-processing tools, granted long term multiturn beam tracking capabilities: the tracking of realistic beams along the complete AGS acceleration cycle. Beam multiturn tracking simulations in the AGS, using realistic beam and machine parameters, provided a unique insight into the mechanisms behind the evolution of the beam emittance and polarization during the acceleration cycle. Post-processing softwares were developed to allow the representation of the relevant quantities from the Zgoubi simulations data. The Zgoubi simulations proved particularly useful to better understand the polarization losses through horizontal intrinsic spin resonances The Zgoubi model as well as the tools developed were also used for some direct applications. For instance, some beam experiment simulations allowed an accurate estimation of the expected polarization gains from machine changes. In particular, the simulations that involved involved the tune jumps system provided an accurate estimation of polarization gains and the optimum settings that would improve the performance of the AGS.« less
Machine learning for the automatic detection of anomalous events
NASA Astrophysics Data System (ADS)
Fisher, Wendy D.
In this dissertation, we describe our research contributions for a novel approach to the application of machine learning for the automatic detection of anomalous events. We work in two different domains to ensure a robust data-driven workflow that could be generalized for monitoring other systems. Specifically, in our first domain, we begin with the identification of internal erosion events in earth dams and levees (EDLs) using geophysical data collected from sensors located on the surface of the levee. As EDLs across the globe reach the end of their design lives, effectively monitoring their structural integrity is of critical importance. The second domain of interest is related to mobile telecommunications, where we investigate a system for automatically detecting non-commercial base station routers (BSRs) operating in protected frequency space. The presence of non-commercial BSRs can disrupt the connectivity of end users, cause service issues for the commercial providers, and introduce significant security concerns. We provide our motivation, experimentation, and results from investigating a generalized novel data-driven workflow using several machine learning techniques. In Chapter 2, we present results from our performance study that uses popular unsupervised clustering algorithms to gain insights to our real-world problems, and evaluate our results using internal and external validation techniques. Using EDL passive seismic data from an experimental laboratory earth embankment, results consistently show a clear separation of events from non-events in four of the five clustering algorithms applied. Chapter 3 uses a multivariate Gaussian machine learning model to identify anomalies in our experimental data sets. For the EDL work, we used experimental data from two different laboratory earth embankments. Additionally, we explore five wavelet transform methods for signal denoising. The best performance is achieved with the Haar wavelets. We achieve up to 97.3% overall accuracy and less than 1.4% false negatives in anomaly detection. In Chapter 4, we research using two-class and one-class support vector machines (SVMs) for an effective anomaly detection system. We again use the two different EDL data sets from experimental laboratory earth embankments (each having approximately 80% normal and 20% anomalies) to ensure our workflow is robust enough to work with multiple data sets and different types of anomalous events (e.g., cracks and piping). We apply Haar wavelet-denoising techniques and extract nine spectral features from decomposed segments of the time series data. The two-class SVM with 10-fold cross validation achieved over 94% overall accuracy and 96% F1-score. Our approach provides a means for automatically identifying anomalous events using various machine learning techniques. Detecting internal erosion events in aging EDLs, earlier than is currently possible, can allow more time to prevent or mitigate catastrophic failures. Results show that we can successfully separate normal from anomalous data observations in passive seismic data, and provide a step towards techniques for continuous real-time monitoring of EDL health. Our lightweight non-commercial BSR detection system also has promise in separating commercial from non-commercial BSR scans without the need for prior geographic location information, extensive time-lapse surveys, or a database of known commercial carriers. (Abstract shortened by ProQuest.).
The Integration of an API619 Screw Compressor Package into the Industrial Internet of Things
NASA Astrophysics Data System (ADS)
Milligan, W. J.; Poli, G.; Harrison, D. K.
2017-08-01
The Industrial Internet of Things (IIoT) is the industrial subset of the Internet of Things (IoT). IIoT incorporates big data technology, harnessing the instrumentation data, machine to machine communication and automation technologies that have existed in industrial settings for years. As industry in general trends towards the IIoT and as the screw compressor packages developed by Howden Compressors are designed with a minimum design life of 25 years, it is imperative this technology is embedded immediately. This paper provides the reader with a description on the Industrial Internet of Things before moving onto describing the scope of the problem for an organisation like Howden Compressors who deploy multiple compressor technologies across multiple locations and focuses on the critical measurements particular to high specification screw compressor packages. A brief analysis of how this differs from high volume package manufacturers deploying similar systems is offered. Then follows a description on how the measured information gets from the tip of the instrument in the process pipework or drive train through the different layers, with a description of each layer, into the final presentation layer. The functions available within the presentation layer are taken in turn and the benefits analysed with specific focus on efficiency and availability. The paper concludes with how packagers adopting the IIoT can not only optimise their package but by utilising the machine learning technology and pattern detection applications can adopt completely new business models.
The image-guided surgery toolkit IGSTK: an open source C++ software toolkit.
Enquobahrie, Andinet; Cheng, Patrick; Gary, Kevin; Ibanez, Luis; Gobbi, David; Lindseth, Frank; Yaniv, Ziv; Aylward, Stephen; Jomier, Julien; Cleary, Kevin
2007-11-01
This paper presents an overview of the image-guided surgery toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. It is intended for fast prototyping and development of image-guided surgery applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team has adopted lightweight software processes that emphasizes safety and robustness while, at the same time, supporting geographically separated developers. A software process that is philosophically similar to agile software methods was adopted emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state machine software design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features that are governed by state machines. The state machine ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depends on a strong user community, the IGSTK team is following several key strategies to build an active user community. These include maintaining a users and developers' mailing list, providing documentation (application programming interface reference document and book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences.
Open source machine-learning algorithms for the prediction of optimal cancer drug therapies.
Huang, Cai; Mezencev, Roman; McDonald, John F; Vannberg, Fredrik
2017-01-01
Precision medicine is a rapidly growing area of modern medical science and open source machine-learning codes promise to be a critical component for the successful development of standardized and automated analysis of patient data. One important goal of precision cancer medicine is the accurate prediction of optimal drug therapies from the genomic profiles of individual patient tumors. We introduce here an open source software platform that employs a highly versatile support vector machine (SVM) algorithm combined with a standard recursive feature elimination (RFE) approach to predict personalized drug responses from gene expression profiles. Drug specific models were built using gene expression and drug response data from the National Cancer Institute panel of 60 human cancer cell lines (NCI-60). The models are highly accurate in predicting the drug responsiveness of a variety of cancer cell lines including those comprising the recent NCI-DREAM Challenge. We demonstrate that predictive accuracy is optimized when the learning dataset utilizes all probe-set expression values from a diversity of cancer cell types without pre-filtering for genes generally considered to be "drivers" of cancer onset/progression. Application of our models to publically available ovarian cancer (OC) patient gene expression datasets generated predictions consistent with observed responses previously reported in the literature. By making our algorithm "open source", we hope to facilitate its testing in a variety of cancer types and contexts leading to community-driven improvements and refinements in subsequent applications.
Condition monitoring of distributed systems using two-stage Bayesian inference data fusion
NASA Astrophysics Data System (ADS)
Jaramillo, Víctor H.; Ottewill, James R.; Dudek, Rafał; Lepiarczyk, Dariusz; Pawlik, Paweł
2017-03-01
In industrial practice, condition monitoring is typically applied to critical machinery. A particular piece of machinery may have its own condition monitoring system that allows the health condition of said piece of equipment to be assessed independently of any connected assets. However, industrial machines are typically complex sets of components that continuously interact with one another. In some cases, dynamics resulting from the inception and development of a fault can propagate between individual components. For example, a fault in one component may lead to an increased vibration level in both the faulty component, as well as in connected healthy components. In such cases, a condition monitoring system focusing on a specific element in a connected set of components may either incorrectly indicate a fault, or conversely, a fault might be missed or masked due to the interaction of a piece of equipment with neighboring machines. In such cases, a more holistic condition monitoring approach that can not only account for such interactions, but utilize them to provide a more complete and definitive diagnostic picture of the health of the machinery is highly desirable. In this paper, a Two-Stage Bayesian Inference approach allowing data from separate condition monitoring systems to be combined is presented. Data from distributed condition monitoring systems are combined in two stages, the first data fusion occurring at a local, or component, level, and the second fusion combining data at a global level. Data obtained from an experimental rig consisting of an electric motor, two gearboxes, and a load, operating under a range of different fault conditions is used to illustrate the efficacy of the method at pinpointing the root cause of a problem. The obtained results suggest that the approach is adept at refining the diagnostic information obtained from each of the different machine components monitored, therefore improving the reliability of the health assessment of each individual element, as well as the entire piece of machinery.
On the use of feature selection to improve the detection of sea oil spills in SAR images
NASA Astrophysics Data System (ADS)
Mera, David; Bolon-Canedo, Veronica; Cotos, J. M.; Alonso-Betanzos, Amparo
2017-03-01
Fast and effective oil spill detection systems are crucial to ensure a proper response to environmental emergencies caused by hydrocarbon pollution on the ocean's surface. Typically, these systems uncover not only oil spills, but also a high number of look-alikes. The feature extraction is a critical and computationally intensive phase where each detected dark spot is independently examined. Traditionally, detection systems use an arbitrary set of features to discriminate between oil spills and look-alikes phenomena. However, Feature Selection (FS) methods based on Machine Learning (ML) have proved to be very useful in real domains for enhancing the generalization capabilities of the classifiers, while discarding the existing irrelevant features. In this work, we present a generic and systematic approach, based on FS methods, for choosing a concise and relevant set of features to improve the oil spill detection systems. We have compared five FS methods: Correlation-based feature selection (CFS), Consistency-based filter, Information Gain, ReliefF and Recursive Feature Elimination for Support Vector Machine (SVM-RFE). They were applied on a 141-input vector composed of features from a collection of outstanding studies. Selected features were validated via a Support Vector Machine (SVM) classifier and the results were compared with previous works. Test experiments revealed that the classifier trained with the 6-input feature vector proposed by SVM-RFE achieved the best accuracy and Cohen's kappa coefficient (87.1% and 74.06% respectively). This is a smaller feature combination with similar or even better classification accuracy than previous works. The presented finding allows to speed up the feature extraction phase without reducing the classifier accuracy. Experiments also confirmed the significance of the geometrical features since 75.0% of the different features selected by the applied FS methods as well as 66.67% of the proposed 6-input feature vector belong to this category.
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
This document is intended to help education and training institutions deliver the Machine Tool Advanced Skills Technology (MAST) curriculum to a variety of individuals and organizations. MAST consists of industry-specific skill standards and model curricula for 15 occupational specialty areas within the U.S. machine tool and metals-related…
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
This document is intended to help education and training institutions deliver the Machine Tool Advanced Skills Technology (MAST) curriculum to a variety of individuals and organizations. MAST consists of industry-specific skill standards and model curricula for 15 occupational specialty areas within the U.S. machine tool and metals-related…
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
The Machine Tool Advanced Skills Technology (MAST) consortium was formed to address the shortage of skilled workers for the machine tools and metals-related industries. Featuring six of the nation's leading advanced technology centers, the MAST consortium developed, tested, and disseminated industry-specific skill standards and model curricula for…
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
This volume developed by the Machine Tool Advanced Skill Technology (MAST) program contains key administrative documents and provides additional sources for machine tool and precision manufacturing information and important points of contact in the industry. The document contains the following sections: a foreword; grant award letter; timeline for…
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
This document is intended to help education and training institutions deliver the Machine Tool Advanced Skills Technology (MAST) curriculum to a variety of individuals and organizations. MAST consists of industry-specific skill standards and model curricula for 15 occupational speciality areas within the U.S. machine tool and metals-related…
Embedded control system for computerized franking machine
NASA Astrophysics Data System (ADS)
Shi, W. M.; Zhang, L. B.; Xu, F.; Zhan, H. W.
2007-12-01
This paper presents a novel control system for franking machine. A methodology for operating a franking machine using the functional controls consisting of connection, configuration and franking electromechanical drive is studied. A set of enabling technologies to synthesize postage management software architectures driven microprocessor-based embedded systems is proposed. The cryptographic algorithm that calculates mail items is analyzed to enhance the postal indicia accountability and security. The study indicated that the franking machine is reliability, performance and flexibility in printing mail items.
Prototype Automated Equipment to Perform Poising and Beat Rate Operations on the M577 MTSQ Fuze.
1978-09-30
Regulation Machine which sets the M577 Fuze Timer beat rate and the Automatic Poising Machine which J dynamically balances the Timer balance wheel...in trouble shooting., The Automatic Poising Machine Figure 3 which inspects and corrects the dynamic I balance of the Balance Wheel Assembly was...machine is intimately related to the fastening method of the wire to the Timer at one end and the Balance Wheel at the other, a review of the history
Learning Activity Packets for Milling Machines. Unit II--Horizontal Milling Machines.
ERIC Educational Resources Information Center
Oklahoma State Board of Vocational and Technical Education, Stillwater. Curriculum and Instructional Materials Center.
This learning activity packet (LAP) outlines the study activities and performance tasks covered in a related curriculum guide on milling machines. The course of study in this LAP is intended to help students learn to set up and operate a horizontal mill. Tasks addressed in the LAP include mounting style "A" or "B" arbors and adjusting arbor…
Phacoemulsification tip vacuum pressure: Comparison of 4 devices.
Payne, Marielle; Georgescu, Dan; Waite, Aaron N; Olson, Randall J
2006-08-01
To determine the vacuum pressure generated by 4 phacoemulsification devices measured at the phacoemulsification tip. University ophthalmology department. The effective vacuum pressures generated by the Sovereign (AMO), Millennium (Bausch & Lomb), Legacy AdvanTec (Alcon Laboratories), and Infiniti (Alcon Laboratories) phacoemulsification machines were measured with a device that isolated the phacoemulsification tip in a chamber connected to a pressure gauge. The 4 machines were tested at multiple vacuum limit settings, and the values were recorded after the foot pedal was fully depressed and the pressure had stabilized. The AdvanTec and Infiniti machines were tested with and without occlusion of the Aspiration Bypass System (ABS) side port (Alcon Laboratories). The Millennium machine was tested using venturi and peristaltic pumps. The machines generated pressures close to the expected at maximum vacuum settings between 100 mm Hg and 500 mm Hg with few intermachine variations. There was no significant difference between pressures generated using 19- or 20-gauge tips (Millennium and Sovereign). The addition of an ABS side port decreased vacuum by a mean of 12.1% (P < .0001). Although there were some variations in vacuum pressures among phacoemulsification machines, particularly when an aspiration bypass tip was used, these discrepancies are probably not clinically significant.
NASA Astrophysics Data System (ADS)
Protim Das, Partha; Gupta, P.; Das, S.; Pradhan, B. B.; Chakraborty, S.
2018-01-01
Maraging steel (MDN 300) find its application in many industries as it exhibits high hardness which are very difficult to machine material. Electro discharge machining (EDM) is an extensively popular machining process which can be used in machining of such materials. Optimization of response parameters are essential for effective machining of these materials. Past researchers have already used Taguchi for obtaining the optimal responses of EDM process for this material with responses such as material removal rate (MRR), tool wear rate (TWR), relative wear ratio (RWR), and surface roughness (SR) considering discharge current, pulse on time, pulse off time, arc gap, and duty cycle as process parameters. In this paper, grey relation analysis (GRA) with fuzzy logic is applied to this multi objective optimization problem to check the responses by an implementation of the derived parametric setting. It was found that the parametric setting derived by the proposed method results in better a response than those reported by the past researchers. Obtained results are also verified using the technique for order of preference by similarity to ideal solution (TOPSIS). The predicted result also shows that there is a significant improvement in comparison to the results of past researchers.
AFM surface imaging of AISI D2 tool steel machined by the EDM process
NASA Astrophysics Data System (ADS)
Guu, Y. H.
2005-04-01
The surface morphology, surface roughness and micro-crack of AISI D2 tool steel machined by the electrical discharge machining (EDM) process were analyzed by means of the atomic force microscopy (AFM) technique. Experimental results indicate that the surface texture after EDM is determined by the discharge energy during processing. An excellent machined finish can be obtained by setting the machine parameters at a low pulse energy. The surface roughness and the depth of the micro-cracks were proportional to the power input. Furthermore, the AFM application yielded information about the depth of the micro-cracks is particularly important in the post treatment of AISI D2 tool steel machined by EDM.
Nässelqvist, Mattias; Gustavsson, Rolf; Aidanpää, Jan-Olov
2013-07-01
It is important to monitor the radial loads in hydropower units in order to protect the machine from harmful radial loads. Existing recommendations in the standards regarding the radial movements of the shaft and bearing housing in hydropower units, ISO-7919-5 (International Organization for Standardization, 2005, "ISO 7919-5: Mechanical Vibration-Evaluation of Machine Vibration by Measurements on Rotating Shafts-Part 5: Machine Sets in Hydraulic Power Generating and Pumping Plants," Geneva, Switzerland) and ISO-10816-5 (International Organization for Standardization, 2000, "ISO 10816-5: Mechanical Vibration-Evaluation of Machine Vibration by Measurements on Non-Rotating Parts-Part 5: Machine Sets in Hydraulic Power Generating and Pumping Plants," Geneva, Switzerland), have alarm levels based on statistical data and do not consider the mechanical properties of the machine. The synchronous speed of the unit determines the maximum recommended shaft displacement and housing acceleration, according to these standards. This paper presents a methodology for the alarm and trip levels based on the design criteria of the hydropower unit and the measured radial loads in the machine during operation. When a hydropower unit is designed, one of its design criteria is to withstand certain loads spectra without the occurrence of fatigue in the mechanical components. These calculated limits for fatigue are used to set limits for the maximum radial loads allowed in the machine before it shuts down in order to protect itself from damage due to high radial loads. Radial loads in hydropower units are caused by unbalance, shape deviations, dynamic flow properties in the turbine, etc. Standards exist for balancing and manufacturers (and power plant owners) have recommendations for maximum allowed shape deviations in generators. These standards and recommendations determine which loads, at a maximum, should be allowed before an alarm is sent that the machine needs maintenance. The radial bearing load can be determined using load cells, bearing properties multiplied by shaft displacement, or bearing bracket stiffness multiplied by housing compression or movement. Different load measurement methods should be used depending on the design of the machine and accuracy demands in the load measurement. The methodology presented in the paper is applied to a 40 MW hydropower unit; suggestions are presented for the alarm and trip levels for the machine based on the mechanical properties and radial loads.
Scalable learning method for feedforward neural networks using minimal-enclosing-ball approximation.
Wang, Jun; Deng, Zhaohong; Luo, Xiaoqing; Jiang, Yizhang; Wang, Shitong
2016-06-01
Training feedforward neural networks (FNNs) is one of the most critical issues in FNNs studies. However, most FNNs training methods cannot be directly applied for very large datasets because they have high computational and space complexity. In order to tackle this problem, the CCMEB (Center-Constrained Minimum Enclosing Ball) problem in hidden feature space of FNN is discussed and a novel learning algorithm called HFSR-GCVM (hidden-feature-space regression using generalized core vector machine) is developed accordingly. In HFSR-GCVM, a novel learning criterion using L2-norm penalty-based ε-insensitive function is formulated and the parameters in the hidden nodes are generated randomly independent of the training sets. Moreover, the learning of parameters in its output layer is proved equivalent to a special CCMEB problem in FNN hidden feature space. As most CCMEB approximation based machine learning algorithms, the proposed HFSR-GCVM training algorithm has the following merits: The maximal training time of the HFSR-GCVM training is linear with the size of training datasets and the maximal space consumption is independent of the size of training datasets. The experiments on regression tasks confirm the above conclusions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Bayesian kernel machine regression for estimating the health effects of multi-pollutant mixtures.
Bobb, Jennifer F; Valeri, Linda; Claus Henn, Birgit; Christiani, David C; Wright, Robert O; Mazumdar, Maitreyi; Godleski, John J; Coull, Brent A
2015-07-01
Because humans are invariably exposed to complex chemical mixtures, estimating the health effects of multi-pollutant exposures is of critical concern in environmental epidemiology, and to regulatory agencies such as the U.S. Environmental Protection Agency. However, most health effects studies focus on single agents or consider simple two-way interaction models, in part because we lack the statistical methodology to more realistically capture the complexity of mixed exposures. We introduce Bayesian kernel machine regression (BKMR) as a new approach to study mixtures, in which the health outcome is regressed on a flexible function of the mixture (e.g. air pollution or toxic waste) components that is specified using a kernel function. In high-dimensional settings, a novel hierarchical variable selection approach is incorporated to identify important mixture components and account for the correlated structure of the mixture. Simulation studies demonstrate the success of BKMR in estimating the exposure-response function and in identifying the individual components of the mixture responsible for health effects. We demonstrate the features of the method through epidemiology and toxicology applications. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Wang, Kun; Bhandari, Vineet; Giuliano, John S.; O′Hern, Corey S.; Shattuck, Mark D.; Kirby, Michael
2014-01-01
Severe pediatric sepsis continues to be associated with high mortality rates in children. Thus, an important area of biomedical research is to identify biomarkers that can classify sepsis severity and outcomes. The complex and heterogeneous nature of sepsis makes the prospect of the classification of sepsis severity using a single biomarker less likely. Instead, we employ machine learning techniques to validate the use of a multiple biomarkers scoring system to determine the severity of sepsis in critically ill children. The study was based on clinical data and plasma samples provided by a tertiary care center's Pediatric Intensive Care Unit (PICU) from a group of 45 patients with varying sepsis severity at the time of admission. Canonical Correlation Analysis with the Forward Selection and Random Forests methods identified a particular set of biomarkers that included Angiopoietin-1 (Ang-1), Angiopoietin-2 (Ang-2), and Bicarbonate (HCO) as having the strongest correlations with sepsis severity. The robustness and effectiveness of these biomarkers for classifying sepsis severity were validated by constructing a linear Support Vector Machine diagnostic classifier. We also show that the concentrations of Ang-1, Ang-2, and HCO enable predictions of the time dependence of sepsis severity in children. PMID:25255212
Analysis of a Multiprocessor Guidance Computer. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Maltach, E. G.
1969-01-01
The design of the next generation of spaceborne digital computers is described. It analyzes a possible multiprocessor computer configuration. For the analysis, a set of representative space computing tasks was abstracted from the Lunar Module Guidance Computer programs as executed during the lunar landing, from the Apollo program. This computer performs at this time about 24 concurrent functions, with iteration rates from 10 times per second to once every two seconds. These jobs were tabulated in a machine-independent form, and statistics of the overall job set were obtained. It was concluded, based on a comparison of simulation and Markov results, that the Markov process analysis is accurate in predicting overall trends and in configuration comparisons, but does not provide useful detailed information in specific situations. Using both types of analysis, it was determined that the job scheduling function is a critical one for efficiency of the multiprocessor. It is recommended that research into the area of automatic job scheduling be performed.
Polygraphs: erosion of the privacy right.
Nemeth, C P
1983-01-01
The polygraph is a machine which invades previously private regions in the human being. Its operation is often viewed simplistically and lacking in danger. Such naivete is the subject of this comment. The paper considers the mechanics of polygraph operation, and its theoretical basis; the legal admissibility of the polygraph in a variety of settings, and lastly the impact the polygraph has upon our private lives. Clearly, the polygraph intrudes on the private regions of each individual, and this frightening fact is cause enough to consider the human, social and constitutional implications of its use. The project considers the reliability factor of the polygraph and its questionable use in personnel and business settings; its use in disciplinary procedures and labor arbitration, as well as reviews its place in judicial process and criminal review. Most critically the paper attempts to arrive at a constitutional basis for restrictions on its use in the private sector. Ingenious arguments have been made by opponents of the polygraph, and this paper reviews the substance and content of these constitutional arguments.
Zhao, Jiangsan; Bodner, Gernot; Rewald, Boris
2016-01-01
Phenotyping local crop cultivars is becoming more and more important, as they are an important genetic source for breeding – especially in regard to inherent root system architectures. Machine learning algorithms are promising tools to assist in the analysis of complex data sets; novel approaches are need to apply them on root phenotyping data of mature plants. A greenhouse experiment was conducted in large, sand-filled columns to differentiate 16 European Pisum sativum cultivars based on 36 manually derived root traits. Through combining random forest and support vector machine models, machine learning algorithms were successfully used for unbiased identification of most distinguishing root traits and subsequent pairwise cultivar differentiation. Up to 86% of pea cultivar pairs could be distinguished based on top five important root traits (Timp5) – Timp5 differed widely between cultivar pairs. Selecting top important root traits (Timp) provided a significant improved classification compared to using all available traits or randomly selected trait sets. The most frequent Timp of mature pea cultivars was total surface area of lateral roots originating from tap root segments at 0–5 cm depth. The high classification rate implies that culturing did not lead to a major loss of variability in root system architecture in the studied pea cultivars. Our results illustrate the potential of machine learning approaches for unbiased (root) trait selection and cultivar classification based on rather small, complex phenotypic data sets derived from pot experiments. Powerful statistical approaches are essential to make use of the increasing amount of (root) phenotyping information, integrating the complex trait sets describing crop cultivars. PMID:27999587
NASA Astrophysics Data System (ADS)
Prokhorov, Sergey
2017-10-01
Building industry in a present day going through the hard times. Machine and mechanism exploitation cost, on a field of construction and installation works, takes a substantial part in total building construction expenses. There is a necessity to elaborate high efficient method, which allows not only to increase production, but also to reduce direct costs during machine fleet exploitation, and to increase its energy efficiency. In order to achieve the goal we plan to use modern methods of work production, hi-tech and energy saving machine tools and technologies, and use of optimal mechanization sets. As the optimization criteria there are exploitation prime cost and set efficiency. During actual task-solving process we made a conclusion, which shows that mechanization works, energy audit with production juxtaposition, prime prices and costs for energy resources allow to make complex machine fleet supply, improve ecological level and increase construction and installation work quality.
Modeling of solid-state and excimer laser processes for 3D micromachining
NASA Astrophysics Data System (ADS)
Holmes, Andrew S.; Onischenko, Alexander I.; George, David S.; Pedder, James E.
2005-04-01
An efficient simulation method has recently been developed for multi-pulse ablation processes. This is based on pulse-by-pulse propagation of the machined surface according to one of several phenomenological models for the laser-material interaction. The technique allows quantitative predictions to be made about the surface shapes of complex machined parts, given only a minimal set of input data for parameter calibration. In the case of direct-write machining of polymers or glasses with ns-duration pulses, this data set can typically be limited to the surface profiles of a small number of standard test patterns. The use of phenomenological models for the laser-material interaction, calibrated by experimental feedback, allows fast simulation, and can achieve a high degree of accuracy for certain combinations of material, laser and geometry. In this paper, the capabilities and limitations of the approach are discussed, and recent results are presented for structures machined in SU8 photoresist.
Machine learning methods in chemoinformatics
Mitchell, John B O
2014-01-01
Machine learning algorithms are generally developed in computer science or adjacent disciplines and find their way into chemical modeling by a process of diffusion. Though particular machine learning methods are popular in chemoinformatics and quantitative structure–activity relationships (QSAR), many others exist in the technical literature. This discussion is methods-based and focused on some algorithms that chemoinformatics researchers frequently use. It makes no claim to be exhaustive. We concentrate on methods for supervised learning, predicting the unknown property values of a test set of instances, usually molecules, based on the known values for a training set. Particularly relevant approaches include Artificial Neural Networks, Random Forest, Support Vector Machine, k-Nearest Neighbors and naïve Bayes classifiers. WIREs Comput Mol Sci 2014, 4:468–481. How to cite this article: WIREs Comput Mol Sci 2014, 4:468–481. doi:10.1002/wcms.1183 PMID:25285160
Maeda, Hotaka; Quartiroli, Alessandro; Vos, Paul W; Carr, Lucas J; Mahar, Matthew T
2014-05-01
Libraries are an inherently sedentary environment, but are an understudied setting for sedentary behavior interventions. To investigate the feasibility of incorporating portable pedal machines in a university library to reduce sedentary behaviors. The 11-week intervention targeted students at a university library. Thirteen portable pedal machines were placed in the library. Four forms of prompts (e-mail, library website, advertisement monitors, and poster) encouraging pedal machine use were employed during the first 4 weeks. Pedal machine use was measured via automatic timers on each machine and momentary time sampling. Daily library visits were measured using a gate counter. Individualized data were measured by survey. Data were collected in fall 2012 and analyzed in 2013. Mean (SD) cumulative pedal time per day was 95.5 (66.1) minutes. One or more pedal machines were observed being used 15% of the time (N=589). Pedal machines were used at least once by 7% of students (n=527). Controlled for gate count, no linear change of pedal machine use across days was found (b=-0.1 minutes, p=0.75) and the presence of the prompts did not change daily pedal time (p=0.63). Seven of eight items that assessed attitudes toward the intervention supported intervention feasibility (p<0.05). The unique non-individualized approach of retrofitting a library with pedal machines to reduce sedentary behavior seems feasible, but improvement of its effectiveness is needed. This study could inform future studies aimed at reshaping traditionally sedentary settings to improve public health. Copyright © 2014 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
Modeling Electronic Quantum Transport with Machine Learning
Lopez Bezanilla, Alejandro; von Lilienfeld Toal, Otto A.
2014-06-11
We present a machine learning approach to solve electronic quantum transport equations of one-dimensional nanostructures. The transmission coefficients of disordered systems were computed to provide training and test data sets to the machine. The system’s representation encodes energetic as well as geometrical information to characterize similarities between disordered configurations, while the Euclidean norm is used as a measure of similarity. Errors for out-of-sample predictions systematically decrease with training set size, enabling the accurate and fast prediction of new transmission coefficients. The remarkable performance of our model to capture the complexity of interference phenomena lends further support to its viability inmore » dealing with transport problems of undulatory nature.« less
Hanlon, John A.; Gill, Timothy J.
2001-01-01
Machine tools can be accurately measured and positioned on manufacturing machines within very small tolerances by use of an autocollimator on a 3-axis mount on a manufacturing machine and positioned so as to focus on a reference tooling ball or a machine tool, a digital camera connected to the viewing end of the autocollimator, and a marker and measure generator for receiving digital images from the camera, then displaying or measuring distances between the projection reticle and the reference reticle on the monitoring screen, and relating the distances to the actual position of the autocollimator relative to the reference tooling ball. The images and measurements are used to set the position of the machine tool and to measure the size and shape of the machine tool tip, and examine cutting edge wear. patent
1980-05-31
34 International Journal of Man- Machine Studies , Vol. 9, No. 1, 1977, pp. 1-68. [16] Zimmermann, H. J., Theory and Applications of Fuzzy Sets, Institut...Boston, Inc., Hingham, MA, 1978. [18] Yager, R. R., "Multiple Objective Decision-Making Using Fuzzy Sets," International Journal of Man- Machine Studies ...Professor of Industria Engineering ... iv t TABLE OF CONTENTS page ABSTRACT .. .. . ...... . .... ...... ........ iii LIST OF TABLES
Alternator control for battery charging
Brunstetter, Craig A.; Jaye, John R.; Tallarek, Glen E.; Adams, Joseph B.
2015-07-14
In accordance with an aspect of the present disclosure, an electrical system for an automotive vehicle has an electrical generating machine and a battery. A set point voltage, which sets an output voltage of the electrical generating machine, is set by an electronic control unit (ECU). The ECU selects one of a plurality of control modes for controlling the alternator based on an operating state of the vehicle as determined from vehicle operating parameters. The ECU selects a range for the set point voltage based on the selected control mode and then sets the set point voltage within the range based on feedback parameters for that control mode. In an aspect, the control modes include a trickle charge mode and battery charge current is the feedback parameter and the ECU controls the set point voltage within the range to maintain a predetermined battery charge current.
Methods for the Precise Locating and Forming of Arrays of Curved Features into a Workpiece
Gill, David Dennis; Keeler, Gordon A.; Serkland, Darwin K.; Mukherjee, Sayan D.
2008-10-14
Methods for manufacturing high precision arrays of curved features (e.g. lenses) in the surface of a workpiece are described utilizing orthogonal sets of inter-fitting locating grooves to mate a workpiece to a workpiece holder mounted to the spindle face of a rotating machine tool. The matching inter-fitting groove sets in the workpiece and the chuck allow precisely and non-kinematically indexing the workpiece to locations defined in two orthogonal directions perpendicular to the turning axis of the machine tool. At each location on the workpiece a curved feature can then be on-center machined to create arrays of curved features on the workpiece. The averaging effect of the corresponding sets of inter-fitting grooves provide for precise repeatability in determining, the relative locations of the centers of each of the curved features in an array of curved features.
Method and apparatus for calibrating multi-axis load cells in a dexterous robot
NASA Technical Reports Server (NTRS)
Wampler, II, Charles W. (Inventor); Platt, Jr., Robert J. (Inventor)
2012-01-01
A robotic system includes a dexterous robot having robotic joints, angle sensors adapted for measuring joint angles at a corresponding one of the joints, load cells for measuring a set of strain values imparted to a corresponding one of the load cells during a predetermined pose of the robot, and a host machine. The host machine is electrically connected to the load cells and angle sensors, and receives the joint angle values and strain values during the predetermined pose. The robot presses together mating pairs of load cells to form the poses. The host machine executes an algorithm to process the joint angles and strain values, and from the set of all calibration matrices that minimize error in force balance equations, selects the set of calibration matrices that is closest in a value to a pre-specified value. A method for calibrating the load cells via the algorithm is also provided.
Torque shudder protection device and method
King, Robert D.; De Doncker, Rik W. A. A.; Szczesny, Paul M.
1997-01-01
A torque shudder protection device for an induction machine includes a flux command generator for supplying a steady state flux command and a torque shudder detector for supplying a status including a negative status to indicate a lack of torque shudder and a positive status to indicate a presence of torque shudder. A flux adapter uses the steady state flux command and the status to supply a present flux command identical to the steady state flux command for a negative status and different from the steady state flux command for a positive status. A limiter can receive the present flux command, prevent the present flux command from exceeding a predetermined maximum flux command magnitude, and supply the present flux command to a field oriented controller. After determining a critical electrical excitation frequency at which a torque shudder occurs for the induction machine, a flux adjuster can monitor the electrical excitation frequency of the induction machine and adjust a flux command to prevent the monitored electrical excitation frequency from reaching the critical electrical excitation frequency.
Torque shudder protection device and method
King, R.D.; Doncker, R.W.A.A. De.; Szczesny, P.M.
1997-03-11
A torque shudder protection device for an induction machine includes a flux command generator for supplying a steady state flux command and a torque shudder detector for supplying a status including a negative status to indicate a lack of torque shudder and a positive status to indicate a presence of torque shudder. A flux adapter uses the steady state flux command and the status to supply a present flux command identical to the steady state flux command for a negative status and different from the steady state flux command for a positive status. A limiter can receive the present flux command, prevent the present flux command from exceeding a predetermined maximum flux command magnitude, and supply the present flux command to a field oriented controller. After determining a critical electrical excitation frequency at which a torque shudder occurs for the induction machine, a flux adjuster can monitor the electrical excitation frequency of the induction machine and adjust a flux command to prevent the monitored electrical excitation frequency from reaching the critical electrical excitation frequency. 5 figs.
[Philosophical perspectives about the use of technology in critical care nursing].
Schwonke, Camila Rose G Barcelos; Lunardi Filho, Wilson Danilo; Lunardi, Valéria Lerch; Santos, Silvana Sidney Costa; Barlem, Edison Luiz Devos
2011-01-01
The context of health assistance has been influenced by changes which are produced in the dimension of technology. They have triggered a lot of inquietude and questioning regarding benefits, risks, and relations constructed among workers and sick, and the use of machines as indispensable tools for care. This paper aims at reflecting on the use of technology in nursing care given to the critical sick in the intensive care unit. It is expected that this reflection can minimize doubts which permeate technological environments, such as the intensive care unit, and the conceptions of nursing care since it involves the use of machines and equipment which provide advanced life support in this field of health assistance.
Status of research and development in coordinate-measurement technology
NASA Astrophysics Data System (ADS)
Dich, L. Z.; Latyev, S. M.
1994-09-01
This paper discusses problems involved in developing and operating coordinate-measuring machines. The status of this area of precision instrumentation is analyzed. These problems are made critical not only by the requirements of the machine-tool industry but also by those of the microelectronics industry, both of which use coordinate tables, step-up gears, and other equipment in which precise coordinate measurements are necessary.
Machine learning of fault characteristics from rocket engine simulation data
NASA Technical Reports Server (NTRS)
Ke, Min; Ali, Moonis
1990-01-01
Transformation of data into knowledge through conceptual induction has been the focus of our research described in this paper. We have developed a Machine Learning System (MLS) to analyze the rocket engine simulation data. MLS can provide to its users fault analysis, characteristics, and conceptual descriptions of faults, and the relationships of attributes and sensors. All the results are critically important in identifying faults.
Yoo, Tae Keun; Kim, Sung Kean; Kim, Deok Won; Choi, Joon Yul; Lee, Wan Hyung; Oh, Ein; Park, Eun-Cheol
2013-11-01
A number of clinical decision tools for osteoporosis risk assessment have been developed to select postmenopausal women for the measurement of bone mineral density. We developed and validated machine learning models with the aim of more accurately identifying the risk of osteoporosis in postmenopausal women compared to the ability of conventional clinical decision tools. We collected medical records from Korean postmenopausal women based on the Korea National Health and Nutrition Examination Surveys. The training data set was used to construct models based on popular machine learning algorithms such as support vector machines (SVM), random forests, artificial neural networks (ANN), and logistic regression (LR) based on simple surveys. The machine learning models were compared to four conventional clinical decision tools: osteoporosis self-assessment tool (OST), osteoporosis risk assessment instrument (ORAI), simple calculated osteoporosis risk estimation (SCORE), and osteoporosis index of risk (OSIRIS). SVM had significantly better area under the curve (AUC) of the receiver operating characteristic than ANN, LR, OST, ORAI, SCORE, and OSIRIS for the training set. SVM predicted osteoporosis risk with an AUC of 0.827, accuracy of 76.7%, sensitivity of 77.8%, and specificity of 76.0% at total hip, femoral neck, or lumbar spine for the testing set. The significant factors selected by SVM were age, height, weight, body mass index, duration of menopause, duration of breast feeding, estrogen therapy, hyperlipidemia, hypertension, osteoarthritis, and diabetes mellitus. Considering various predictors associated with low bone density, the machine learning methods may be effective tools for identifying postmenopausal women at high risk for osteoporosis.
Cazzola, D; Stone, B; Holsgrove, T P; Trewartha, G; Preatoni, E
2016-04-01
Biomechanical studies of rugby union scrummaging have focused on kinetic and kinematic analyses, while muscle activation strategies employed by front-row players during scrummaging are still unknown. The aim of the current study was to investigate the activity of spinal muscles during machine and live scrums. Nine male front-row forwards scrummaged as individuals against a scrum machine under "crouch-touch-set" and "crouch-bind-set" conditions, and against a two-player opposition in a simulated live condition. Muscle activities of the sternocleidomastoid, upper trapezius, and erector spinae were measured over the pre-engagement, engagement, and sustained-push phases. The "crouch-bind-set" condition increased muscle activity of the upper trapezius and sternocleidomastoid before and during the engagement phase in machine scrummaging. During the sustained-push phase, live scrummaging generated higher activities of the erector spinae than either machine conditions. These results suggest that the pre-bind, prior to engagement, may effectively prepare the cervical spine by stiffening joints before the impact phase. Additionally, machine scrummaging does not replicate the muscular demands of live scrummaging for the erector spinae, and for this reason, we advise rugby union forwards to ensure scrummaging is practiced in live situations to improve the specificity of their neuromuscular activation strategies in relation to resisting external loads. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Department of Defense Logistics Roadmap 2008. Volume 1
2008-07-01
machine readable identification mark on the Department’s tangible qualifying assets, and establishes the data management protocols needed to...uniquely identify items with a Unique Item Identifier (UII) via machine - readable information (MRI) marking represented by a two-dimensional data...property items with a machine -readable Unique Item Identifier (UII), which is a set of globally unique data elements. The UII is used in functional
Materialism and the Mediating Third
ERIC Educational Resources Information Center
Bradley, Joff
2012-01-01
This article proffers a critical reading of multiliteracy pedagogy and a materialism of the multimodal and machinic. A critical stance is taken against the mesmerising modes of representation that run rampant across our ocular territories. The article assesses the dangers of fetishizing technologies. To this end, Multiple Literacies Theory (MLT)…
Why Machine-Information Metaphors are Bad for Science and Science Education
NASA Astrophysics Data System (ADS)
Pigliucci, Massimo; Boudry, Maarten
2011-05-01
Genes are often described by biologists using metaphors derived from computational science: they are thought of as carriers of information, as being the equivalent of "blueprints" for the construction of organisms. Likewise, cells are often characterized as "factories" and organisms themselves become analogous to machines. Accordingly, when the human genome project was initially announced, the promise was that we would soon know how a human being is made, just as we know how to make airplanes and buildings. Importantly, modern proponents of Intelligent Design, the latest version of creationism, have exploited biologists' use of the language of information and blueprints to make their spurious case, based on pseudoscientific concepts such as "irreducible complexity" and on flawed analogies between living cells and mechanical factories. However, the living organism = machine analogy was criticized already by David Hume in his Dialogues Concerning Natural Religion. In line with Hume's criticism, over the past several years a more nuanced and accurate understanding of what genes are and how they operate has emerged, ironically in part from the work of computational scientists who take biology, and in particular developmental biology, more seriously than some biologists seem to do. In this article we connect Hume's original criticism of the living organism = machine analogy with the modern ID movement, and illustrate how the use of misleading and outdated metaphors in science can play into the hands of pseudoscientists. Thus, we argue that dropping the blueprint and similar metaphors will improve both the science of biology and its understanding by the general public.
Prediction of aquatic toxicity mode of action using linear discriminant and random forest models.
Martin, Todd M; Grulke, Christopher M; Young, Douglas M; Russom, Christine L; Wang, Nina Y; Jackson, Crystal R; Barron, Mace G
2013-09-23
The ability to determine the mode of action (MOA) for a diverse group of chemicals is a critical part of ecological risk assessment and chemical regulation. However, existing MOA assignment approaches in ecotoxicology have been limited to a relatively few MOAs, have high uncertainty, or rely on professional judgment. In this study, machine based learning algorithms (linear discriminant analysis and random forest) were used to develop models for assigning aquatic toxicity MOA. These methods were selected since they have been shown to be able to correlate diverse data sets and provide an indication of the most important descriptors. A data set of MOA assignments for 924 chemicals was developed using a combination of high confidence assignments, international consensus classifications, ASTER (ASessment Tools for the Evaluation of Risk) predictions, and weight of evidence professional judgment based an assessment of structure and literature information. The overall data set was randomly divided into a training set (75%) and a validation set (25%) and then used to develop linear discriminant analysis (LDA) and random forest (RF) MOA assignment models. The LDA and RF models had high internal concordance and specificity and were able to produce overall prediction accuracies ranging from 84.5 to 87.7% for the validation set. These results demonstrate that computational chemistry approaches can be used to determine the acute toxicity MOAs across a large range of structures and mechanisms.
LHC Status and Upgrade Challenges
NASA Astrophysics Data System (ADS)
Smith, Jeffrey
2009-11-01
The Large Hadron Collider has had a trying start-up and a challenging operational future lays ahead. Critical to the machine's performance is controlling a beam of particles whose stored energy is equivalent to 80 kg of TNT. Unavoidable beam losses result in energy deposition throughout the machine and without adequate protection this power would result in quenching of the superconducting magnets. A brief overview of the machine layout and principles of operation will be reviewed including a summary of the September 2008 accident. The current status of the LHC, startup schedule and upgrade options to achieve the target luminosity will be presented.
Radiation tolerant combinational logic cell
NASA Technical Reports Server (NTRS)
Maki, Gary R. (Inventor); Whitaker, Sterling (Inventor); Gambles, Jody W. (Inventor)
2009-01-01
A system has a reduced sensitivity to Single Event Upset and/or Single Event Transient(s) compared to traditional logic devices. In a particular embodiment, the system includes an input, a logic block, a bias stage, a state machine, and an output. The logic block is coupled to the input. The logic block is for implementing a logic function, receiving a data set via the input, and generating a result f by applying the data set to the logic function. The bias stage is coupled to the logic block. The bias stage is for receiving the result from the logic block and presenting it to the state machine. The state machine is coupled to the bias stage. The state machine is for receiving, via the bias stage, the result generated by the logic block. The state machine is configured to retain a state value for the system. The state value is typically based on the result generated by the logic block. The output is coupled to the state machine. The output is for providing the value stored by the state machine. Some embodiments of the invention produce dual rail outputs Q and Q'. The logic block typically contains combinational logic and is similar, in size and transistor configuration, to a conventional CMOS combinational logic design. However, only a very small portion of the circuits of these embodiments, is sensitive to Single Event Upset and/or Single Event Transients.
NASA Astrophysics Data System (ADS)
Lucian, P.; Gheorghe, S.
2017-08-01
This paper presents a new method, based on FRISCO formula, for optimizing the choice of the best control system for kinematical feed chains with great distance between slides used in computer numerical controlled machine tools. Such machines are usually, but not limited to, used for machining large and complex parts (mostly in the aviation industry) or complex casting molds. For such machine tools the kinematic feed chains are arranged in a dual-parallel drive structure that allows the mobile element to be moved by the two kinematical branches and their related control systems. Such an arrangement allows for high speed and high rigidity (a critical requirement for precision machining) during the machining process. A significant issue for such an arrangement it’s the ability of the two parallel control systems to follow the same trajectory accurately in order to address this issue it is necessary to achieve synchronous motion control for the two kinematical branches ensuring that the correct perpendicular position it’s kept by the mobile element during its motion on the two slides.
Human factors model concerning the man-machine interface of mining crewstations
NASA Technical Reports Server (NTRS)
Rider, James P.; Unger, Richard L.
1989-01-01
The U.S. Bureau of Mines is developing a computer model to analyze the human factors aspect of mining machine operator compartments. The model will be used as a research tool and as a design aid. It will have the capability to perform the following: simulated anthropometric or reach assessment, visibility analysis, illumination analysis, structural analysis of the protective canopy, operator fatigue analysis, and computation of an ingress-egress rating. The model will make extensive use of graphics to simplify data input and output. Two dimensional orthographic projections of the machine and its operator compartment are digitized and the data rebuilt into a three dimensional representation of the mining machine. Anthropometric data from either an individual or any size population may be used. The model is intended for use by equipment manufacturers and mining companies during initial design work on new machines. In addition to its use in machine design, the model should prove helpful as an accident investigation tool and for determining the effects of machine modifications made in the field on the critical areas of visibility and control reach ability.
An intelligent CNC machine control system architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, D.J.; Loucks, C.S.
1996-10-01
Intelligent, agile manufacturing relies on automated programming of digitally controlled processes. Currently, processes such as Computer Numerically Controlled (CNC) machining are difficult to automate because of highly restrictive controllers and poor software environments. It is also difficult to utilize sensors and process models for adaptive control, or to integrate machining processes with other tasks within a factory floor setting. As part of a Laboratory Directed Research and Development (LDRD) program, a CNC machine control system architecture based on object-oriented design and graphical programming has been developed to address some of these problems and to demonstrate automated agile machining applications usingmore » platform-independent software.« less
Modification of Upper Thread Tensioner of Sewing Machine
NASA Astrophysics Data System (ADS)
Klouček, P.; Škop, P.
Standard mechanical upper thread tensioner of sewing machines is more and more limited in use for industrial sewing machines due to increasing requests for quality and raising velocity of machines. If we omit mostly manual settings of force made only by sense, the most problematic things are influence of different friction coefficient of the different batch of threads and strong relation between thread tension and sewing machine velocity. The article describes the development focused to the elimination of the most significant disadvantages of a standard tensioner and mainly finding of new conception of the tensioner with electromagnetic brake, development and testing of its prototype.
Comparison between extreme learning machine and wavelet neural networks in data classification
NASA Astrophysics Data System (ADS)
Yahia, Siwar; Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri
2017-03-01
Extreme learning Machine is a well known learning algorithm in the field of machine learning. It's about a feed forward neural network with a single-hidden layer. It is an extremely fast learning algorithm with good generalization performance. In this paper, we aim to compare the Extreme learning Machine with wavelet neural networks, which is a very used algorithm. We have used six benchmark data sets to evaluate each technique. These datasets Including Wisconsin Breast Cancer, Glass Identification, Ionosphere, Pima Indians Diabetes, Wine Recognition and Iris Plant. Experimental results have shown that both extreme learning machine and wavelet neural networks have reached good results.
Electronic vending machines for dispensing rapid HIV self-testing kits: a case study.
Young, Sean D; Klausner, Jeffrey; Fynn, Risa; Bolan, Robert
2014-02-01
This short report evaluates the feasibility of using electronic vending machines for dispensing oral, fluid, rapid HIV self-testing kits in Los Angeles County. Feasibility criteria that needed to be addressed were defined as: (1) ability to find a manufacturer who would allow dispensing of HIV testing kits and could fit them to the dimensions of a vending machine, (2) ability to identify and address potential initial obstacles, trade-offs in choosing a machine location, and (3) ability to gain community approval for implementing this approach in a community setting. To address these issues, we contracted a vending machine company who could supply a customized, Internet-enabled machine that could dispense HIV kits and partnered with a local health center available to host the machine onsite and provide counseling to participants, if needed. Vending machines appear to be feasible technologies that can be used to distribute HIV testing kits.
Electronic vending machines for dispensing rapid HIV self-testing kits: A case study
Young, Sean D.; Klausner, Jeffrey; Fynn, Risa; Bolan, Robert
2014-01-01
This short report evaluates the feasibility of using electronic vending machines for dispensing oral, fluid, rapid HIV-self testing kits in Los Angeles County. Feasibility criteria that needed to be addressed were defined as: 1) ability to find a manufacturer who would allow dispensing of HIV testing kits and could fit them to the dimensions of a vending machine, 2) ability to identify and address potential initial obstacles, trade-offs in choosing a machine location, and 3) ability to gain community approval for implementing this approach in a community setting. To address these issues, we contracted a vending machine company who could supply a customized, Internet-enabled machine that could dispense HIV kits and partnered with a local health center available to host the machine onsite and provide counseling to participants, if needed. Vending machines appear to be feasible technologies that can be used to distribute HIV testing kits. PMID:23777528
NASA Astrophysics Data System (ADS)
Dronova, I.; Gong, P.; Wang, L.; Clinton, N.; Fu, W.; Qi, S.
2011-12-01
Remote sensing-based vegetation classifications representing plant function such as photosynthesis and productivity are challenging in wetlands with complex cover and difficult field access. Recent advances in object-based image analysis (OBIA) and machine-learning algorithms offer new classification tools; however, few comparisons of different algorithms and spatial scales have been discussed to date. We applied OBIA to delineate wetland plant functional types (PFTs) for Poyang Lake, the largest freshwater lake in China and Ramsar wetland conservation site, from 30-m Landsat TM scene at the peak of spring growing season. We targeted major PFTs (C3 grasses, C3 forbs and different types of C4 grasses and aquatic vegetation) that are both key players in system's biogeochemical cycles and critical providers of waterbird habitat. Classification results were compared among: a) several object segmentation scales (with average object sizes 900-9000 m2); b) several families of statistical classifiers (including Bayesian, Logistic, Neural Network, Decision Trees and Support Vector Machines) and c) two hierarchical levels of vegetation classification, a generalized 3-class set and more detailed 6-class set. We found that classification benefited from object-based approach which allowed including object shape, texture and context descriptors in classification. While a number of classifiers achieved high accuracy at the finest pixel-equivalent segmentation scale, the highest accuracies and best agreement among algorithms occurred at coarser object scales. No single classifier was consistently superior across all scales, although selected algorithms of Neural Network, Logistic and K-Nearest Neighbors families frequently provided the best discrimination of classes at different scales. The choice of vegetation categories also affected classification accuracy. The 6-class set allowed for higher individual class accuracies but lower overall accuracies than the 3-class set because individual classes differed in scales at which they were best discriminated from others. Main classification challenges included a) presence of C3 grasses in C4-grass areas, particularly following harvesting of C4 reeds and b) mixtures of emergent, floating and submerged aquatic plants at sub-object and sub-pixel scales. We conclude that OBIA with advanced statistical classifiers offers useful instruments for landscape vegetation analyses, and that spatial scale considerations are critical in mapping PFTs, while multi-scale comparisons can be used to guide class selection. Future work will further apply fuzzy classification and field-collected spectral data for PFT analysis and compare results with MODIS PFT products.
Experimental Realization of a Quantum Support Vector Machine
NASA Astrophysics Data System (ADS)
Li, Zhaokai; Liu, Xiaomei; Xu, Nanyang; Du, Jiangfeng
2015-04-01
The fundamental principle of artificial intelligence is the ability of machines to learn from previous experience and do future work accordingly. In the age of big data, classical learning machines often require huge computational resources in many practical cases. Quantum machine learning algorithms, on the other hand, could be exponentially faster than their classical counterparts by utilizing quantum parallelism. Here, we demonstrate a quantum machine learning algorithm to implement handwriting recognition on a four-qubit NMR test bench. The quantum machine learns standard character fonts and then recognizes handwritten characters from a set with two candidates. Because of the wide spread importance of artificial intelligence and its tremendous consumption of computational resources, quantum speedup would be extremely attractive against the challenges of big data.
NASA Astrophysics Data System (ADS)
Peng, Chong; Wang, Lun; Liao, T. Warren
2015-10-01
Currently, chatter has become the critical factor in hindering machining quality and productivity in machining processes. To avoid cutting chatter, a new method based on dynamic cutting force simulation model and support vector machine (SVM) is presented for the prediction of chatter stability lobes. The cutting force is selected as the monitoring signal, and the wavelet energy entropy theory is used to extract the feature vectors. A support vector machine is constructed using the MATLAB LIBSVM toolbox for pattern classification based on the feature vectors derived from the experimental cutting data. Then combining with the dynamic cutting force simulation model, the stability lobes diagram (SLD) can be estimated. Finally, the predicted results are compared with existing methods such as zero-order analytical (ZOA) and semi-discretization (SD) method as well as actual cutting experimental results to confirm the validity of this new method.
NASA Astrophysics Data System (ADS)
Cheng, Kai; Niu, Zhi-Chao; Wang, Robin C.; Rakowski, Richard; Bateman, Richard
2017-09-01
Smart machining has tremendous potential and is becoming one of new generation high value precision manufacturing technologies in line with the advance of Industry 4.0 concepts. This paper presents some innovative design concepts and, in particular, the development of four types of smart cutting tools, including a force-based smart cutting tool, a temperature-based internally-cooled cutting tool, a fast tool servo (FTS) and smart collets for ultraprecision and micro manufacturing purposes. Implementation and application perspectives of these smart cutting tools are explored and discussed particularly for smart machining against a number of industrial application requirements. They are contamination-free machining, machining of tool-wear-prone Si-based infra-red devices and medical applications, high speed micro milling and micro drilling, etc. Furthermore, implementation techniques are presented focusing on: (a) plug-and-produce design principle and the associated smart control algorithms, (b) piezoelectric film and surface acoustic wave transducers to measure cutting forces in process, (c) critical cutting temperature control in real-time machining, (d) in-process calibration through machining trials, (e) FE-based design and analysis of smart cutting tools, and (f) application exemplars on adaptive smart machining.
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
This document is intended to help education and training institutions deliver the Machine Tool Advanced Skills Technology (MAST) curriculum to a variety of individuals and organizations. MAST consists of industry-specific skill standards and model curricula for 15 occupational specialty areas within the U.S. machine tool and metals-related…
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
This document is intended to help education and training institutions deliver the Machine Tool Advanced Skills Technology (MAST) curriculum to a variety of individuals and organizations. MAST consists of industry-specific skill standards and model curricula for 15 occupational specialty areas within the U.S. machine tool and metals-related…
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
This document is intended to help education and training institutions deliver the Machine Tool Advanced Skills Technology (MAST) curriculum to a variety of individuals and organizations. MAST consists of industry-specific skill standards and model curricula for 15 occupational specialty areas within the U.S. machine tool and metals-related…
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
This document is intended to help education and training institutions deliver the Machine Tool Advanced Skills Technology (MAST) curriculum to a variety of individuals and organizations. MAST consists of industry-specific skill standards and model curricula for 15 occupational specialty areas within the U.S. machine tool and metals-related…
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
This document is intended to help education and training institutions deliver the Machine Tool Advanced Skills Technology (MAST) curriculum to a variety of individuals and organizations. MAST consists of industry-specific skill standards and model curricula for 15 occupational specialty areas within the U.S. machine tool and metals-related…
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
This document is intended to help education and training institutions deliver the Machine Tool Advanced Skills Technology (MAST) curriculum to a variety of individuals and organizations. MAST consists of industry-specific skill standards and model curricula for 15 occupational specialty areas within the U.S. machine tool and metals-related…
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
This document is intended to help education and training institutions deliver the Machine Tool Advanced Skills Technology (MAST) curriculum to a variety of individuals and organizations. MAST consists of industry-specific skill standards and model curricula for 15 occupational specialty areas within the U.S. machine tool and metals-related…
ERIC Educational Resources Information Center
Texas State Technical Coll., Waco.
This document is intended to help education and training institutions deliver the Machine Tool Advanced Skills Technology (MAST) curriculum to a variety of individuals and organizations. MAST consists of industry-specific skill standards and model curricula for 15 occupational specialty areas within the U.S. machine tool and metals-related…
García-López, David; Herrero, Azael J; González-Calvo, Gustavo; Rhea, Matthew R; Marín, Pedro J
2010-09-01
This study aimed to investigate the role of elastic resistance (ER) applied "in series" to a pulley-cable (PC) machine on the number of repetitions performed, kinematics parameters, and perceived exertion during a biceps-curl set to failure with a submaximal load (70% of the 1 repetition maximum). Twenty-one undergraduate students (17 men and 4 women) performed, on 2 different days, 1 biceps-curl set on the PC machine. Subjects were randomly assigned to complete 2 experimental conditions in a cross-over fashion: conventional PC mode or ER + PC mode. Results indicate ER applied "in series" to a PC machine significantly reduces (p < 0.05) the maximal number of repetitions and results in a smooth and consistent decline in mean acceleration throughout the set, in comparison to the conventional PC mode. Although no significant differences were found concerning intrarepetition kinematics, the ER trended to reduce (18.6%) the peak acceleration of the load. With a more uniformly distributed external resistance, a greater average muscle tension could have been achieved throughout the range of movement, leading to greater fatigue that could explain the lower number of maximal repetitions achieved. The application of force in a smooth, consistent fashion during each repetition of an exercise, while avoiding active deceleration, is expected to enhance the benefits of the resistance exercise, especially for those seeking greater increases in muscular hypertrophy.
Exploring cluster Monte Carlo updates with Boltzmann machines
NASA Astrophysics Data System (ADS)
Wang, Lei
2017-11-01
Boltzmann machines are physics informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applying the Boltzmann machines back to physics, they are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
Machine learning enhanced optical distance sensor
NASA Astrophysics Data System (ADS)
Amin, M. Junaid; Riza, N. A.
2018-01-01
Presented for the first time is a machine learning enhanced optical distance sensor. The distance sensor is based on our previously demonstrated distance measurement technique that uses an Electronically Controlled Variable Focus Lens (ECVFL) with a laser source to illuminate a target plane with a controlled optical beam spot. This spot with varying spot sizes is viewed by an off-axis camera and the spot size data is processed to compute the distance. In particular, proposed and demonstrated in this paper is the use of a regularized polynomial regression based supervised machine learning algorithm to enhance the accuracy of the operational sensor. The algorithm uses the acquired features and corresponding labels that are the actual target distance values to train a machine learning model. The optimized training model is trained over a 1000 mm (or 1 m) experimental target distance range. Using the machine learning algorithm produces a training set and testing set distance measurement errors of <0.8 mm and <2.2 mm, respectively. The test measurement error is at least a factor of 4 improvement over our prior sensor demonstration without the use of machine learning. Applications for the proposed sensor include industrial scenario distance sensing where target material specific training models can be generated to realize low <1% measurement error distance measurements.
Osteoporosis risk prediction using machine learning and conventional methods.
Kim, Sung Kean; Yoo, Tae Keun; Oh, Ein; Kim, Deok Won
2013-01-01
A number of clinical decision tools for osteoporosis risk assessment have been developed to select postmenopausal women for the measurement of bone mineral density. We developed and validated machine learning models with the aim of more accurately identifying the risk of osteoporosis in postmenopausal women, and compared with the ability of a conventional clinical decision tool, osteoporosis self-assessment tool (OST). We collected medical records from Korean postmenopausal women based on the Korea National Health and Nutrition Surveys (KNHANES V-1). The training data set was used to construct models based on popular machine learning algorithms such as support vector machines (SVM), random forests (RF), artificial neural networks (ANN), and logistic regression (LR) based on various predictors associated with low bone density. The learning models were compared with OST. SVM had significantly better area under the curve (AUC) of the receiver operating characteristic (ROC) than ANN, LR, and OST. Validation on the test set showed that SVM predicted osteoporosis risk with an AUC of 0.827, accuracy of 76.7%, sensitivity of 77.8%, and specificity of 76.0%. We were the first to perform comparisons of the performance of osteoporosis prediction between the machine learning and conventional methods using population-based epidemiological data. The machine learning methods may be effective tools for identifying postmenopausal women at high risk for osteoporosis.
Bressler, Susan B.; Edwards, Allison R.; Chalam, Kakarla V.; Bressler, Neil M.; Glassman, Adam R.; Jaffe, Glenn J.; Melia, Michele; Saggau, David D.; Plous, Oren Z.
2014-01-01
Importance Advances in retinal imaging have led to the development of optical coherence tomography (OCT) instruments that incorporate spectral domain (SD) technology. Understanding measurement variability and relationships between retinal thickness measurements obtained on different machines is critical for proper use in clinical trials and clinical settings. Objectives Evaluate reproducibility of retinal thickness measurements from OCT images obtained by time domain (TD) (Zeiss Stratus) and SD (Zeiss Cirrus and Heidelberg Spectralis) instruments and formulate equations to convert retinal thickness measurements from SD-OCT to equivalent values on TD-OCT. Design Cross-sectional observational study. Each study eye underwent two replicate Stratus scans followed by two replicate Cirrus or Spectralis (real time image registration utilized) scans centered on the fovea. Setting Private and institutional practices Participants Diabetic persons with at least one eye with central-involved diabetic macular edema (DME), defined as Stratus central subfield thickness (CST)≥250μm. An additional normative cohort, individuals with diabetes but without DME, was enrolled. Main Outcome Measure(s) OCT CST and macular volume Results The Bland-Altman coefficient of repeatability for relative change in CST (the degree of change that could be expected from measurement variability) was lower on Spectralis compared with Stratus and Cirrus scans (7%, 12–15%, and 14%, respectively). For each cohort, the initial Stratus CST was within 10% of the replicate Stratus measurement 92% of the time; the conversion equations predicted a Stratus CST within 10% of the observed thickness 86% and 89% of the time for Stratus/Cirrus and Stratus/Spectralis groups, respectively. The Bland-Altman limits of agreement for relative change in CST between machines (the degree of change that could be expected from measurement variability, combined within and between instrument variability) were 21% for Cirrus and 19% for Spectralis, comparing predicted versus actual Stratus measurement. Conclusions and Relevance Reproducibility appears better on Spectralis than Cirrus and Stratus. Conversion equations to transform Cirrus or Spectralis measurements to Stratus-equivalent values, within 10% of the observed Stratus thickness values, appear feasible. CST changes beyond 10% when using the same machine or 20% when switching machines, after conversion to Stratus equivalents, are likely due to a change in retinal thickness and not measurement error. PMID:25058482
Machine learning for the meta-analyses of microbial pathogens' volatile signatures.
Palma, Susana I C J; Traguedo, Ana P; Porteira, Ana R; Frias, Maria J; Gamboa, Hugo; Roque, Ana C A
2018-02-20
Non-invasive and fast diagnostic tools based on volatolomics hold great promise in the control of infectious diseases. However, the tools to identify microbial volatile organic compounds (VOCs) discriminating between human pathogens are still missing. Artificial intelligence is increasingly recognised as an essential tool in health sciences. Machine learning algorithms based in support vector machines and features selection tools were here applied to find sets of microbial VOCs with pathogen-discrimination power. Studies reporting VOCs emitted by human microbial pathogens published between 1977 and 2016 were used as source data. A set of 18 VOCs is sufficient to predict the identity of 11 microbial pathogens with high accuracy (77%), and precision (62-100%). There is one set of VOCs associated with each of the 11 pathogens which can predict the presence of that pathogen in a sample with high accuracy and precision (86-90%). The implemented pathogen classification methodology supports future database updates to include new pathogen-VOC data, which will enrich the classifiers. The sets of VOCs identified potentiate the improvement of the selectivity of non-invasive infection diagnostics using artificial olfaction devices.
Man-machine interface requirements - advanced technology
NASA Technical Reports Server (NTRS)
Remington, R. W.; Wiener, E. L.
1984-01-01
Research issues and areas are identified where increased understanding of the human operator and the interaction between the operator and the avionics could lead to improvements in the performance of current and proposed helicopters. Both current and advanced helicopter systems and avionics are considered. Areas critical to man-machine interface requirements include: (1) artificial intelligence; (2) visual displays; (3) voice technology; (4) cockpit integration; and (5) pilot work loads and performance.
Atomistic Design and Simulations of Nanoscale Machines and Assembly
NASA Technical Reports Server (NTRS)
Goddard, William A., III; Cagin, Tahir; Walch, Stephen P.
2000-01-01
Over the three years of this project, we made significant progress on critical theoretical and computational issues in nanoscale science and technology, particularly in:(1) Fullerenes and nanotubes, (2) Characterization of surfaces of diamond and silicon for NEMS applications, (3) Nanoscale machine and assemblies, (4) Organic nanostructures and dendrimers, (5) Nanoscale confinement and nanotribology, (6) Dynamic response of nanoscale structures nanowires (metals, tubes, fullerenes), (7) Thermal transport in nanostructures.
NASA Astrophysics Data System (ADS)
Tufillaro, Nicholas B.; Abbott, Tyler A.; Griffiths, David J.
1984-10-01
We examine the motion of an Atwood's Machine in which one of the masses is allowed to swing in a plane. Computer studies reveal a rich variety of trajectories. The orbits are classified (bounded, periodic, singular, and terminating), and formulas for the critical mass ratios are developed. Perturbative techniques yield good approximations to the computer-generated trajectories. The model constitutes a simple example of a nonlinear dynamical system with two degrees of freedom.
Mechatronics technology in predictive maintenance method
NASA Astrophysics Data System (ADS)
Majid, Nurul Afiqah A.; Muthalif, Asan G. A.
2017-11-01
This paper presents recent mechatronics technology that can help to implement predictive maintenance by combining intelligent and predictive maintenance instrument. Vibration Fault Simulation System (VFSS) is an example of mechatronics system. The focus of this study is the prediction on the use of critical machines to detect vibration. Vibration measurement is often used as the key indicator of the state of the machine. This paper shows the choice of the appropriate strategy in the vibration of diagnostic process of the mechanical system, especially rotating machines, in recognition of the failure during the working process. In this paper, the vibration signature analysis is implemented to detect faults in rotary machining that includes imbalance, mechanical looseness, bent shaft, misalignment, missing blade bearing fault, balancing mass and critical speed. In order to perform vibration signature analysis for rotating machinery faults, studies have been made on how mechatronics technology is used as predictive maintenance methods. Vibration Faults Simulation Rig (VFSR) is designed to simulate and understand faults signatures. These techniques are based on the processing of vibrational data in frequency-domain. The LabVIEW-based spectrum analyzer software is developed to acquire and extract frequency contents of faults signals. This system is successfully tested based on the unique vibration fault signatures that always occur in a rotating machinery.
NASA Technical Reports Server (NTRS)
Choi, Benjamin B.; Hunker, Keith R.; Hartwig, Jason; Brown, Gerald V.
2017-01-01
The NASA Glenn Research Center (GRC) has been developing the high efficiency and high-power density superconducting (SC) electric machines in full support of electrified aircraft propulsion (EAP) systems for a future electric aircraft. A SC coil test rig has been designed and built to perform static and AC measurements on BSCCO, (RE)BCO, and YBCO high temperature superconducting (HTS) wire and coils at liquid nitrogen (LN2) temperature. In this paper, DC measurements on five SC coil configurations of various geometry in zero external magnetic field are measured to develop good measurement technique and to determine the critical current (Ic) and the sharpness (n value) of the super-to-normal transition. Also, standard procedures for coil design, fabrication, coil mounting, micro-volt measurement, cryogenic testing, current control, and data acquisition technique were established. Experimentally measured critical currents are compared with theoretical predicted values based on an electric-field criterion (Ec). Data here are essential to quantify the SC electric machine operation limits where the SC begins to exhibit non-zero resistance. All test data will be utilized to assess the feasibility of using HTS coils for the fully superconducting AC electric machine development for an aircraft electric propulsion system.
The LET Procedure for Prosthetic Myocontrol: Towards Multi-DOF Control Using Single-DOF Activations.
Nowak, Markus; Castellini, Claudio
2016-01-01
Simultaneous and proportional myocontrol of dexterous hand prostheses is to a large extent still an open problem. With the advent of commercially and clinically available multi-fingered hand prostheses there are now more independent degrees of freedom (DOFs) in prostheses than can be effectively controlled using surface electromyography (sEMG), the current standard human-machine interface for hand amputees. In particular, it is uncertain, whether several DOFs can be controlled simultaneously and proportionally by exclusively calibrating the intended activation of single DOFs. The problem is currently solved by training on all required combinations. However, as the number of available DOFs grows, this approach becomes overly long and poses a high cognitive burden on the subject. In this paper we present a novel approach to overcome this problem. Multi-DOF activations are artificially modelled from single-DOF ones using a simple linear combination of sEMG signals, which are then added to the training set. This procedure, which we named LET (Linearly Enhanced Training), provides an augmented data set to any machine-learning-based intent detection system. In two experiments involving intact subjects, one offline and one online, we trained a standard machine learning approach using the full data set containing single- and multi-DOF activations as well as using the LET-augmented data set in order to evaluate the performance of the LET procedure. The results indicate that the machine trained on the latter data set obtains worse results in the offline experiment compared to the full data set. However, the online implementation enables the user to perform multi-DOF tasks with almost the same precision as single-DOF tasks without the need of explicitly training multi-DOF activations. Moreover, the parameters involved in the system are statistically uniform across subjects.
Classifying Black Hole States with Machine Learning
NASA Astrophysics Data System (ADS)
Huppenkothen, Daniela
2018-01-01
Galactic black hole binaries are known to go through different states with apparent signatures in both X-ray light curves and spectra, leading to important implications for accretion physics as well as our knowledge of General Relativity. Existing frameworks of classification are usually based on human interpretation of low-dimensional representations of the data, and generally only apply to fairly small data sets. Machine learning, in contrast, allows for rapid classification of large, high-dimensional data sets. In this talk, I will report on advances made in classification of states observed in Black Hole X-ray Binaries, focusing on the two sources GRS 1915+105 and Cygnus X-1, and show both the successes and limitations of using machine learning to derive physical constraints on these systems.
nu-Anomica: A Fast Support Vector Based Novelty Detection Technique
NASA Technical Reports Server (NTRS)
Das, Santanu; Bhaduri, Kanishka; Oza, Nikunj C.; Srivastava, Ashok N.
2009-01-01
In this paper we propose nu-Anomica, a novel anomaly detection technique that can be trained on huge data sets with much reduced running time compared to the benchmark one-class Support Vector Machines algorithm. In -Anomica, the idea is to train the machine such that it can provide a close approximation to the exact decision plane using fewer training points and without losing much of the generalization performance of the classical approach. We have tested the proposed algorithm on a variety of continuous data sets under different conditions. We show that under all test conditions the developed procedure closely preserves the accuracy of standard one-class Support Vector Machines while reducing both the training time and the test time by 5 - 20 times.
Hamann, Hendrik F.; Hwang, Youngdeok; van Kessel, Theodore G.; Khabibrakhmanov, Ildar K.; Muralidhar, Ramachandran
2016-10-18
A method and a system to perform multi-model blending are described. The method includes obtaining one or more sets of predictions of historical conditions, the historical conditions corresponding with a time T that is historical in reference to current time, and the one or more sets of predictions of the historical conditions being output by one or more models. The method also includes obtaining actual historical conditions, the actual historical conditions being measured conditions at the time T, assembling a training data set including designating the two or more set of predictions of historical conditions as predictor variables and the actual historical conditions as response variables, and training a machine learning algorithm based on the training data set. The method further includes obtaining a blended model based on the machine learning algorithm.
Vine, Michelle M; Harrington, Daniel W; Butler, Alexandra; Patte, Karen; Godin, Katelyn; Leatherdale, Scott T
2017-04-20
We investigated the extent to which a sample of Ontario and Alberta secondary schools are being compliant with their respective provincial nutrition policies, in terms of the food and beverages sold in vending machines. This observational study used objective data on drinks and snacks from vending machines, collected over three years of the COMPASS study (2012/2013-2014/2015 school years). Drink (e.g., sugar-containing carbonated/non-carbonated soft drinks, sports drinks, etc.) and snack (e.g., chips, crackers, etc.) data were coded by number of units available, price, and location of vending machine(s) in the school. Univariate and bivariate analyses were undertaken using R version 3.2.3. In order to assess policy compliancy over time, nutritional information of products in vending machines was compared to nutrition standards set out in P/PM 150 in Ontario, and those set out in the Alberta Nutrition Guidelines for Children and Youth (2012) in Alberta. Results reveal a decline over time in the proportion of schools selling sugar-containing carbonated soft drinks (9% in 2012/2013 vs. 3% in 2014/2015), crackers (26% vs. 17%) and cake products (12% vs. 5%) in vending machines, and inconsistent changes in the proportion selling chips (53%, 67% and 65% over the three school years). Conversely, results highlight increases in the proportion of vending machines selling chocolate bars (7% vs. 13%) and cookies (21% vs. 40%) between the 2012/2013 and 2014/2015 school years. Nutritional standard policies were not adhered to in the majority of schools with respect to vending machines. There is a need for investment in formal monitoring and evaluation of school policies, and the provision of information and tools to support nutrition policy implementation.
Advantages of Synthetic Noise and Machine Learning for Analyzing Radioecological Data Sets.
Shuryak, Igor
2017-01-01
The ecological effects of accidental or malicious radioactive contamination are insufficiently understood because of the hazards and difficulties associated with conducting studies in radioactively-polluted areas. Data sets from severely contaminated locations can therefore be small. Moreover, many potentially important factors, such as soil concentrations of toxic chemicals, pH, and temperature, can be correlated with radiation levels and with each other. In such situations, commonly-used statistical techniques like generalized linear models (GLMs) may not be able to provide useful information about how radiation and/or these other variables affect the outcome (e.g. abundance of the studied organisms). Ensemble machine learning methods such as random forests offer powerful alternatives. We propose that analysis of small radioecological data sets by GLMs and/or machine learning can be made more informative by using the following techniques: (1) adding synthetic noise variables to provide benchmarks for distinguishing the performances of valuable predictors from irrelevant ones; (2) adding noise directly to the predictors and/or to the outcome to test the robustness of analysis results against random data fluctuations; (3) adding artificial effects to selected predictors to test the sensitivity of the analysis methods in detecting predictor effects; (4) running a selected machine learning method multiple times (with different random-number seeds) to test the robustness of the detected "signal"; (5) using several machine learning methods to test the "signal's" sensitivity to differences in analysis techniques. Here, we applied these approaches to simulated data, and to two published examples of small radioecological data sets: (I) counts of fungal taxa in samples of soil contaminated by the Chernobyl nuclear power plan accident (Ukraine), and (II) bacterial abundance in soil samples under a ruptured nuclear waste storage tank (USA). We show that the proposed techniques were advantageous compared with the methodology used in the original publications where the data sets were presented. Specifically, our approach identified a negative effect of radioactive contamination in data set I, and suggested that in data set II stable chromium could have been a stronger limiting factor for bacterial abundance than the radionuclides 137Cs and 99Tc. This new information, which was extracted from these data sets using the proposed techniques, can potentially enhance the design of radioactive waste bioremediation.
Advantages of Synthetic Noise and Machine Learning for Analyzing Radioecological Data Sets
Shuryak, Igor
2017-01-01
The ecological effects of accidental or malicious radioactive contamination are insufficiently understood because of the hazards and difficulties associated with conducting studies in radioactively-polluted areas. Data sets from severely contaminated locations can therefore be small. Moreover, many potentially important factors, such as soil concentrations of toxic chemicals, pH, and temperature, can be correlated with radiation levels and with each other. In such situations, commonly-used statistical techniques like generalized linear models (GLMs) may not be able to provide useful information about how radiation and/or these other variables affect the outcome (e.g. abundance of the studied organisms). Ensemble machine learning methods such as random forests offer powerful alternatives. We propose that analysis of small radioecological data sets by GLMs and/or machine learning can be made more informative by using the following techniques: (1) adding synthetic noise variables to provide benchmarks for distinguishing the performances of valuable predictors from irrelevant ones; (2) adding noise directly to the predictors and/or to the outcome to test the robustness of analysis results against random data fluctuations; (3) adding artificial effects to selected predictors to test the sensitivity of the analysis methods in detecting predictor effects; (4) running a selected machine learning method multiple times (with different random-number seeds) to test the robustness of the detected “signal”; (5) using several machine learning methods to test the “signal’s” sensitivity to differences in analysis techniques. Here, we applied these approaches to simulated data, and to two published examples of small radioecological data sets: (I) counts of fungal taxa in samples of soil contaminated by the Chernobyl nuclear power plan accident (Ukraine), and (II) bacterial abundance in soil samples under a ruptured nuclear waste storage tank (USA). We show that the proposed techniques were advantageous compared with the methodology used in the original publications where the data sets were presented. Specifically, our approach identified a negative effect of radioactive contamination in data set I, and suggested that in data set II stable chromium could have been a stronger limiting factor for bacterial abundance than the radionuclides 137Cs and 99Tc. This new information, which was extracted from these data sets using the proposed techniques, can potentially enhance the design of radioactive waste bioremediation. PMID:28068401
Complex extreme learning machine applications in terahertz pulsed signals feature sets.
Yin, X-X; Hadjiloucas, S; Zhang, Y
2014-11-01
This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. Firstly a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a difficult multi-class classification problem of spectra from six different powder samples that although have fairly indistinguishable features in the optical spectrum, they also possess a few discernable spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude as well as the phase of the recorded spectra. Classification speed and accuracy are contrasted with that achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for classification of the very large data sets generated with current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object as would be required within a tomographic setting and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity for the establishment of complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinic diagnosis. Furthermore, the proposed algorithm should also be very useful in other applications requiring the classification of very large datasets. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Machinability of nickel based alloys using electrical discharge machining process
NASA Astrophysics Data System (ADS)
Khan, M. Adam; Gokul, A. K.; Bharani Dharan, M. P.; Jeevakarthikeyan, R. V. S.; Uthayakumar, M.; Thirumalai Kumaran, S.; Duraiselvam, M.
2018-04-01
The high temperature materials such as nickel based alloys and austenitic steel are frequently used for manufacturing critical aero engine turbine components. Literature on conventional and unconventional machining of steel materials is abundant over the past three decades. However the machining studies on superalloy is still a challenging task due to its inherent property and quality. Thus this material is difficult to be cut in conventional processes. Study on unconventional machining process for nickel alloys is focused in this proposed research. Inconel718 and Monel 400 are the two different candidate materials used for electrical discharge machining (EDM) process. Investigation is to prepare a blind hole using copper electrode of 6mm diameter. Electrical parameters are varied to produce plasma spark for diffusion process and machining time is made constant to calculate the experimental results of both the material. Influence of process parameters on tool wear mechanism and material removal are considered from the proposed experimental design. While machining the tool has prone to discharge more materials due to production of high energy plasma spark and eddy current effect. The surface morphology of the machined surface were observed with high resolution FE SEM. Fused electrode found to be a spherical structure over the machined surface as clumps. Surface roughness were also measured with surface profile using profilometer. It is confirmed that there is no deviation and precise roundness of drilling is maintained.
The Spin Zone: Choosing Laundry Equipment.
ERIC Educational Resources Information Center
Milshtein, Amy
2003-01-01
Discusses whether or not a college or university should own its own laundry equipment or contract out laundry services, including machine maintenance, and outlines the advantages of different types of washing machines for the student housing setting. Also reviews issues related to payment methods. (SLD)
A performance study of sparse Cholesky factorization on INTEL iPSC/860
NASA Technical Reports Server (NTRS)
Zubair, M.; Ghose, M.
1992-01-01
The problem of Cholesky factorization of a sparse matrix has been very well investigated on sequential machines. A number of efficient codes exist for factorizing large unstructured sparse matrices. However, there is a lack of such efficient codes on parallel machines in general, and distributed machines in particular. Some of the issues that are critical to the implementation of sparse Cholesky factorization on a distributed memory parallel machine are ordering, partitioning and mapping, load balancing, and ordering of various tasks within a processor. Here, we focus on the effect of various partitioning schemes on the performance of sparse Cholesky factorization on the Intel iPSC/860. Also, a new partitioning heuristic for structured as well as unstructured sparse matrices is proposed, and its performance is compared with other schemes.
Materials prediction via classification learning
Balachandran, Prasanna V.; Theiler, James; Rondinelli, James M.; ...
2015-08-25
In the paradigm of materials informatics for accelerated materials discovery, the choice of feature set (i.e. attributes that capture aspects of structure, chemistry and/or bonding) is critical. Ideally, the feature sets should provide a simple physical basis for extracting major structural and chemical trends and furthermore, enable rapid predictions of new material chemistries. Orbital radii calculated from model pseudopotential fits to spectroscopic data are potential candidates to satisfy these conditions. Although these radii (and their linear combinations) have been utilized in the past, their functional forms are largely justified with heuristic arguments. Here we show that machine learning methods naturallymore » uncover the functional forms that mimic most frequently used features in the literature, thereby providing a mathematical basis for feature set construction without a priori assumptions. We apply these principles to study two broad materials classes: (i) wide band gap AB compounds and (ii) rare earth-main group RM intermetallics. The AB compounds serve as a prototypical example to demonstrate our approach, whereas the RM intermetallics show how these concepts can be used to rapidly design new ductile materials. In conclusion, our predictive models indicate that ScCo, ScIr, and YCd should be ductile, whereas each was previously proposed to be brittle.« less
Materials Prediction via Classification Learning
Balachandran, Prasanna V.; Theiler, James; Rondinelli, James M.; Lookman, Turab
2015-01-01
In the paradigm of materials informatics for accelerated materials discovery, the choice of feature set (i.e. attributes that capture aspects of structure, chemistry and/or bonding) is critical. Ideally, the feature sets should provide a simple physical basis for extracting major structural and chemical trends and furthermore, enable rapid predictions of new material chemistries. Orbital radii calculated from model pseudopotential fits to spectroscopic data are potential candidates to satisfy these conditions. Although these radii (and their linear combinations) have been utilized in the past, their functional forms are largely justified with heuristic arguments. Here we show that machine learning methods naturally uncover the functional forms that mimic most frequently used features in the literature, thereby providing a mathematical basis for feature set construction without a priori assumptions. We apply these principles to study two broad materials classes: (i) wide band gap AB compounds and (ii) rare earth-main group RM intermetallics. The AB compounds serve as a prototypical example to demonstrate our approach, whereas the RM intermetallics show how these concepts can be used to rapidly design new ductile materials. Our predictive models indicate that ScCo, ScIr, and YCd should be ductile, whereas each was previously proposed to be brittle. PMID:26304800
Celestial data routing network
NASA Astrophysics Data System (ADS)
Bordetsky, Alex
2000-11-01
Imagine that information processing human-machine network is threatened in a particular part of the world. Suppose that an anticipated threat of physical attacks could lead to disruption of telecommunications network management infrastructure and access capabilities for small geographically distributed groups engaged in collaborative operations. Suppose that small group of astronauts are exploring the solar planet and need to quickly configure orbital information network to support their collaborative work and local communications. The critical need in both scenarios would be a set of low-cost means of small team celestial networking. To the geographically distributed mobile collaborating groups such means would allow to maintain collaborative multipoint work, set up orbital local area network, and provide orbital intranet communications. This would be accomplished by dynamically assembling the network enabling infrastructure of the small satellite based router, satellite based Codec, and set of satellite based intelligent management agents. Cooperating single function pico satellites, acting as agents and personal switching devices together would represent self-organizing intelligent orbital network of cooperating mobile management nodes. Cooperative behavior of the pico satellite based agents would be achieved by comprising a small orbital artificial neural network capable of learning and restructing the networking resources in response to the anticipated threat.
A translator and simulator for the Burroughs D machine
NASA Technical Reports Server (NTRS)
Roberts, J.
1972-01-01
The D Machine is described as a small user microprogrammable computer designed to be a versatile building block for such diverse functions as: disk file controllers, I/O controllers, and emulators. TRANSLANG is an ALGOL-like language, which allows D Machine users to write microprograms in an English-like format as opposed to creating binary bit pattern maps. The TRANSLANG translator parses TRANSLANG programs into D Machine microinstruction bit patterns which can be executed on the D Machine simulator. In addition to simulation and translation, the two programs also offer several debugging tools, such as: a full set of diagnostic error messages, register dumps, simulated memory dumps, traces on instructions and groups of instructions, and breakpoints.
Traction sheave elevator, hoisting unit and machine space
Hakala, Harri; Mustalahti, Jorma; Aulanko, Esko
2000-01-01
Traction sheave elevator consisting of an elevator car moving along elevator guide rails, a counterweight moving along counterweight guide rails, a set of hoisting ropes (3) on which the elevator car and counterweight are suspended, and a drive machine unit (6) driving a traction sheave (7) acting on the hoisting ropes (3) and placed in the elevator shaft. The drive machine unit (6) is of a flat construction. A wall of the elevator shaft is provided with a machine space with its open side facing towards the shaft, the essential parts of the drive machine unit (6) being placed in the space. The hoisting unit (9) of the traction sheave elevator consists of a substantially discoidal drive machine unit (6) and an instrument panel (8) mounted on the frame (20) of the hoisting unit.
Paquerault, Sophie; Hardy, Paul T; Wersto, Nancy; Chen, John; Smith, Robert C
2010-09-01
The aim of this study was to explore different computerized models (the "machine") as a means to achieve optimal use of computer-aided detection (CAD) systems and to investigate whether these models can play a primary role in clinical decision making and possibly replace a clinician's subjective decision for combining his or her own assessment with that provided by a CAD system. Data previously collected from a fully crossed, multiple-reader, multiple-case observer study with and without CAD by seven observers asked to identify simulated small masses on two separate sets of 100 mammographic images (low-contrast and high-contrast sets; ie, low-contrast and high-contrast simulated masses added to random locations on normal mammograms) were used. This allowed testing two relative sensitivities between the observers and CAD. Seven models that combined detection assessments from CAD standalone, unaided read, and CAD-aided read (second read and concurrent read) were developed using the leave-one-out technique for training and testing. These models were personalized for each observer. Detection performance accuracies were analyzed using the area under a portion of the free-response receiver-operating characteristic curve (AUFC), sensitivity, and number of false-positives per image. For the low-contrast set, the use of computerized models resulted in significantly higher AUFCs compared to the unaided read mode for all readers, whereas the increased AUFCs between CAD-aided (second and concurrent reads; ie, decisions made by the readers) and unaided read modes were statistically significant for a majority, but not all, of the readers (four and five of the seven readers, respectively). For the high-contrast set, there were no significant trends in the AUFCs whether or not a model was used to combine the original reading modes. Similar results were observed when using sensitivity as the figure of merit. However, the average number of false-positives per image resulting from the computerized models remained the same as that obtained from the unaided read modes. Individual computerized models (the machine) that combine image assessments from CAD standalone, unaided read, and CAD-aided read can increase detection performance compared to the reading done by the observer. However, relative sensitivity (ie, the difference in sensitivity between CAD standalone and unaided read) was a critical factor that determined incremental improvement in decision making, whether made by the observer or using computerized models. Published by Elsevier Inc.
An empirically based steady state friction law and implications for fault stability
NASA Astrophysics Data System (ADS)
Spagnuolo, E.; Nielsen, S.; Violay, M.; Di Toro, G.
2016-04-01
Empirically based rate-and-state friction laws (RSFLs) have been proposed to model the dependence of friction forces with slip and time. The relevance of the RSFL for earthquake mechanics is that few constitutive parameters define critical conditions for fault stability (i.e., critical stiffness and frictional fault behavior). However, the RSFLs were determined from experiments conducted at subseismic slip rates (V < 1 cm/s), and their extrapolation to earthquake deformation conditions (V > 0.1 m/s) remains questionable on the basis of the experimental evidence of (1) large dynamic weakening and (2) activation of particular fault lubrication processes at seismic slip rates. Here we propose a modified RSFL (MFL) based on the review of a large published and unpublished data set of rock friction experiments performed with different testing machines. The MFL, valid at steady state conditions from subseismic to seismic slip rates (0.1 µm/s < V < 3 m/s), describes the initiation of a substantial velocity weakening in the 1-20 cm/s range resulting in a critical stiffness increase that creates a peak of potential instability in that velocity regime. The MFL leads to a new definition of fault frictional stability with implications for slip event styles and relevance for models of seismic rupture nucleation, propagation, and arrest.
[Present-day metal-cutting tools and working conditions].
Kondratiuk, V P
1990-01-01
Polyfunctional machine-tools of a processing centre type are characterized by a set of hygienic advantages as compared to universal machine-tools. But low degree of mechanization and automation of some auxiliary processes, and constructional defects which decrease the ergonomic characteristics of the tools, involve labour intensity in multi-machine processing. The article specifies techniques of allowable noise level assessment, and proposes hygienic recommendations, some of which have been introduced into practice.
Machine learning for predicting soil classes in three semi-arid landscapes
Brungard, Colby W.; Boettinger, Janis L.; Duniway, Michael C.; Wills, Skye A.; Edwards, Thomas C.
2015-01-01
Mapping the spatial distribution of soil taxonomic classes is important for informing soil use and management decisions. Digital soil mapping (DSM) can quantitatively predict the spatial distribution of soil taxonomic classes. Key components of DSM are the method and the set of environmental covariates used to predict soil classes. Machine learning is a general term for a broad set of statistical modeling techniques. Many different machine learning models have been applied in the literature and there are different approaches for selecting covariates for DSM. However, there is little guidance as to which, if any, machine learning model and covariate set might be optimal for predicting soil classes across different landscapes. Our objective was to compare multiple machine learning models and covariate sets for predicting soil taxonomic classes at three geographically distinct areas in the semi-arid western United States of America (southern New Mexico, southwestern Utah, and northeastern Wyoming). All three areas were the focus of digital soil mapping studies. Sampling sites at each study area were selected using conditioned Latin hypercube sampling (cLHS). We compared models that had been used in other DSM studies, including clustering algorithms, discriminant analysis, multinomial logistic regression, neural networks, tree based methods, and support vector machine classifiers. Tested machine learning models were divided into three groups based on model complexity: simple, moderate, and complex. We also compared environmental covariates derived from digital elevation models and Landsat imagery that were divided into three different sets: 1) covariates selected a priori by soil scientists familiar with each area and used as input into cLHS, 2) the covariates in set 1 plus 113 additional covariates, and 3) covariates selected using recursive feature elimination. Overall, complex models were consistently more accurate than simple or moderately complex models. Random forests (RF) using covariates selected via recursive feature elimination was consistently the most accurate, or was among the most accurate, classifiers between study areas and between covariate sets within each study area. We recommend that for soil taxonomic class prediction, complex models and covariates selected by recursive feature elimination be used. Overall classification accuracy in each study area was largely dependent upon the number of soil taxonomic classes and the frequency distribution of pedon observations between taxonomic classes. Individual subgroup class accuracy was generally dependent upon the number of soil pedon observations in each taxonomic class. The number of soil classes is related to the inherent variability of a given area. The imbalance of soil pedon observations between classes is likely related to cLHS. Imbalanced frequency distributions of soil pedon observations between classes must be addressed to improve model accuracy. Solutions include increasing the number of soil pedon observations in classes with few observations or decreasing the number of classes. Spatial predictions using the most accurate models generally agree with expected soil–landscape relationships. Spatial prediction uncertainty was lowest in areas of relatively low relief for each study area.
Use of IT platform in determination of efficiency of mining machines
NASA Astrophysics Data System (ADS)
Brodny, Jarosław; Tutak, Magdalena
2018-01-01
Determination of effective use of mining devices has very significant meaning for mining enterprises. High costs of their purchase and tenancy cause that these enterprises tend to the best use of possessed technical potential. However, specifics of mining production causes that this process not always proceeds without interferences. Practical experiences show that determination of objective measure of utilization of machine in mining enterprise is not simple. In the paper a proposition for solution of this problem is presented. For this purpose an IT platform and overall efficiency model OEE were used. This model enables to evaluate the machine in a range of its availability performance and quality of product, and constitutes a quantitative tool of TPM strategy. Adapted to the specificity of mining branch the OEE model together with acquired data from industrial automatic system enabled to determine the partial indicators and overall efficiency of tested machines. Studies were performed for a set of machines directly use in coal exploitation process. They were: longwall-shearer and armoured face conveyor, and beam stage loader. Obtained results clearly indicate that degree of use of machines by mining enterprises are unsatisfactory. Use of IT platforms will significantly facilitate the process of registration, archiving and analytical processing of the acquired data. In the paper there is presented methodology of determination of partial indices and total OEE together with a practical example of its application for investigated machines set. Also IT platform was characterized for its construction, function and application.
The Time Machine: Writing Historical Fiction.
ERIC Educational Resources Information Center
Karr, Kathleen
2000-01-01
Discusses historical fiction for children and young adults from a writer's point of view and equates it to a time machine into the past. Considers the books that influenced the writer; larger-than-life characters; story ideas; and research to really know and feel the setting. (LRW)
NASA Technical Reports Server (NTRS)
Hays, Dan
1987-01-01
Applications of linguistic principles to potential problems of human and machine communication in space settings are discussed. Variations in language among speakers of different backgrounds and change in language forms resulting from new experiences or reduced contact with other groups need to be considered in the design of intelligent machine systems.
NASA Astrophysics Data System (ADS)
Liu, Shuang; Liu, Fei; Hu, Shaohua; Yin, Zhenbiao
The major power information of the main transmission system in machine tools (MTSMT) during machining process includes effective output power (i.e. cutting power), input power and power loss from the mechanical transmission system, and the main motor power loss. These information are easy to obtain in the lab but difficult to evaluate in a manufacturing process. To solve this problem, a separation method is proposed here to extract the MTSMT power information during machining process. In this method, the energy flow and the mathematical models of major power information of MTSMT during the machining process are set up first. Based on the mathematical models and the basic data tables obtained from experiments, the above mentioned power information during machining process can be separated just by measuring the real time total input power of the spindle motor. The operation program of this method is also given.
Allocating dissipation across a molecular machine cycle to maximize flux
Brown, Aidan I.; Sivak, David A.
2017-01-01
Biomolecular machines consume free energy to break symmetry and make directed progress. Nonequilibrium ATP concentrations are the typical free energy source, with one cycle of a molecular machine consuming a certain number of ATP, providing a fixed free energy budget. Since evolution is expected to favor rapid-turnover machines that operate efficiently, we investigate how this free energy budget can be allocated to maximize flux. Unconstrained optimization eliminates intermediate metastable states, indicating that flux is enhanced in molecular machines with fewer states. When maintaining a set number of states, we show that—in contrast to previous findings—the flux-maximizing allocation of dissipation is not even. This result is consistent with the coexistence of both “irreversible” and reversible transitions in molecular machine models that successfully describe experimental data, which suggests that, in evolved machines, different transitions differ significantly in their dissipation. PMID:29073016
Belekar, Vilas; Lingineni, Karthik; Garg, Prabha
2015-01-01
The breast cancer resistant protein (BCRP) is an important transporter and its inhibitors play an important role in cancer treatment by improving the oral bioavailability as well as blood brain barrier (BBB) permeability of anticancer drugs. In this work, a computational model was developed to predict the compounds as BCRP inhibitors or non-inhibitors. Various machine learning approaches like, support vector machine (SVM), k-nearest neighbor (k-NN) and artificial neural network (ANN) were used to develop the models. The Matthews correlation coefficients (MCC) of developed models using ANN, k-NN and SVM are 0.67, 0.71 and 0.77, and prediction accuracies are 85.2%, 88.3% and 90.8% respectively. The developed models were tested with a test set of 99 compounds and further validated with external set of 98 compounds. Distribution plot analysis and various machine learning models were also developed based on druglikeness descriptors. Applicability domain is used to check the prediction reliability of the new molecules.
1989-04-20
International Business Machines Corporation) IBM Development System for the Ada Language, VN11/CMS Ada Compiler, Version 2.1.1, Wright-Patterson AFB, IBM 3083...890420W1.10073 International Business Machines Corporation IBM Development System for the Ada Language VM/CMS Ada Compiler Version 2.1.1 IBM 3083... International Business Machines Corporation and reviewed by the validation team. The compiler was tested using all default option settings except for the
2015-09-28
the performance of log-and- replay can degrade significantly for VMs configured with multiple virtual CPUs, since the shared memory communication...whether based on checkpoint replication or log-and- replay , existing HA ap- proaches use in- memory backups. The backup VM sits in the memory of a...efficiently. 15. SUBJECT TERMS High-availability virtual machines, live migration, memory and traffic overheads, application suspension, Java
LeMoyne, Robert; Tomycz, Nestor; Mastroianni, Timothy; McCandless, Cyrus; Cozza, Michael; Peduto, David
2015-01-01
Essential tremor (ET) is a highly prevalent movement disorder. Patients with ET exhibit a complex progressive and disabling tremor, and medical management often fails. Deep brain stimulation (DBS) has been successfully applied to this disorder, however there has been no quantifiable way to measure tremor severity or treatment efficacy in this patient population. The quantified amelioration of kinetic tremor via DBS is herein demonstrated through the application of a smartphone (iPhone) as a wireless accelerometer platform. The recorded acceleration signal can be obtained at a setting of the subject's convenience and conveyed by wireless transmission through the Internet for post-processing anywhere in the world. Further post-processing of the acceleration signal can be classified through a machine learning application, such as the support vector machine. Preliminary application of deep brain stimulation with a smartphone for acquisition of a feature set and machine learning for classification has been successfully applied. The support vector machine achieved 100% classification between deep brain stimulation in `on' and `off' mode based on the recording of an accelerometer signal through a smartphone as a wireless accelerometer platform.
Proposed algorithm to improve job shop production scheduling using ant colony optimization method
NASA Astrophysics Data System (ADS)
Pakpahan, Eka KA; Kristina, Sonna; Setiawan, Ari
2017-12-01
This paper deals with the determination of job shop production schedule on an automatic environment. On this particular environment, machines and material handling system are integrated and controlled by a computer center where schedule were created and then used to dictate the movement of parts and the operations at each machine. This setting is usually designed to have an unmanned production process for a specified interval time. We consider here parts with various operations requirement. Each operation requires specific cutting tools. These parts are to be scheduled on machines each having identical capability, meaning that each machine is equipped with a similar set of cutting tools therefore is capable of processing any operation. The availability of a particular machine to process a particular operation is determined by the remaining life time of its cutting tools. We proposed an algorithm based on the ant colony optimization method and embedded them on matlab software to generate production schedule which minimize the total processing time of the parts (makespan). We test the algorithm on data provided by real industry and the process shows a very short computation time. This contributes a lot to the flexibility and timelines targeted on an automatic environment.
Fernandez, Michael; Boyd, Peter G; Daff, Thomas D; Aghaji, Mohammad Zein; Woo, Tom K
2014-09-04
In this work, we have developed quantitative structure-property relationship (QSPR) models using advanced machine learning algorithms that can rapidly and accurately recognize high-performing metal organic framework (MOF) materials for CO2 capture. More specifically, QSPR classifiers have been developed that can, in a fraction of a section, identify candidate MOFs with enhanced CO2 adsorption capacity (>1 mmol/g at 0.15 bar and >4 mmol/g at 1 bar). The models were tested on a large set of 292 050 MOFs that were not part of the training set. The QSPR classifier could recover 945 of the top 1000 MOFs in the test set while flagging only 10% of the whole library for compute intensive screening. Thus, using the machine learning classifiers as part of a high-throughput screening protocol would result in an order of magnitude reduction in compute time and allow intractably large structure libraries and search spaces to be screened.
Ductile and brittle transition behavior of titanium alloys in ultra-precision machining.
Yip, W S; To, S
2018-03-02
Titanium alloys are extensively applied in biomedical industries due to their excellent material properties. However, they are recognized as difficult to cut materials due to their low thermal conductivity, which induces a complexity to their deformation mechanisms and restricts precise productions. This paper presents a new observation about the removal regime of titanium alloys. The experimental results, including the chip formation, thrust force signal and surface profile, showed that there was a critical cutting distance to achieve better surface integrity of machined surface. The machined areas with better surface roughness were located before the clear transition point, defining as the ductile to brittle transition. The machined area at the brittle region displayed the fracture deformation which showed cracks on the surface edge. The relationship between depth of cut and the ductile to brittle transaction behavior of titanium alloys in ultra-precision machining(UPM) was also revealed in this study, it showed that the ductile to brittle transaction behavior of titanium alloys occurred mainly at relatively small depth of cut. The study firstly defines the ductile to brittle transition behavior of titanium alloys in UPM, contributing the information of ductile machining as an optimal machining condition for precise productions of titanium alloys.
Li, Ning; Cao, Chao; Wang, Cong
2017-06-15
Supporting simultaneous access of machine-type devices is a critical challenge in machine-to-machine (M2M) communications. In this paper, we propose an optimal scheme to dynamically adjust the Access Class Barring (ACB) factor and the number of random access channel (RACH) resources for clustered machine-to-machine (M2M) communications, in which Delay-Sensitive (DS) devices coexist with Delay-Tolerant (DT) ones. In M2M communications, since delay-sensitive devices share random access resources with delay-tolerant devices, reducing the resources consumed by delay-sensitive devices means that there will be more resources available to delay-tolerant ones. Our goal is to optimize the random access scheme, which can not only satisfy the requirements of delay-sensitive devices, but also take the communication quality of delay-tolerant ones into consideration. We discuss this problem from the perspective of delay-sensitive services by adjusting the resource allocation and ACB scheme for these devices dynamically. Simulation results show that our proposed scheme realizes good performance in satisfying the delay-sensitive services as well as increasing the utilization rate of the random access resources allocated to them.
NASA Astrophysics Data System (ADS)
Land, Walker H., Jr.; Anderson, Frances; Smith, Tom; Fahlbusch, Stephen; Choma, Robert; Wong, Lut
2005-04-01
Achieving consistent and correct database cases is crucial to the correct evaluation of any computer-assisted diagnostic (CAD) paradigm. This paper describes the application of artificial intelligence (AI), knowledge engineering (KE) and knowledge representation (KR) to a data set of ~2500 cases from six separate hospitals, with the objective of removing/reducing inconsistent outlier data. Several support vector machine (SVM) kernels were used to measure diagnostic performance of the original and a "cleaned" data set. Specifically, KE and ER principles were applied to the two data sets which were re-examined with respect to the environment and agents. One data set was found to contain 25 non-characterizable sets. The other data set contained 180 non-characterizable sets. CAD system performance was measured with both the original and "cleaned" data sets using two SVM kernels as well as a multivariate probabilistic neural network (PNN). Results demonstrated: (i) a 10% average improvement in overall Az and (ii) approximately a 50% average improvement in partial Az.
Nakanishi, Rine; Sankaran, Sethuraman; Grady, Leo; Malpeso, Jenifer; Yousfi, Razik; Osawa, Kazuhiro; Ceponiene, Indre; Nazarat, Negin; Rahmani, Sina; Kissel, Kendall; Jayawardena, Eranthi; Dailing, Christopher; Zarins, Christopher; Koo, Bon-Kwon; Min, James K; Taylor, Charles A; Budoff, Matthew J
2018-03-23
Our goal was to evaluate the efficacy of a fully automated method for assessing the image quality (IQ) of coronary computed tomography angiography (CCTA). The machine learning method was trained using 75 CCTA studies by mapping features (noise, contrast, misregistration scores, and un-interpretability index) to an IQ score based on manual ground truth data. The automated method was validated on a set of 50 CCTA studies and subsequently tested on a new set of 172 CCTA studies against visual IQ scores on a 5-point Likert scale. The area under the curve in the validation set was 0.96. In the 172 CCTA studies, our method yielded a Cohen's kappa statistic for the agreement between automated and visual IQ assessment of 0.67 (p < 0.01). In the group where good to excellent (n = 163), fair (n = 6), and poor visual IQ scores (n = 3) were graded, 155, 5, and 2 of the patients received an automated IQ score > 50 %, respectively. Fully automated assessment of the IQ of CCTA data sets by machine learning was reproducible and provided similar results compared with visual analysis within the limits of inter-operator variability. • The proposed method enables automated and reproducible image quality assessment. • Machine learning and visual assessments yielded comparable estimates of image quality. • Automated assessment potentially allows for more standardised image quality. • Image quality assessment enables standardization of clinical trial results across different datasets.
Support Vector Machine-Based Endmember Extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filippi, Anthony M; Archibald, Richard K
Introduced in this paper is the utilization of Support Vector Machines (SVMs) to automatically perform endmember extraction from hyperspectral data. The strengths of SVM are exploited to provide a fast and accurate calculated representation of high-dimensional data sets that may consist of multiple distributions. Once this representation is computed, the number of distributions can be determined without prior knowledge. For each distribution, an optimal transform can be determined that preserves informational content while reducing the data dimensionality, and hence, the computational cost. Finally, endmember extraction for the whole data set is accomplished. Results indicate that this Support Vector Machine-Based Endmembermore » Extraction (SVM-BEE) algorithm has the capability of autonomously determining endmembers from multiple clusters with computational speed and accuracy, while maintaining a robust tolerance to noise.« less
An evaluation of open set recognition for FLIR images
NASA Astrophysics Data System (ADS)
Scherreik, Matthew; Rigling, Brian
2015-05-01
Typical supervised classification algorithms label inputs according to what was learned in a training phase. Thus, test inputs that were not seen in training are always given incorrect labels. Open set recognition algorithms address this issue by accounting for inputs that are not present in training and providing the classifier with an option to reject" unknown samples. A number of such techniques have been developed in the literature, many of which are based on support vector machines (SVMs). One approach, the 1-vs-set machine, constructs a slab" in feature space using the SVM hyperplane. Inputs falling on one side of the slab or within the slab belong to a training class, while inputs falling on the far side of the slab are rejected. We note that rejection of unknown inputs can be achieved by thresholding class posterior probabilities. Another recently developed approach, the Probabilistic Open Set SVM (POS-SVM), empirically determines good probability thresholds. We apply the 1-vs-set machine, POS-SVM, and closed set SVMs to FLIR images taken from the Comanche SIG dataset. Vehicles in the dataset are divided into three general classes: wheeled, armored personnel carrier (APC), and tank. For each class, a coarse pose estimate (front, rear, left, right) is taken. In a closed set sense, we analyze these algorithms for prediction of vehicle class and pose. To test open set performance, one or more vehicle classes are held out from training. By considering closed and open set performance separately, we may closely analyze both inter-class discrimination and threshold effectiveness.
Yugandhar, K; Gromiha, M Michael
2014-09-01
Protein-protein interactions are intrinsic to virtually every cellular process. Predicting the binding affinity of protein-protein complexes is one of the challenging problems in computational and molecular biology. In this work, we related sequence features of protein-protein complexes with their binding affinities using machine learning approaches. We set up a database of 185 protein-protein complexes for which the interacting pairs are heterodimers and their experimental binding affinities are available. On the other hand, we have developed a set of 610 features from the sequences of protein complexes and utilized Ranker search method, which is the combination of Attribute evaluator and Ranker method for selecting specific features. We have analyzed several machine learning algorithms to discriminate protein-protein complexes into high and low affinity groups based on their Kd values. Our results showed a 10-fold cross-validation accuracy of 76.1% with the combination of nine features using support vector machines. Further, we observed accuracy of 83.3% on an independent test set of 30 complexes. We suggest that our method would serve as an effective tool for identifying the interacting partners in protein-protein interaction networks and human-pathogen interactions based on the strength of interactions. © 2014 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Lilienfeld, O. Anatole; Ramakrishnan, Raghunathan; Rupp, Matthias
We introduce a fingerprint representation of molecules based on a Fourier series of atomic radial distribution functions. This fingerprint is unique (except for chirality), continuous, and differentiable with respect to atomic coordinates and nuclear charges. It is invariant with respect to translation, rotation, and nuclear permutation, and requires no preconceived knowledge about chemical bonding, topology, or electronic orbitals. As such, it meets many important criteria for a good molecular representation, suggesting its usefulness for machine learning models of molecular properties trained across chemical compound space. To assess the performance of this new descriptor, we have trained machine learning models ofmore » molecular enthalpies of atomization for training sets with up to 10 k organic molecules, drawn at random from a published set of 134 k organic molecules with an average atomization enthalpy of over 1770 kcal/mol. We validate the descriptor on all remaining molecules of the 134 k set. For a training set of 10 k molecules, the fingerprint descriptor achieves a mean absolute error of 8.0 kcal/mol. This is slightly worse than the performance attained using the Coulomb matrix, another popular alternative, reaching 6.2 kcal/mol for the same training and test sets. (c) 2015 Wiley Periodicals, Inc.« less
Go, Taesik; Byeon, Hyeokjun; Lee, Sang Joon
2018-04-30
Cell types of erythrocytes should be identified because they are closely related to their functionality and viability. Conventional methods for classifying erythrocytes are time consuming and labor intensive. Therefore, an automatic and accurate erythrocyte classification system is indispensable in healthcare and biomedical fields. In this study, we proposed a new label-free sensor for automatic identification of erythrocyte cell types using a digital in-line holographic microscopy (DIHM) combined with machine learning algorithms. A total of 12 features, including information on intensity distributions, morphological descriptors, and optical focusing characteristics, is quantitatively obtained from numerically reconstructed holographic images. All individual features for discocytes, echinocytes, and spherocytes are statistically different. To improve the performance of cell type identification, we adopted several machine learning algorithms, such as decision tree model, support vector machine, linear discriminant classification, and k-nearest neighbor classification. With the aid of these machine learning algorithms, the extracted features are effectively utilized to distinguish erythrocytes. Among the four tested algorithms, the decision tree model exhibits the best identification performance for the training sets (n = 440, 98.18%) and test sets (n = 190, 97.37%). This proposed methodology, which smartly combined DIHM and machine learning, would be helpful for sensing abnormal erythrocytes and computer-aided diagnosis of hematological diseases in clinic. Copyright © 2017 Elsevier B.V. All rights reserved.
Efficient production by laser materials processing integrated into metal cutting machines
NASA Astrophysics Data System (ADS)
Wiedmaier, M.; Meiners, E.; Dausinger, Friedrich; Huegel, Helmut
1994-09-01
Beam guidance of high power YAG-laser (cw, pulsed, Q-switched) with average powers up to 2000 W by flexible glass fibers facilitates the integration of the laser beam as an additional tool into metal cutting machines. Hence, technologies like laser cutting, joining, hardening, caving, structuring of surfaces and laser-marking can be applied directly inside machining centers in one setting, thereby reducing the flow of workpieces resulting in a lowering of costs and production time. Furthermore, materials with restricted machinability--especially hard materials like ceramics, hard metals or sintered alloys--can be shaped by laser-caving or laser assisted machining. Altogether, the flexibility of laser integrated machining centers is substantially increased or the efficiency of a production line is raised by time-savings or extended feasibilities with techniques like hardening, welding or caving.
Tomography and generative training with quantum Boltzmann machines
NASA Astrophysics Data System (ADS)
Kieferová, Mária; Wiebe, Nathan
2017-12-01
The promise of quantum neural nets, which utilize quantum effects to model complex data sets, has made their development an aspirational goal for quantum machine learning and quantum computing in general. Here we provide methods of training quantum Boltzmann machines. Our work generalizes existing methods and provides additional approaches for training quantum neural networks that compare favorably to existing methods. We further demonstrate that quantum Boltzmann machines enable a form of partial quantum state tomography that further provides a generative model for the input quantum state. Classical Boltzmann machines are incapable of this. This verifies the long-conjectured connection between tomography and quantum machine learning. Finally, we prove that classical computers cannot simulate our training process in general unless BQP=BPP , provide lower bounds on the complexity of the training procedures and numerically investigate training for small nonstoquastic Hamiltonians.
Fuzzy logic controller optimization
Sepe, Jr., Raymond B; Miller, John Michael
2004-03-23
A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
NASA Astrophysics Data System (ADS)
Sivarami Reddy, N.; Ramamurthy, D. V., Dr.; Prahlada Rao, K., Dr.
2017-08-01
This article addresses simultaneous scheduling of machines, AGVs and tools where machines are allowed to share the tools considering transfer times of jobs and tools between machines, to generate best optimal sequences that minimize makespan in a multi-machine Flexible Manufacturing System (FMS). Performance of FMS is expected to improve by effective utilization of its resources, by proper integration and synchronization of their scheduling. Symbiotic Organisms Search (SOS) algorithm is a potent tool which is a better alternative for solving optimization problems like scheduling and proven itself. The proposed SOS algorithm is tested on 22 job sets with makespan as objective for scheduling of machines and tools where machines are allowed to share tools without considering transfer times of jobs and tools and the results are compared with the results of existing methods. The results show that the SOS has outperformed. The same SOS algorithm is used for simultaneous scheduling of machines, AGVs and tools where machines are allowed to share tools considering transfer times of jobs and tools to determine the best optimal sequences that minimize makespan.
NASA Astrophysics Data System (ADS)
Tellaeche, A.; Arana, R.; Ibarguren, A.; Martínez-Otzeta, J. M.
The exhaustive quality control is becoming very important in the world's globalized market. One of these examples where quality control becomes critical is the percussion cap mass production. These elements must achieve a minimum tolerance deviation in their fabrication. This paper outlines a machine vision development using a 3D camera for the inspection of the whole production of percussion caps. This system presents multiple problems, such as metallic reflections in the percussion caps, high speed movement of the system and mechanical errors and irregularities in percussion cap placement. Due to these problems, it is impossible to solve the problem by traditional image processing methods, and hence, machine learning algorithms have been tested to provide a feasible classification of the possible errors present in the percussion caps.
Manuyakorn, Wiparat; Padungpak, Savitree; Luecha, Orawin; Kamchaisatian, Wasu; Sasisakulporn, Cherapat; Vilaiyuk, Soamarat; Monyakul, Veerapol; Benjaponpitak, Suwat
2015-06-01
House dust mite avoidance is advised in dust mite sensitized patients to decrease the risk to develop allergic symptoms. Maintaining a relative humidity (RH) of less than 50% in households is recommended to prevent dust mite proliferation. To investigate the efficacy of a novel temperature and humidity machine to control the level of dust mite allergens and total nasal symptom score (TNSS) in dust mite sensitized allergic rhinitis children. Children (8-15 years) with dust mite sensitized persistent allergic rhinitis (AR) were enrolled. The temperature and humidity control machine was installed in the bedroom where the enrolled children stayed for 6 months. TNSS was assessed before and every month after machine set up and the level of dust mite allergen (Der p 1 and Der f 1) from the mattress were measured before and every 2 months after machine set up using enzyme-linked immunosorbent assay (ELISA). A total of 7 children were enrolled. Noticeable reduction of Der f 1 was observed as early as 2 months after installing the machine, but proper significant differences appeared 4 months after and remained low until the end of the experiment (p <0.05). Although no correlation was observed between TNSS and the level of dust mite allergens, there was a significant reduction in TNSS at 2 and 4 months (p <0.05) and 70% of the patients were able to stop using their intranasal corticosteroids by the end of the experiment. The level of house dust mite in mattresses was significantly reduced after using the temperature and humidity control machine. This machine may be used as an effective tool to control clinical symptoms of dust mite sensitized AR children.
NASA Astrophysics Data System (ADS)
Paradis, Daniel; Lefebvre, René; Gloaguen, Erwan; Rivera, Alfonso
2015-01-01
The spatial heterogeneity of hydraulic conductivity (K) exerts a major control on groundwater flow and solute transport. The heterogeneous spatial distribution of K can be imaged using indirect geophysical data as long as reliable relations exist to link geophysical data to K. This paper presents a nonparametric learning machine approach to predict aquifer K from cone penetrometer tests (CPT) coupled with a soil moisture and resistivity probe (SMR) using relevance vector machines (RVMs). The learning machine approach is demonstrated with an application to a heterogeneous unconsolidated littoral aquifer in a 12 km2 subwatershed, where relations between K and multiparameters CPT/SMR soundings appear complex. Our approach involved fuzzy clustering to define hydrofacies (HF) on the basis of CPT/SMR and K data prior to the training of RVMs for HFs recognition and K prediction on the basis of CPT/SMR data alone. The learning machine was built from a colocated training data set representative of the study area that includes K data from slug tests and CPT/SMR data up-scaled at a common vertical resolution of 15 cm with K data. After training, the predictive capabilities of the learning machine were assessed through cross validation with data withheld from the training data set and with K data from flowmeter tests not used during the training process. Results show that HF and K predictions from the learning machine are consistent with hydraulic tests. The combined use of CPT/SMR data and RVM-based learning machine proved to be powerful and efficient for the characterization of high-resolution K heterogeneity for unconsolidated aquifers.
Guided Text Search Using Adaptive Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A; Symons, Christopher T; Senter, James K
This research demonstrates the promise of augmenting interactive visualizations with semi- supervised machine learning techniques to improve the discovery of significant associations and insights in the search and analysis of textual information. More specifically, we have developed a system called Gryffin that hosts a unique collection of techniques that facilitate individualized investigative search pertaining to an ever-changing set of analytical questions over an indexed collection of open-source documents related to critical national infrastructure. The Gryffin client hosts dynamic displays of the search results via focus+context record listings, temporal timelines, term-frequency views, and multiple coordinate views. Furthermore, as the analyst interactsmore » with the display, the interactions are recorded and used to label the search records. These labeled records are then used to drive semi-supervised machine learning algorithms that re-rank the unlabeled search records such that potentially relevant records are moved to the top of the record listing. Gryffin is described in the context of the daily tasks encountered at the US Department of Homeland Security s Fusion Center, with whom we are collaborating in its development. The resulting system is capable of addressing the analysts information overload that can be directly attributed to the deluge of information that must be addressed in the search and investigative analysis of textual information.« less
Multivariate Analysis and Machine Learning in Cerebral Palsy Research
Zhang, Jing
2017-01-01
Cerebral palsy (CP), a common pediatric movement disorder, causes the most severe physical disability in children. Early diagnosis in high-risk infants is critical for early intervention and possible early recovery. In recent years, multivariate analytic and machine learning (ML) approaches have been increasingly used in CP research. This paper aims to identify such multivariate studies and provide an overview of this relatively young field. Studies reviewed in this paper have demonstrated that multivariate analytic methods are useful in identification of risk factors, detection of CP, movement assessment for CP prediction, and outcome assessment, and ML approaches have made it possible to automatically identify movement impairments in high-risk infants. In addition, outcome predictors for surgical treatments have been identified by multivariate outcome studies. To make the multivariate and ML approaches useful in clinical settings, further research with large samples is needed to verify and improve these multivariate methods in risk factor identification, CP detection, movement assessment, and outcome evaluation or prediction. As multivariate analysis, ML and data processing technologies advance in the era of Big Data of this century, it is expected that multivariate analysis and ML will play a bigger role in improving the diagnosis and treatment of CP to reduce mortality and morbidity rates, and enhance patient care for children with CP. PMID:29312134
Multivariate Analysis and Machine Learning in Cerebral Palsy Research.
Zhang, Jing
2017-01-01
Cerebral palsy (CP), a common pediatric movement disorder, causes the most severe physical disability in children. Early diagnosis in high-risk infants is critical for early intervention and possible early recovery. In recent years, multivariate analytic and machine learning (ML) approaches have been increasingly used in CP research. This paper aims to identify such multivariate studies and provide an overview of this relatively young field. Studies reviewed in this paper have demonstrated that multivariate analytic methods are useful in identification of risk factors, detection of CP, movement assessment for CP prediction, and outcome assessment, and ML approaches have made it possible to automatically identify movement impairments in high-risk infants. In addition, outcome predictors for surgical treatments have been identified by multivariate outcome studies. To make the multivariate and ML approaches useful in clinical settings, further research with large samples is needed to verify and improve these multivariate methods in risk factor identification, CP detection, movement assessment, and outcome evaluation or prediction. As multivariate analysis, ML and data processing technologies advance in the era of Big Data of this century, it is expected that multivariate analysis and ML will play a bigger role in improving the diagnosis and treatment of CP to reduce mortality and morbidity rates, and enhance patient care for children with CP.
Machine-learning-based Brokers for Real-time Classification of the LSST Alert Stream
NASA Astrophysics Data System (ADS)
Narayan, Gautham; Zaidi, Tayeb; Soraisam, Monika D.; Wang, Zhe; Lochner, Michelle; Matheson, Thomas; Saha, Abhijit; Yang, Shuo; Zhao, Zhenge; Kececioglu, John; Scheidegger, Carlos; Snodgrass, Richard T.; Axelrod, Tim; Jenness, Tim; Maier, Robert S.; Ridgway, Stephen T.; Seaman, Robert L.; Evans, Eric Michael; Singh, Navdeep; Taylor, Clark; Toeniskoetter, Jackson; Welch, Eric; Zhu, Songzhe; The ANTARES Collaboration
2018-05-01
The unprecedented volume and rate of transient events that will be discovered by the Large Synoptic Survey Telescope (LSST) demand that the astronomical community update its follow-up paradigm. Alert-brokers—automated software system to sift through, characterize, annotate, and prioritize events for follow-up—will be critical tools for managing alert streams in the LSST era. The Arizona-NOAO Temporal Analysis and Response to Events System (ANTARES) is one such broker. In this work, we develop a machine learning pipeline to characterize and classify variable and transient sources only using the available multiband optical photometry. We describe three illustrative stages of the pipeline, serving the three goals of early, intermediate, and retrospective classification of alerts. The first takes the form of variable versus transient categorization, the second a multiclass typing of the combined variable and transient data set, and the third a purity-driven subtyping of a transient class. Although several similar algorithms have proven themselves in simulations, we validate their performance on real observations for the first time. We quantitatively evaluate our pipeline on sparse, unevenly sampled, heteroskedastic data from various existing observational campaigns, and demonstrate very competitive classification performance. We describe our progress toward adapting the pipeline developed in this work into a real-time broker working on live alert streams from time-domain surveys.
Promises of Machine Learning Approaches in Prediction of Absorption of Compounds.
Kumar, Rajnish; Sharma, Anju; Siddiqui, Mohammed Haris; Tiwari, Rajesh Kumar
2018-01-01
The Machine Learning (ML) is one of the fastest developing techniques in the prediction and evaluation of important pharmacokinetic properties such as absorption, distribution, metabolism and excretion. The availability of a large number of robust validation techniques for prediction models devoted to pharmacokinetics has significantly enhanced the trust and authenticity in ML approaches. There is a series of prediction models generated and used for rapid screening of compounds on the basis of absorption in last one decade. Prediction of absorption of compounds using ML models has great potential across the pharmaceutical industry as a non-animal alternative to predict absorption. However, these prediction models still have to go far ahead to develop the confidence similar to conventional experimental methods for estimation of drug absorption. Some of the general concerns are selection of appropriate ML methods and validation techniques in addition to selecting relevant descriptors and authentic data sets for the generation of prediction models. The current review explores published models of ML for the prediction of absorption using physicochemical properties as descriptors and their important conclusions. In addition, some critical challenges in acceptance of ML models for absorption are also discussed. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Man-Machine Integrated Design and Analysis System (MIDAS): Functional Overview
NASA Technical Reports Server (NTRS)
Corker, Kevin; Neukom, Christian
1998-01-01
Included in the series of screen print-outs illustrates the structure and function of the Man-Machine Integrated Design and Analysis System (MIDAS). Views into the use of the system and editors are featured. The use-case in this set of graphs includes the development of a simulation scenario.
ERIC Educational Resources Information Center
Huang, Yifen
2010-01-01
Mixed-initiative clustering is a task where a user and a machine work collaboratively to analyze a large set of documents. We hypothesize that a user and a machine can both learn better clustering models through enriched communication and interactive learning from each other. The first contribution or this thesis is providing a framework of…
40 CFR 60.185 - Monitoring of operations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Primary Lead... reverberatory furnace, or sintering machine discharge end. The span of this system shall be set at 80 to 100... discharged into the atmosphere from any sintering machine, electric furnace or converter subject to § 60.183...
T-wave end detection using neural networks and Support Vector Machines.
Suárez-León, Alexander Alexeis; Varon, Carolina; Willems, Rik; Van Huffel, Sabine; Vázquez-Seisdedos, Carlos Román
2018-05-01
In this paper we propose a new approach for detecting the end of the T-wave in the electrocardiogram (ECG) using Neural Networks and Support Vector Machines. Both, Multilayer Perceptron (MLP) neural networks and Fixed-Size Least-Squares Support Vector Machines (FS-LSSVM) were used as regression algorithms to determine the end of the T-wave. Different strategies for selecting the training set such as random selection, k-means, robust clustering and maximum quadratic (Rényi) entropy were evaluated. Individual parameters were tuned for each method during training and the results are given for the evaluation set. A comparison between MLP and FS-LSSVM approaches was performed. Finally, a fair comparison of the FS-LSSVM method with other state-of-the-art algorithms for detecting the end of the T-wave was included. The experimental results show that FS-LSSVM approaches are more suitable as regression algorithms than MLP neural networks. Despite the small training sets used, the FS-LSSVM methods outperformed the state-of-the-art techniques. FS-LSSVM can be successfully used as a T-wave end detection algorithm in ECG even with small training set sizes. Copyright © 2018 Elsevier Ltd. All rights reserved.
A machine learning evaluation of an artificial immune system.
Glickman, Matthew; Balthrop, Justin; Forrest, Stephanie
2005-01-01
ARTIS is an artificial immune system framework which contains several adaptive mechanisms. LISYS is a version of ARTIS specialized for the problem of network intrusion detection. The adaptive mechanisms of LISYS are characterized in terms of their machine-learning counterparts, and a series of experiments is described, each of which isolates a different mechanism of LISYS and studies its contribution to the system's overall performance. The experiments were conducted on a new data set, which is more recent and realistic than earlier data sets. The network intrusion detection problem is challenging because it requires one-class learning in an on-line setting with concept drift. The experiments confirm earlier experimental results with LISYS, and they study in detail how LISYS achieves success on the new data set.
NASA Technical Reports Server (NTRS)
1992-01-01
Organon Teknika Corporation's REDY 2000 dialysis machine employs technology originally developed under NASA contract by Marquardt Corporation. The chemical process developed during the project could be applied to removing toxic waste from used dialysis fluid. This discovery led to the development of a kidney dialysis machine using "sorbent" dialysis, a method of removing urea from human blood by treating a dialysate solution. The process saves electricity and, because the need for a continuous water supply is eliminated, the patient has greater freedom.
Human factors issues for interstellar spacecraft
NASA Technical Reports Server (NTRS)
Cohen, Marc M.; Brody, Adam R.
1991-01-01
Developments in research on space human factors are reviewed in the context of a self-sustaining interstellar spacecraft based on the notion of traveling space settlements. Assumptions about interstellar travel are set forth addressing costs, mission durations, and the need for multigenerational space colonies. The model of human motivation by Maslow (1970) is examined and directly related to the design of space habitat architecture. Human-factors technology issues encompass the human-machine interface, crew selection and training, and the development of spaceship infrastructure during transtellar flight. A scenario for feasible instellar travel is based on a speed of 0.5c, a timeframe of about 100 yr, and an expandable multigenerational crew of about 100 members. Crew training is identified as a critical human-factors issue requiring the development of perceptual and cognitive aids such as expert systems and virtual reality.
Content addressable memory project
NASA Technical Reports Server (NTRS)
Hall, Josh; Levy, Saul; Smith, D.; Wei, S.; Miyake, K.; Murdocca, M.
1991-01-01
The progress on the Rutgers CAM (Content Addressable Memory) Project is described. The overall design of the system is completed at the architectural level and described. The machine is composed of two kinds of cells: (1) the CAM cells which include both memory and processor, and support local processing within each cell; and (2) the tree cells, which have smaller instruction set, and provide global processing over the CAM cells. A parameterized design of the basic CAM cell is completed. Progress was made on the final specification of the CPS. The machine architecture was driven by the design of algorithms whose requirements are reflected in the resulted instruction set(s). A few of these algorithms are described.
Generated spiral bevel gears: Optimal machine-tool settings and tooth contact analysis
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Tsung, W. J.; Coy, J. J.; Heine, C.
1985-01-01
Geometry and kinematic errors were studied for Gleason generated spiral bevel gears. A new method was devised for choosing optimal machine settings. These settings provide zero kinematic errors and an improved bearing contact. The kinematic errors are a major source of noise and vibration in spiral bevel gears. The improved bearing contact gives improved conditions for lubrication. A computer program for tooth contact analysis was developed, and thereby the new generation process was confirmed. The new process is governed by the requirement that during the generation process there is directional constancy of the common normal of the contacting surfaces for generator and generated surfaces of pinion and gear.
Beyond adaptive-critic creative learning for intelligent mobile robots
NASA Astrophysics Data System (ADS)
Liao, Xiaoqun; Cao, Ming; Hall, Ernest L.
2001-10-01
Intelligent industrial and mobile robots may be considered proven technology in structured environments. Teach programming and supervised learning methods permit solutions to a variety of applications. However, we believe that to extend the operation of these machines to more unstructured environments requires a new learning method. Both unsupervised learning and reinforcement learning are potential candidates for these new tasks. The adaptive critic method has been shown to provide useful approximations or even optimal control policies to non-linear systems. The purpose of this paper is to explore the use of new learning methods that goes beyond the adaptive critic method for unstructured environments. The adaptive critic is a form of reinforcement learning. A critic element provides only high level grading corrections to a cognition module that controls the action module. In the proposed system the critic's grades are modeled and forecasted, so that an anticipated set of sub-grades are available to the cognition model. The forecasting grades are interpolated and are available on the time scale needed by the action model. The success of the system is highly dependent on the accuracy of the forecasted grades and adaptability of the action module. Examples from the guidance of a mobile robot are provided to illustrate the method for simple line following and for the more complex navigation and control in an unstructured environment. The theory presented that is beyond the adaptive critic may be called creative theory. Creative theory is a form of learning that models the highest level of human learning - imagination. The application of the creative theory appears to not only be to mobile robots but also to many other forms of human endeavor such as educational learning and business forecasting. Reinforcement learning such as the adaptive critic may be applied to known problems to aid in the discovery of their solutions. The significance of creative theory is that it permits the discovery of the unknown problems, ones that are not yet recognized but may be critical to survival or success.
Setting analyst: A practical harvest planning technique
Olivier R.M. Halleux; W. Dale Greene
2001-01-01
Setting Analyst is an ArcView extension that facilitates practical harvest planning for ground-based systems. By modeling the travel patterns of ground-based machines, it compares different harvesting settings based on projected average skidding distance, logging costs, and site disturbance levels. Setting Analyst uses information commonly available to consulting...
Young, Sean D; Yu, Wenchao; Wang, Wei
2017-02-01
"Social big data" from technologies such as social media, wearable devices, and online searches continue to grow and can be used as tools for HIV research. Although researchers can uncover patterns and insights associated with HIV trends and transmission, the review process is time consuming and resource intensive. Machine learning methods derived from computer science might be used to assist HIV domain experts by learning how to rapidly and accurately identify patterns associated with HIV from a large set of social data. Using an existing social media data set that was associated with HIV and coded by an HIV domain expert, we tested whether 4 commonly used machine learning methods could learn the patterns associated with HIV risk behavior. We used the 10-fold cross-validation method to examine the speed and accuracy of these models in applying that knowledge to detect HIV content in social media data. Logistic regression and random forest resulted in the highest accuracy in detecting HIV-related social data (85.3%), whereas the Ridge Regression Classifier resulted in the lowest accuracy. Logistic regression yielded the fastest processing time (16.98 seconds). Machine learning can enable social big data to become a new and important tool in HIV research, helping to create a new field of "digital HIV epidemiology." If a domain expert can identify patterns in social data associated with HIV risk or HIV transmission, machine learning models could quickly and accurately learn those associations and identify potential HIV patterns in large social data sets.
Ma, Tao; Wang, Fen; Cheng, Jianjun; Yu, Yang; Chen, Xiaoyun
2016-01-01
The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and the types of abnormal attacks found. It also provides an effective tool of study and analysis of intrusion detection in large networks. PMID:27754380
Ma, Tao; Wang, Fen; Cheng, Jianjun; Yu, Yang; Chen, Xiaoyun
2016-10-13
The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and the types of abnormal attacks found. It also provides an effective tool of study and analysis of intrusion detection in large networks.
Speech recognition technology: an outlook for human-to-machine interaction.
Erdel, T; Crooks, S
2000-01-01
Speech recognition, as an enabling technology in healthcare-systems computing, is a topic that has been discussed for quite some time, but is just now coming to fruition. Traditionally, speech-recognition software has been constrained by hardware, but improved processors and increased memory capacities are starting to remove some of these limitations. With these barriers removed, companies that create software for the healthcare setting have the opportunity to write more successful applications. Among the criticisms of speech-recognition applications are the high rates of error and steep training curves. However, even in the face of such negative perceptions, there remains significant opportunities for speech recognition to allow healthcare providers and, more specifically, physicians, to work more efficiently and ultimately spend more time with their patients and less time completing necessary documentation. This article will identify opportunities for inclusion of speech-recognition technology in the healthcare setting and examine major categories of speech-recognition software--continuous speech recognition, command and control, and text-to-speech. We will discuss the advantages and disadvantages of each area, the limitations of the software today, and how future trends might affect them.
Design of off-statistics axial-flow fans by means of vortex law optimization
NASA Astrophysics Data System (ADS)
Lazari, Andrea; Cattanei, Andrea
2014-12-01
Off-statistics input data sets are common in axial-flow fans design and may easily result in some violation of the requirements of a good aerodynamic blade design. In order to circumvent this problem, in the present paper, a solution to the radial equilibrium equation is found which minimizes the outlet kinetic energy and fulfills the aerodynamic constraints, thus ensuring that the resulting blade has acceptable aerodynamic performance. The presented method is based on the optimization of a three-parameters vortex law and of the meridional channel size. The aerodynamic quantities to be employed as constraints are individuated and their suitable ranges of variation are proposed. The method is validated by means of a design with critical input data values and CFD analysis. Then, by means of systematic computations with different input data sets, some correlations and charts are obtained which are analogous to classic correlations based on statistical investigations on existing machines. Such new correlations help size a fan of given characteristics as well as study the feasibility of a given design.
Machining fixture layout optimization using particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Dou, Jianping; Wang, Xingsong; Wang, Lei
2011-05-01
Optimization of fixture layout (locator and clamp locations) is critical to reduce geometric error of the workpiece during machining process. In this paper, the application of particle swarm optimization (PSO) algorithm is presented to minimize the workpiece deformation in the machining region. A PSO based approach is developed to optimize fixture layout through integrating ANSYS parametric design language (APDL) of finite element analysis to compute the objective function for a given fixture layout. Particle library approach is used to decrease the total computation time. The computational experiment of 2D case shows that the numbers of function evaluations are decreased about 96%. Case study illustrates the effectiveness and efficiency of the PSO based optimization approach.
Implementation and performance of parallel Prolog interpreter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, S.; Kale, L.V.; Balkrishna, R.
1988-01-01
In this paper, the authors discuss the implementation of a parallel Prolog interpreter on different parallel machines. The implementation is based on the REDUCE--OR process model which exploits both AND and OR parallelism in logic programs. It is machine independent as it runs on top of the chare-kernel--a machine-independent parallel programming system. The authors also give the performance of the interpreter running a diverse set of benchmark pargrams on parallel machines including shared memory systems: an Alliant FX/8, Sequent and a MultiMax, and a non-shared memory systems: Intel iPSC/32 hypercube, in addition to its performance on a multiprocessor simulation system.
NASA Astrophysics Data System (ADS)
van Rheenen, Arthur D.; Taule, Petter; Thomassen, Jan Brede; Madsen, Eirik Blix
2018-04-01
We present Minimum-Resolvable Temperature Difference (MRTD) curves obtained by letting an ensemble of observers judge how many of the six four-bar patterns they can "see" in a set of images taken with different bar-to-background contrasts. The same images are analyzed using elemental signal analysis algorithms and machine-analysis based MRTD curves are obtained. We show that by adjusting the minimum required signal-to-noise ratio the machine-based MRTDs are very similar to the ones obtained with the help of the human observers.
NASA Technical Reports Server (NTRS)
Byman, J. E.
1985-01-01
A brief history of aircraft production techniques is given. A flexible machining cell is then described. It is a computer controlled system capable of performing 4-axis machining part cleaning, dimensional inspection and materials handling functions in an unmanned environment. The cell was designed to: allow processing of similar and dissimilar parts in random order without disrupting production; allow serial (one-shipset-at-a-time) manufacturing; reduce work-in-process inventory; maximize machine utilization through remote set-up; maximize throughput and minimize labor.
Simulation model of a single-stage lithium bromide-water absorption cooling unit
NASA Technical Reports Server (NTRS)
Miao, D.
1978-01-01
A computer model of a LiBr-H2O single-stage absorption machine was developed. The model, utilizing a given set of design data such as water-flow rates and inlet or outlet temperatures of these flow rates but without knowing the interior characteristics of the machine (heat transfer rates and surface areas), can be used to predict or simulate off-design performance. Results from 130 off-design cases for a given commercial machine agree with the published data within 2 percent.
2015-07-01
annex. iii Self-defense testing was limited to structural test firing from each machine gun mount and an ammunition resupply drill. Robust self...provided in the classified annex. Self- 8 defense testing was limited to structural test firing from each machine gun mount and a single...Caliber Machine Gun Mount Structural Test Fire November 2014 San Diego, Offshore Ship Weapons Range Operating Independently 9 Section Three
Building gene expression profile classifiers with a simple and efficient rejection option in R.
Benso, Alfredo; Di Carlo, Stefano; Politano, Gianfranco; Savino, Alessandro; Hafeezurrehman, Hafeez
2011-01-01
The collection of gene expression profiles from DNA microarrays and their analysis with pattern recognition algorithms is a powerful technology applied to several biological problems. Common pattern recognition systems classify samples assigning them to a set of known classes. However, in a clinical diagnostics setup, novel and unknown classes (new pathologies) may appear and one must be able to reject those samples that do not fit the trained model. The problem of implementing a rejection option in a multi-class classifier has not been widely addressed in the statistical literature. Gene expression profiles represent a critical case study since they suffer from the curse of dimensionality problem that negatively reflects on the reliability of both traditional rejection models and also more recent approaches such as one-class classifiers. This paper presents a set of empirical decision rules that can be used to implement a rejection option in a set of multi-class classifiers widely used for the analysis of gene expression profiles. In particular, we focus on the classifiers implemented in the R Language and Environment for Statistical Computing (R for short in the remaining of this paper). The main contribution of the proposed rules is their simplicity, which enables an easy integration with available data analysis environments. Since in the definition of a rejection model tuning of the involved parameters is often a complex and delicate task, in this paper we exploit an evolutionary strategy to automate this process. This allows the final user to maximize the rejection accuracy with minimum manual intervention. This paper shows how the use of simple decision rules can be used to help the use of complex machine learning algorithms in real experimental setups. The proposed approach is almost completely automated and therefore a good candidate for being integrated in data analysis flows in labs where the machine learning expertise required to tune traditional classifiers might not be available.
Improving the performance of extreme learning machine for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Li, Jiaojiao; Du, Qian; Li, Wei; Li, Yunsong
2015-05-01
Extreme learning machine (ELM) and kernel ELM (KELM) can offer comparable performance as the standard powerful classifier―support vector machine (SVM), but with much lower computational cost due to extremely simple training step. However, their performance may be sensitive to several parameters, such as the number of hidden neurons. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be easily estimated with two small training sets and extended to large training sets so as to greatly reduce computational cost. Other parameters, such as the steepness parameter in the sigmodal activation function and regularization parameter in the KELM, are also investigated. The experimental results show that classification performance is sensitive to these parameters; fortunately, simple selections will result in suboptimal performance.
Deep Restricted Kernel Machines Using Conjugate Feature Duality.
Suykens, Johan A K
2017-08-01
The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections and deep learning extensions as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKM are obtained by coupling the RKMs. The method is illustrated for deep RKM, consisting of three levels with a least squares support vector machine regression level and two kernel PCA levels. In its primal form also deep feedforward neural networks can be trained within this framework.
Nanoscale swimmers: hydrodynamic interactions and propulsion of molecular machines
NASA Astrophysics Data System (ADS)
Sakaue, T.; Kapral, R.; Mikhailov, A. S.
2010-06-01
Molecular machines execute nearly regular cyclic conformational changes as a result of ligand binding and product release. This cyclic conformational dynamics is generally non-reciprocal so that under time reversal a different sequence of machine conformations is visited. Since such changes occur in a solvent, coupling to solvent hydrodynamic modes will generally result in self-propulsion of the molecular machine. These effects are investigated for a class of coarse grained models of protein machines consisting of a set of beads interacting through pair-wise additive potentials. Hydrodynamic effects are incorporated through a configuration-dependent mobility tensor, and expressions for the propulsion linear and angular velocities, as well as the stall force, are obtained. In the limit where conformational changes are small so that linear response theory is applicable, it is shown that propulsion is exponentially small; thus, propulsion is nonlinear phenomenon. The results are illustrated by computations on a simple model molecular machine.
Machine safety: proper safeguarding techniques.
Martin, K J
1992-06-01
1. OSHA mandates certain safeguarding of machinery to prevent accidents and protect machine operators. OSHA specifies moving parts that must be guarded and sets criteria for the guards. 2. A 1989 OSHA standard for lockout/tagout requires locking the energy source during maintenance, periodically inspecting for power transmission, and training maintenance workers. 3. In an amputation emergency, first aid for cardiopulmonary resuscitation, shock, and bleeding are the first considerations. The amputated part should be wrapped in moist gauze, placed in a sealed plastic bag, and placed in a container of 50% water and 50% ice for transport. 4. The role of the occupational health nurse in machine safety is to conduct worksite analyses to identify proper safeguarding and to communicate deficiencies to appropriate personnel; to train workers in safe work practices and observe compliance in the use of machine guards; to provide care to workers injured by machines; and to reinforce safe work practices among machine operators.
Parameter optimization of electrochemical machining process using black hole algorithm
NASA Astrophysics Data System (ADS)
Singh, Dinesh; Shukla, Rajkamal
2017-12-01
Advanced machining processes are significant as higher accuracy in machined component is required in the manufacturing industries. Parameter optimization of machining processes gives optimum control to achieve the desired goals. In this paper, electrochemical machining (ECM) process is considered to evaluate the performance of the considered process using black hole algorithm (BHA). BHA considers the fundamental idea of a black hole theory and it has less operating parameters to tune. The two performance parameters, material removal rate (MRR) and overcut (OC) are considered separately to get optimum machining parameter settings using BHA. The variations of process parameters with respect to the performance parameters are reported for better and effective understanding of the considered process using single objective at a time. The results obtained using BHA are found better while compared with results of other metaheuristic algorithms, such as, genetic algorithm (GA), artificial bee colony (ABC) and bio-geography based optimization (BBO) attempted by previous researchers.
FPGA-based Upgrade to RITS-6 Control System, Designed with EMP Considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harold D. Anderson, John T. Williams
2009-07-01
The existing control system for the RITS-6, a 20-MA 3-MV pulsed-power accelerator located at Sandia National Laboratories, was built as a system of analog switches because the operators needed to be close enough to the machine to hear pulsed-power breakdowns, yet the electromagnetic pulse (EMP) emitted would disable any processor-based solutions. The resulting system requires operators to activate and deactivate a series of 110-V relays manually in a complex order. The machine is sensitive to both the order of operation and the time taken between steps. A mistake in either case would cause a misfire and possible machine damage. Basedmore » on these constraints, a field-programmable gate array (FPGA) was chosen as the core of a proposed upgrade to the control system. An FPGA is a series of logic elements connected during programming. Based on their connections, the elements can mimic primitive logic elements, a process called synthesis. The circuit is static; all paths exist simultaneously and do not depend on a processor. This should make it less sensitive to EMP. By shielding it and using good electromagnetic interference-reduction practices, it should continue to operate well in the electrically noisy environment. The FPGA has two advantages over the existing system. In manual operation mode, the synthesized logic gates keep the operators in sequence. In addition, a clock signal and synthesized countdown circuit provides an automated sequence, with adjustable delays, for quickly executing the time-critical portions of charging and firing. The FPGA is modeled as a set of states, each state being a unique set of values for the output signals. The state is determined by the input signals, and in the automated segment by the value of the synthesized countdown timer, with the default mode placing the system in a safe configuration. Unlike a processor-based system, any system stimulus that results in an abort situation immediately executes a shutdown, with only a tens-of-nanoseconds delay to propagate across the FPGA. This paper discusses the design, installation, and testing of the proposed system upgrade, including failure statistics and modifications to the original design.« less
Quantitative forecasting of PTSD from early trauma responses: a Machine Learning application.
Galatzer-Levy, Isaac R; Karstoft, Karen-Inge; Statnikov, Alexander; Shalev, Arieh Y
2014-12-01
There is broad interest in predicting the clinical course of mental disorders from early, multimodal clinical and biological information. Current computational models, however, constitute a significant barrier to realizing this goal. The early identification of trauma survivors at risk of post-traumatic stress disorder (PTSD) is plausible given the disorder's salient onset and the abundance of putative biological and clinical risk indicators. This work evaluates the ability of Machine Learning (ML) forecasting approaches to identify and integrate a panel of unique predictive characteristics and determine their accuracy in forecasting non-remitting PTSD from information collected within 10 days of a traumatic event. Data on event characteristics, emergency department observations, and early symptoms were collected in 957 trauma survivors, followed for fifteen months. An ML feature selection algorithm identified a set of predictors that rendered all others redundant. Support Vector Machines (SVMs) as well as other ML classification algorithms were used to evaluate the forecasting accuracy of i) ML selected features, ii) all available features without selection, and iii) Acute Stress Disorder (ASD) symptoms alone. SVM also compared the prediction of a) PTSD diagnostic status at 15 months to b) posterior probability of membership in an empirically derived non-remitting PTSD symptom trajectory. Results are expressed as mean Area Under Receiver Operating Characteristics Curve (AUC). The feature selection algorithm identified 16 predictors, present in ≥ 95% cross-validation trials. The accuracy of predicting non-remitting PTSD from that set (AUC = .77) did not differ from predicting from all available information (AUC = .78). Predicting from ASD symptoms was not better then chance (AUC = .60). The prediction of PTSD status was less accurate than that of membership in a non-remitting trajectory (AUC = .71). ML methods may fill a critical gap in forecasting PTSD. The ability to identify and integrate unique risk indicators makes this a promising approach for developing algorithms that infer probabilistic risk of chronic posttraumatic stress psychopathology based on complex sources of biological, psychological, and social information. Copyright © 2014 Elsevier Ltd. All rights reserved.
Using machine learning for sequence-level automated MRI protocol selection in neuroradiology.
Brown, Andrew D; Marotta, Thomas R
2018-05-01
Incorrect imaging protocol selection can lead to important clinical findings being missed, contributing to both wasted health care resources and patient harm. We present a machine learning method for analyzing the unstructured text of clinical indications and patient demographics from magnetic resonance imaging (MRI) orders to automatically protocol MRI procedures at the sequence level. We compared 3 machine learning models - support vector machine, gradient boosting machine, and random forest - to a baseline model that predicted the most common protocol for all observations in our test set. The gradient boosting machine model significantly outperformed the baseline and demonstrated the best performance of the 3 models in terms of accuracy (95%), precision (86%), recall (80%), and Hamming loss (0.0487). This demonstrates the feasibility of automating sequence selection by applying machine learning to MRI orders. Automated sequence selection has important safety, quality, and financial implications and may facilitate improvements in the quality and safety of medical imaging service delivery.
Hypothermic machine perfusion in kidney transplantation.
De Deken, Julie; Kocabayoglu, Peri; Moers, Cyril
2016-06-01
This article summarizes novel developments in hypothermic machine perfusion (HMP) as an organ preservation modality for kidneys recovered from deceased donors. HMP has undergone a renaissance in recent years. This renewed interest has arisen parallel to a shift in paradigms; not only optimal preservation of an often marginal quality graft is required, but also improved graft function and tools to predict the latter are expected from HMP. The focus of attention in this field is currently drawn to the protection of endothelial integrity by means of additives to the perfusion solution, improvement of the HMP solution, choice of temperature, duration of perfusion, and machine settings. HMP may offer the opportunity to assess aspects of graft viability before transplantation, which can potentially aid preselection of grafts based on characteristics such as perfusate biomarkers, as well as measurement of machine perfusion dynamics parameters. HMP has proven to be beneficial as a kidney preservation method for all types of renal grafts, most notably those retrieved from extended criteria donors. Large numbers of variables during HMP, such as duration, machine settings and additives to the perfusion solution are currently being investigated to improve renal function and graft survival. In addition, the search for biomarkers has become a focus of attention to predict graft function posttransplant.
Cohen, Kevin Bretonnel; Glass, Benjamin; Greiner, Hansel M.; Holland-Bouley, Katherine; Standridge, Shannon; Arya, Ravindra; Faist, Robert; Morita, Diego; Mangano, Francesco; Connolly, Brian; Glauser, Tracy; Pestian, John
2016-01-01
Objective: We describe the development and evaluation of a system that uses machine learning and natural language processing techniques to identify potential candidates for surgical intervention for drug-resistant pediatric epilepsy. The data are comprised of free-text clinical notes extracted from the electronic health record (EHR). Both known clinical outcomes from the EHR and manual chart annotations provide gold standards for the patient’s status. The following hypotheses are then tested: 1) machine learning methods can identify epilepsy surgery candidates as well as physicians do and 2) machine learning methods can identify candidates earlier than physicians do. These hypotheses are tested by systematically evaluating the effects of the data source, amount of training data, class balance, classification algorithm, and feature set on classifier performance. The results support both hypotheses, with F-measures ranging from 0.71 to 0.82. The feature set, classification algorithm, amount of training data, class balance, and gold standard all significantly affected classification performance. It was further observed that classification performance was better than the highest agreement between two annotators, even at one year before documented surgery referral. The results demonstrate that such machine learning methods can contribute to predicting pediatric epilepsy surgery candidates and reducing lag time to surgery referral. PMID:27257386
Utilization of low temperatures in electrical machines
NASA Astrophysics Data System (ADS)
Kwasniewska-Jankowicz, L.; Mirski, Z.
1983-09-01
The dimensions of conventional and superconducting direct and alternating current generators are compared and the advantages of using superconducting magnets are examined. The critical temperature, critical current, and critical magnetic field intensity of superconductors in an induction winding are discussed as well as the mechanical properties needed for bending connectors at small radii. Investigations of cryogenic cooling, cryostats, thermal insulation and rotary seals are reported as well as results of studies of the mechanical properties of austenitic Cr-Ni steels, welded joints and plastics for insulation.
Needle bar for warp knitting machines
Hagel, Adolf; Thumling, Manfred
1979-01-01
Needle bar for warp knitting machines with a number of needles individually set into slits of the bar and having shafts cranked to such an extent that the head section of each needle is in alignment with the shaft section accommodated by the slit. Slackening of the needles will thus not influence the needle spacing.
Detection of distorted frames in retinal video-sequences via machine learning
NASA Astrophysics Data System (ADS)
Kolar, Radim; Liberdova, Ivana; Odstrcilik, Jan; Hracho, Michal; Tornow, Ralf P.
2017-07-01
This paper describes detection of distorted frames in retinal sequences based on set of global features extracted from each frame. The feature vector is consequently used in classification step, in which three types of classifiers are tested. The best classification accuracy 96% has been achieved with support vector machine approach.
ERIC Educational Resources Information Center
Hill, Janet W.; And Others
1982-01-01
The study demonstrated the acquisition and generalization into community settings of a chronologically age-appropriate leisure skill with three severely and profoundly mentally retarded adolescents. Results indicated that participants could acquire and generalize use of an electronic pinball machine leisure skill effectively and learn to exhibit…
Precision Machining Technology. Technical Committee Report.
ERIC Educational Resources Information Center
Idaho State Dept. of Education, Boise. Div. of Vocational Education.
This Technical Committee Report prepared by industry representatives in Idaho lists the skills currently necessary for an employee in that state to obtain a job in precision machining technology, retain a job once hired, and advance in that occupational field. (Task lists are grouped according to duty areas generally used in industry settings, and…
The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...
Analysis of a Multi-Machine Database on Divertor Heat Fluxes
NASA Astrophysics Data System (ADS)
Makowski, M. A.
2011-10-01
A coordinated effort to measure divertor heat flux characteristics in fully attached, similarly shaped H-mode plasmas on C-Mod, DIII-D and NSTX was carried out in 2010 in order to construct a predictive scaling relation applicable to next step devices including ITER, FNSF, and DEMO. Few published scaling laws are available and those that have been published were obtained under widely varying conditions and divertor geometries, leading to conflicting predictions for this critically important quantity. This study was designed to overcome these deficiencies. Corresponding plasma parameters were systematically varied in each tokamak, resulting in a combined data set in which Ip varies by a factor 3, Bt varies by a factor of 14.5, and major radius varies by a factor of 2.6. The derived scaling relation consistently predicts narrower heat flux widths than relations currently in use. Analysis of the combined data set reveals that the primary dependence of the parallel heat flux width is robustly inverse with Ip. All three tokamaks independently demonstrate this dependence. The midplane SOL profiles in DIII-D are also found to steepen with higher Ip, similar to the divertor heat flux profiles. Weaker dependencies on the toroidal field and normalized Greenwald density, fGW, are also found, but vary across devices and with the measure of the heat flux width used, either FWHM or integral width. In the combined data set, the strongest size scaling is with minor radius resulting in an approximately linear dependence on a /Ip . This suggests a scaling correlated with the inverse of the poloidal field, as would be expected for critical gradient or drift-based transport. Supported by the US DOE under DE-AC52-07NA27344 and DE-FC02-04ER54698.
Classification of AB O 3 perovskite solids: a machine learning study
Pilania, G.; Balachandran, P. V.; Gubernatis, J. E.; ...
2015-07-23
Here we explored the use of machine learning methods for classifying whether a particularABO 3chemistry forms a perovskite or non-perovskite structured solid. Starting with three sets of feature pairs (the tolerance and octahedral factors, theAandBionic radii relative to the radius of O, and the bond valence distances between theAandBions from the O atoms), we used machine learning to create a hyper-dimensional partial dependency structure plot using all three feature pairs or any two of them. Doing so increased the accuracy of our predictions by 2–3 percentage points over using any one pair. We also included the Mendeleev numbers of theAandBatomsmore » to this set of feature pairs. Moreover, doing this and using the capabilities of our machine learning algorithm, the gradient tree boosting classifier, enabled us to generate a new type of structure plot that has the simplicity of one based on using just the Mendeleev numbers, but with the added advantages of having a higher accuracy and providing a measure of likelihood of the predicted structure.« less
NASA Astrophysics Data System (ADS)
Maity, Kalipada; Pradhan, Swastik
2018-04-01
In this study, machining of titanium alloy (grade 5) is carried out using MT-CVD coated cutting tool. Titanium alloys possess superior strength-to-weight ratio with good corrosion resistance. Most of the industries used titanium alloy for the manufacturing of various types of lightweight components. The parts made from Ti-6Al-4V largely used in aerospace, biomedical, automotive and marine sectors. The conventional machining of this material is very difficult, due to low thermal conductivity and high chemical reactivity properties. To achieve a good surface finish with minimum tool wear of cutting tool, the machining is carried out using MT-CVD coated cutting tool. The experiment is carried out using of Taguchi L27 array layout with three cutting variables and levels. To find out the optimum parametric setting desirability function analysis (DFA) approach is used. The analysis of variance is studied to know the percentage contribution of each cutting variables. The optimum parametric setting results calculated from DFA were validated through the confirmation test.
NASA Astrophysics Data System (ADS)
Matsunaga, Y.; Sugita, Y.
2018-06-01
A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
Perspex machine: V. Compilation of C programs
NASA Astrophysics Data System (ADS)
Spanner, Matthew P.; Anderson, James A. D. W.
2006-01-01
The perspex machine arose from the unification of the Turing machine with projective geometry. The original, constructive proof used four special, perspective transformations to implement the Turing machine in projective geometry. These four transformations are now generalised and applied in a compiler, implemented in Pop11, that converts a subset of the C programming language into perspexes. This is interesting both from a geometrical and a computational point of view. Geometrically, it is interesting that program source can be converted automatically to a sequence of perspective transformations and conditional jumps, though we find that the product of homogeneous transformations with normalisation can be non-associative. Computationally, it is interesting that program source can be compiled for a Reduced Instruction Set Computer (RISC), the perspex machine, that is a Single Instruction, Zero Exception (SIZE) computer.
Solving the Cauchy-Riemann equations on parallel computers
NASA Technical Reports Server (NTRS)
Fatoohi, Raad A.; Grosch, Chester E.
1987-01-01
Discussed is the implementation of a single algorithm on three parallel-vector computers. The algorithm is a relaxation scheme for the solution of the Cauchy-Riemann equations; a set of coupled first order partial differential equations. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, and SIMD machine with 16K bit serial processors; FLEX/32, an MIMD machine with 20 processors; and CRAY/2, an MIMD machine with four vector processors. The machine architectures are briefly described. The implementation of the algorithm is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Conclusions are presented.
MLBCD: a machine learning tool for big clinical data.
Luo, Gang
2015-01-01
Predictive modeling is fundamental for extracting value from large clinical data sets, or "big clinical data," advancing clinical research, and improving healthcare. Machine learning is a powerful approach to predictive modeling. Two factors make machine learning challenging for healthcare researchers. First, before training a machine learning model, the values of one or more model parameters called hyper-parameters must typically be specified. Due to their inexperience with machine learning, it is hard for healthcare researchers to choose an appropriate algorithm and hyper-parameter values. Second, many clinical data are stored in a special format. These data must be iteratively transformed into the relational table format before conducting predictive modeling. This transformation is time-consuming and requires computing expertise. This paper presents our vision for and design of MLBCD (Machine Learning for Big Clinical Data), a new software system aiming to address these challenges and facilitate building machine learning predictive models using big clinical data. The paper describes MLBCD's design in detail. By making machine learning accessible to healthcare researchers, MLBCD will open the use of big clinical data and increase the ability to foster biomedical discovery and improve care.
Machine Learning and Radiology
Wang, Shijun; Summers, Ronald M.
2012-01-01
In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focused on six categories of applications in radiology: medical image segmentation, registration, computer aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. PMID:22465077
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Zhang, YI; Chen, Jui-Sheng
1991-01-01
Research was performed to develop a computer program that will: (1) simulate the meshing and bearing contact for face milled spiral beval gears with given machine tool settings; and (2) to obtain the output, some of the data is required for hydrodynamic analysis. It is assumed that the machine tool settings and the blank data will be taken from the Gleason summaries. The theoretical aspects of the program are based on 'Local Synthesis and Tooth Contact Analysis of Face Mill Milled Spiral Bevel Gears'. The difference between the computer programs developed herein and the other one is as follows: (1) the mean contact point of tooth surfaces for gears with given machine tool settings must be determined iteratively, while parameters (H and V) are changed (H represents displacement along the pinion axis, V represents the gear displacement that is perpendicular to the plane drawn through the axes of the pinion and the gear of their initial positions), this means that when V differs from zero, the axis of the pionion and the gear are crossed but not intersected; (2) in addition to the regular output data (transmission errors and bearing contact), the new computer program provides information about the contacting force for each contact point and the sliding and the so-called rolling velocity. The following topics are covered: (1) instructions for the users as to how to insert the input data; (2) explanations regarding the output data; (3) numerical example; and (4) listing of the program.
Bucak, Ihsan Ömür
2010-01-01
In the automotive industry, electromagnetic variable reluctance (VR) sensors have been extensively used to measure engine position and speed through a toothed wheel mounted on the crankshaft. In this work, an application that already uses the VR sensing unit for engine and/or transmission has been chosen to infer, this time, the indirect position of the electric machine in a parallel Hybrid Electric Vehicle (HEV) system. A VR sensor has been chosen to correct the position of the electric machine, mainly because it may still become critical in the operation of HEVs to avoid possible vehicle failures during the start-up and on-the-road, especially when the machine is used with an internal combustion engine. The proposed method uses Chi-square test and is adaptive in a sense that it derives the compensation factors during the shaft operation and updates them in a timely fashion.
Bucak, İhsan Ömür
2010-01-01
In the automotive industry, electromagnetic variable reluctance (VR) sensors have been extensively used to measure engine position and speed through a toothed wheel mounted on the crankshaft. In this work, an application that already uses the VR sensing unit for engine and/or transmission has been chosen to infer, this time, the indirect position of the electric machine in a parallel Hybrid Electric Vehicle (HEV) system. A VR sensor has been chosen to correct the position of the electric machine, mainly because it may still become critical in the operation of HEVs to avoid possible vehicle failures during the start-up and on-the-road, especially when the machine is used with an internal combustion engine. The proposed method uses Chi-square test and is adaptive in a sense that it derives the compensation factors during the shaft operation and updates them in a timely fashion. PMID:22294906
Process capability improvement through DMAIC for aluminum alloy wheel machining
NASA Astrophysics Data System (ADS)
Sharma, G. V. S. S.; Rao, P. Srinivasa; Babu, B. Surendra
2017-07-01
This paper first enlists the generic problems of alloy wheel machining and subsequently details on the process improvement of the identified critical-to-quality machining characteristic of A356 aluminum alloy wheel machining process. The causal factors are traced using the Ishikawa diagram and prioritization of corrective actions is done through process failure modes and effects analysis. Process monitoring charts are employed for improving the process capability index of the process, at the industrial benchmark of four sigma level, which is equal to the value of 1.33. The procedure adopted for improving the process capability levels is the define-measure-analyze-improve-control (DMAIC) approach. By following the DMAIC approach, the C p, C pk and C pm showed signs of improvement from an initial value of 0.66, -0.24 and 0.27, to a final value of 4.19, 3.24 and 1.41, respectively.
Machine learning of network metrics in ATLAS Distributed Data Management
NASA Astrophysics Data System (ADS)
Lassnig, Mario; Toler, Wesley; Vamosi, Ralf; Bogado, Joaquin; ATLAS Collaboration
2017-10-01
The increasing volume of physics data poses a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from one of our ongoing automation efforts that focuses on network metrics. First, we describe our machine learning framework built atop the ATLAS Analytics Platform. This framework can automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for networkaware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.
Critical care nurses' experiences when technology malfunctions.
Haghenbeck, Karen Toby
2005-01-01
When caring for critically ill patients, critical care nurses work with technology every day. Technology and equipment malfunctions can have a profound effect on nurses' practice and self-image. In this article, a descriptive phenomenological methodology was chosen to explicate the experience of seven critical care nurses. While participants realized that machines might malfunction, they experienced surprise, shock, and feelings of being "let down" and inadequate when malfunctions occurred. They questioned their competence and felt malfunctioning technology jeopardized their credibility and professional image. These findings are useful when structuring educational sessions on technology and in facilitating a supportive environment for critical care nurses when technology malfunctions.
Government review of the Mod-2 wind turbine (as-built)
NASA Technical Reports Server (NTRS)
Johnson, W. R.; Birchenough, A. G.; Linscott, B. S.; Reagan, J. R.; Sirocky, P. J.; Sizemore, R. L.; Sullivan, T. L.; Holeman, R. H.
1985-01-01
The findings and recommendations of the Government committee formed to conduct an as-built review of the three Mod-2 wind turbine units at Goldendale, Washington are given. The purpose of the review was to identify any critical deficiencies in machine components that could result in failure, and to recommend any necessary corrective action before resuming safe machine operation. The review concluded that one of the deficiencies identified would preclude planned attended or unattended operation, provided that certain corrective actions were implemented.
NASA Technical Reports Server (NTRS)
Schreiber, Robert; Simon, Horst D.
1992-01-01
We are surveying current projects in the area of parallel supercomputers. The machines considered here will become commercially available in the 1990 - 1992 time frame. All are suitable for exploring the critical issues in applying parallel processors to large scale scientific computations, in particular CFD calculations. This chapter presents an overview of the surveyed machines, and a detailed analysis of the various architectural and technology approaches taken. Particular emphasis is placed on the feasibility of a Teraflops capability following the paths proposed by various developers.
Fast Fourier Transform algorithm design and tradeoffs
NASA Technical Reports Server (NTRS)
Kamin, Ray A., III; Adams, George B., III
1988-01-01
The Fast Fourier Transform (FFT) is a mainstay of certain numerical techniques for solving fluid dynamics problems. The Connection Machine CM-2 is the target for an investigation into the design of multidimensional Single Instruction Stream/Multiple Data (SIMD) parallel FFT algorithms for high performance. Critical algorithm design issues are discussed, necessary machine performance measurements are identified and made, and the performance of the developed FFT programs are measured. Fast Fourier Transform programs are compared to the currently best Cray-2 FFT program.
Stacked Denoising Autoencoders Applied to Star/Galaxy Classification
NASA Astrophysics Data System (ADS)
Qin, Hao-ran; Lin, Ji-ming; Wang, Jun-yi
2017-04-01
In recent years, the deep learning algorithm, with the characteristics of strong adaptability, high accuracy, and structural complexity, has become more and more popular, but it has not yet been used in astronomy. In order to solve the problem that the star/galaxy classification accuracy is high for the bright source set, but low for the faint source set of the Sloan Digital Sky Survey (SDSS) data, we introduced the new deep learning algorithm, namely the SDA (stacked denoising autoencoder) neural network and the dropout fine-tuning technique, which can greatly improve the robustness and antinoise performance. We randomly selected respectively the bright source sets and faint source sets from the SDSS DR12 and DR7 data with spectroscopic measurements, and made preprocessing on them. Then, we randomly selected respectively the training sets and testing sets without replacement from the bright source sets and faint source sets. At last, using these training sets we made the training to obtain the SDA models of the bright sources and faint sources in the SDSS DR7 and DR12, respectively. We compared the test result of the SDA model on the DR12 testing set with the test results of the Library for Support Vector Machines (LibSVM), J48 decision tree, Logistic Model Tree (LMT), Support Vector Machine (SVM), Logistic Regression, and Decision Stump algorithm, and compared the test result of the SDA model on the DR7 testing set with the test results of six kinds of decision trees. The experiments show that the SDA has a better classification accuracy than other machine learning algorithms for the faint source sets of DR7 and DR12. Especially, when the completeness function is used as the evaluation index, compared with the decision tree algorithms, the correctness rate of SDA has improved about 15% for the faint source set of SDSS-DR7.
NASA Astrophysics Data System (ADS)
Khidhir, Basim A.; Mohamed, Bashir
2011-02-01
Machining parameters has an important factor on tool wear and surface finish, for that the manufacturers need to obtain optimal operating parameters with a minimum set of experiments as well as minimizing the simulations in order to reduce machining set up costs. The cutting speed is one of the most important cutting parameter to evaluate, it clearly most influences on one hand, tool life, tool stability, and cutting process quality, and on the other hand controls production flow. Due to more demanding manufacturing systems, the requirements for reliable technological information have increased. For a reliable analysis in cutting, the cutting zone (tip insert-workpiece-chip system) as the mechanics of cutting in this area are very complicated, the chip is formed in the shear plane (entrance the shear zone) and is shape in the sliding plane. The temperature contributed in the primary shear, chamfer and sticking, sliding zones are expressed as a function of unknown shear angle on the rake face and temperature modified flow stress in each zone. The experiments were carried out on a CNC lathe and surface finish and tool tip wear are measured in process. Machining experiments are conducted. Reasonable agreement is observed under turning with high depth of cut. Results of this research help to guide the design of new cutting tool materials and the studies on evaluation of machining parameters to further advance the productivity of nickel based alloy Hastelloy - 276 machining.
Law machines: scale models, forensic materiality and the making of modern patent law.
Pottage, Alain
2011-10-01
Early US patent law was machine made. Before the Patent Office took on the function of examining patent applications in 1836, questions of novelty and priority were determined in court, within the forum of the infringement action. And at all levels of litigation, from the circuit courts up to the Supreme Court, working models were the media through which doctrine, evidence and argument were made legible, communicated and interpreted. A model could be set on a table, pointed at, picked up, rotated or upended so as to display a point of interest to a particular audience within the courtroom, and, crucially, set in motion to reveal the 'mode of operation' of a machine. The immediate object of demonstration was to distinguish the intangible invention from its tangible embodiment, but models also'machined' patent law itself. Demonstrations of patent claims with models articulated and resolved a set of conceptual tensions that still make the definition and apprehension of the invention difficult, even today, but they resolved these tensions in the register of materiality, performativity and visibility, rather than the register of conceptuality. The story of models tells us something about how inventions emerge and subsist within the context of patent litigation and patent doctrine, and it offers a starting point for renewed reflection on the question of how technology becomes property.
A new digitized reverse correction method for hypoid gears based on a one-dimensional probe
NASA Astrophysics Data System (ADS)
Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo
2017-12-01
In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.
FLUXCOM - Overview and First Synthesis
NASA Astrophysics Data System (ADS)
Jung, M.; Ichii, K.; Tramontana, G.; Camps-Valls, G.; Schwalm, C. R.; Papale, D.; Reichstein, M.; Gans, F.; Weber, U.
2015-12-01
We present a community effort aiming at generating an ensemble of global gridded flux products by upscaling FLUXNET data using an array of different machine learning methods including regression/model tree ensembles, neural networks, and kernel machines. We produced products for gross primary production, terrestrial ecosystem respiration, net ecosystem exchange, latent heat, sensible heat, and net radiation for two experimental protocols: 1) at a high spatial and 8-daily temporal resolution (5 arc-minute) using only remote sensing based inputs for the MODIS era; 2) 30 year records of daily, 0.5 degree spatial resolution by incorporating meteorological driver data. Within each set-up, all machine learning methods were trained with the same input data for carbon and energy fluxes respectively. Sets of input driver variables were derived using an extensive formal variable selection exercise. The performance of the extrapolation capacities of the approaches is assessed with a fully internally consistent cross-validation. We perform cross-consistency checks of the gridded flux products with independent data streams from atmospheric inversions (NEE), sun-induced fluorescence (GPP), catchment water balances (LE, H), satellite products (Rn), and process-models. We analyze the uncertainties of the gridded flux products and for example provide a breakdown of the uncertainty of mean annual GPP originating from different machine learning methods, different climate input data sets, and different flux partitioning methods. The FLUXCOM archive will provide an unprecedented source of information for water, energy, and carbon cycle studies.
Multiclass Reduced-Set Support Vector Machines
NASA Technical Reports Server (NTRS)
Tang, Benyang; Mazzoni, Dominic
2006-01-01
There are well-established methods for reducing the number of support vectors in a trained binary support vector machine, often with minimal impact on accuracy. We show how reduced-set methods can be applied to multiclass SVMs made up of several binary SVMs, with significantly better results than reducing each binary SVM independently. Our approach is based on Burges' approach that constructs each reduced-set vector as the pre-image of a vector in kernel space, but we extend this by recomputing the SVM weights and bias optimally using the original SVM objective function. This leads to greater accuracy for a binary reduced-set SVM, and also allows vectors to be 'shared' between multiple binary SVMs for greater multiclass accuracy with fewer reduced-set vectors. We also propose computing pre-images using differential evolution, which we have found to be more robust than gradient descent alone. We show experimental results on a variety of problems and find that this new approach is consistently better than previous multiclass reduced-set methods, sometimes with a dramatic difference.
Machine-learning-based real-bogus system for the HSC-SSP moving object detection pipeline
NASA Astrophysics Data System (ADS)
Lin, Hsing-Wen; Chen, Ying-Tung; Wang, Jen-Hung; Wang, Shiang-Yu; Yoshida, Fumi; Ip, Wing-Huen; Miyazaki, Satoshi; Terai, Tsuyoshi
2018-01-01
Machine-learning techniques are widely applied in many modern optical sky surveys, e.g., Pan-STARRS1, PTF/iPTF, and the Subaru/Hyper Suprime-Cam survey, to reduce human intervention in data verification. In this study, we have established a machine-learning-based real-bogus system to reject false detections in the Subaru/Hyper-Suprime-Cam Strategic Survey Program (HSC-SSP) source catalog. Therefore, the HSC-SSP moving object detection pipeline can operate more effectively due to the reduction of false positives. To train the real-bogus system, we use stationary sources as the real training set and "flagged" data as the bogus set. The training set contains 47 features, most of which are photometric measurements and shape moments generated from the HSC image reduction pipeline (hscPipe). Our system can reach a true positive rate (tpr) ˜96% with a false positive rate (fpr) ˜1% or tpr ˜99% at fpr ˜5%. Therefore, we conclude that stationary sources are decent real training samples, and using photometry measurements and shape moments can reject false positives effectively.
Combined passive bearing element/generator motor
Post, Richard F.
2000-01-01
An electric machine includes a cylindrical rotor made up of an array of permanent magnets that provide a N-pole magnetic field of even order (where N=4, 6, 8, etc.). This array of permanent magnets has bars of identical permanent magnets made of dipole elements where the bars are assembled in a circle. A stator inserted down the axis of the dipole field is made of two sets of windings that are electrically orthogonal to each other, where one set of windings provides stabilization of the stator and the other set of windings couples to the array of permanent magnets and acts as the windings of a generator/motor. The rotor and the stator are horizontally disposed, and the rotor is on the outside of said stator. The electric machine may also include two rings of ferromagnetic material. One of these rings would be located at each end of the rotor. Two levitator pole assemblies are attached to a support member that is external to the electric machine. These levitator pole assemblies interact attractively with the rings of ferromagnetic material to produce a levitating force upon the rotor.
Using machine learning and quantum chemistry descriptors to predict the toxicity of ionic liquids.
Cao, Lingdi; Zhu, Peng; Zhao, Yongsheng; Zhao, Jihong
2018-06-15
Large-scale application of ionic liquids (ILs) hinges on the advancement of designable and eco-friendly nature. Research of the potential toxicity of ILs towards different organisms and trophic levels is insufficient. Quantitative structure-activity relationships (QSAR) model is applied to evaluate the toxicity of ILs towards the leukemia rat cell line (ICP-81). The structures of 57 cations and 21 anions were optimized by quantum chemistry. The electrostatic potential surface area (S EP ) and charge distribution area (S σ-profile ) descriptors are calculated and used to predict the toxicity of ILs. The performance and predictive aptitude of extreme learning machine (ELM) model are analyzed and compared with those of multiple linear regression (MLR) and support vector machine (SVM) models. The highest R 2 and the lowest AARD% and RMSE of the training set, test set and total set for the ELM are observed, which validates the superior performance of the ELM than that of obtained by the MLR and SVM. The applicability domain of the model is assessed by the Williams plot. Copyright © 2018 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darby, John L.
LinguisticBelief is a Java computer code that evaluates combinations of linguistic variables using an approximate reasoning rule base. Each variable is comprised of fuzzy sets, and a rule base describes the reasoning on combinations of variables fuzzy sets. Uncertainty is considered and propagated through the rule base using the belief/plausibility measure. The mathematics of fuzzy sets, approximate reasoning, and belief/ plausibility are complex. Without an automated tool, this complexity precludes their application to all but the simplest of problems. LinguisticBelief automates the use of these techniques, allowing complex problems to be evaluated easily. LinguisticBelief can be used free of chargemore » on any Windows XP machine. This report documents the use and structure of the LinguisticBelief code, and the deployment package for installation client machines.« less
NASA Astrophysics Data System (ADS)
Parekh, Vishwa S.; Jacobs, Jeremy R.; Jacobs, Michael A.
2014-03-01
The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion weighted imaging (PWI) methods. Each of these parameters has distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow, both are critical measures during the evolution of stroke. In order to integrate these data and give an estimate of the tissue at risk or damaged; we have developed advanced machine learning methods based on unsupervised non-linear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that uses mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy and they can generate a two- or three-dimensional map, which represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in their high-dimensional observations. In this manuscript, we develop NLDR methods on high dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed that there was a high of similarity between multiparametric embedded images from NLDR methods and the ADC map and perfusion map. It was also observed that embedded scattergram of abnormal (infarcted or at risk) tissue can be visualized and provides a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.
Optimal quantum cloning based on the maximin principle by using a priori information
NASA Astrophysics Data System (ADS)
Kang, Peng; Dai, Hong-Yi; Wei, Jia-Hua; Zhang, Ming
2016-10-01
We propose an optimal 1 →2 quantum cloning method based on the maximin principle by making full use of a priori information of amplitude and phase about the general cloned qubit input set, which is a simply connected region enclosed by a "longitude-latitude grid" on the Bloch sphere. Theoretically, the fidelity of the optimal quantum cloning machine derived from this method is the largest in terms of the maximin principle compared with that of any other machine. The problem solving is an optimization process that involves six unknown complex variables, six vectors in an uncertain-dimensional complex vector space, and four equality constraints. Moreover, by restricting the structure of the quantum cloning machine, the optimization problem is simplified as a three-real-parameter suboptimization problem with only one equality constraint. We obtain the explicit formula for a suboptimal quantum cloning machine. Additionally, the fidelity of our suboptimal quantum cloning machine is higher than or at least equal to that of universal quantum cloning machines and phase-covariant quantum cloning machines. It is also underlined that the suboptimal cloning machine outperforms the "belt quantum cloning machine" for some cases.
A superconducting homopolar motor and generator—new approaches
NASA Astrophysics Data System (ADS)
Fuger, Rene; Matsekh, Arkadiy; Kells, John; Sercombe, D. B. T.; Guina, Ante
2016-03-01
Homopolar machines were the first continuously running electromechanical converters ever demonstrated but engineering challenges and the rapid development of AC technology prevented wider commercialisation. Recent developments in superconducting, cryogenic and sliding contact technology together with new areas of application have led to a renewed interest in homopolar machines. Some of the advantages of these machines are ripple free constant torque, pure DC operation, high power-to-weight ratio and that rotating magnets or coils are not required. In this paper we present our unique approach to high power and high torque homopolar electromagnetic turbines using specially designed high field superconducting magnets and liquid metal current collectors. The unique arrangement of the superconducting coils delivers a high static drive field as well as effective shielding for the field critical sliding contacts. The novel use of additional shielding coils reduces weight and stray field of the system. Liquid metal current collectors deliver a low resistance, stable and low maintenance sliding contact by using a thin liquid metal layer that fills a circular channel formed by the moving edge of a rotor and surrounded by a conforming stationary channel of the stator. Both technologies are critical to constructing high performance machines. Homopolar machines are pure DC devices that utilise only DC electric and magnetic fields and have no AC losses in the coils or the supporting structure. Guina Energy Technologies has developed, built and tested different motor and generator concepts over the last few years and has combined its experience to develop a new generation of homopolar electromagnetic turbines. This paper summarises the development process, general design parameters and first test results of our high temperature superconducting test motor.
In silico prediction of ROCK II inhibitors by different classification approaches.
Cai, Chuipu; Wu, Qihui; Luo, Yunxia; Ma, Huili; Shen, Jiangang; Zhang, Yongbin; Yang, Lei; Chen, Yunbo; Wen, Zehuai; Wang, Qi
2017-11-01
ROCK II is an important pharmacological target linked to central nervous system disorders such as Alzheimer's disease. The purpose of this research is to generate ROCK II inhibitor prediction models by machine learning approaches. Firstly, four sets of descriptors were calculated with MOE 2010 and PaDEL-Descriptor, and optimized by F-score and linear forward selection methods. In addition, four classification algorithms were used to initially build 16 classifiers with k-nearest neighbors [Formula: see text], naïve Bayes, Random forest, and support vector machine. Furthermore, three sets of structural fingerprint descriptors were introduced to enhance the predictive capacity of classifiers, which were assessed with fivefold cross-validation, test set validation and external test set validation. The best two models, MFK + MACCS and MLR + SubFP, have both MCC values of 0.925 for external test set. After that, a privileged substructure analysis was performed to reveal common chemical features of ROCK II inhibitors. Finally, binding modes were analyzed to identify relationships between molecular descriptors and activity, while main interactions were revealed by comparing the docking interaction of the most potent and the weakest ROCK II inhibitors. To the best of our knowledge, this is the first report on ROCK II inhibitors utilizing machine learning approaches that provides a new method for discovering novel ROCK II inhibitors.
sw-SVM: sensor weighting support vector machines for EEG-based brain-computer interfaces.
Jrad, N; Congedo, M; Phlypo, R; Rousseau, S; Flamary, R; Yger, F; Rakotomamonjy, A
2011-10-01
In many machine learning applications, like brain-computer interfaces (BCI), high-dimensional sensor array data are available. Sensor measurements are often highly correlated and signal-to-noise ratio is not homogeneously spread across sensors. Thus, collected data are highly variable and discrimination tasks are challenging. In this work, we focus on sensor weighting as an efficient tool to improve the classification procedure. We present an approach integrating sensor weighting in the classification framework. Sensor weights are considered as hyper-parameters to be learned by a support vector machine (SVM). The resulting sensor weighting SVM (sw-SVM) is designed to satisfy a margin criterion, that is, the generalization error. Experimental studies on two data sets are presented, a P300 data set and an error-related potential (ErrP) data set. For the P300 data set (BCI competition III), for which a large number of trials is available, the sw-SVM proves to perform equivalently with respect to the ensemble SVM strategy that won the competition. For the ErrP data set, for which a small number of trials are available, the sw-SVM shows superior performances as compared to three state-of-the art approaches. Results suggest that the sw-SVM promises to be useful in event-related potentials classification, even with a small number of training trials.
Machine vision inspection of railroad track
DOT National Transportation Integrated Search
2011-01-10
North American Railways and the United States Department of Transportation : (US DOT) Federal Railroad Administration (FRA) require periodic inspection of railway : infrastructure to ensure the safety of railway operation. This inspection is a critic...
RIP-REMOTE INTERACTIVE PARTICLE-TRACER
NASA Technical Reports Server (NTRS)
Rogers, S. E.
1994-01-01
Remote Interactive Particle-tracing (RIP) is a distributed-graphics program which computes particle traces for computational fluid dynamics (CFD) solution data sets. A particle trace is a line which shows the path a massless particle in a fluid will take; it is a visual image of where the fluid is going. The program is able to compute and display particle traces at a speed of about one trace per second because it runs on two machines concurrently. The data used by the program is contained in two files. The solution file contains data on density, momentum and energy quantities of a flow field at discrete points in three-dimensional space, while the grid file contains the physical coordinates of each of the discrete points. RIP requires two computers. A local graphics workstation interfaces with the user for program control and graphics manipulation, and a remote machine interfaces with the solution data set and performs time-intensive computations. The program utilizes two machines in a distributed mode for two reasons. First, the data to be used by the program is usually generated on the supercomputer. RIP avoids having to convert and transfer the data, eliminating any memory limitations of the local machine. Second, as computing the particle traces can be computationally expensive, RIP utilizes the power of the supercomputer for this task. Although the remote site code was developed on a CRAY, it is possible to port this to any supercomputer class machine with a UNIX-like operating system. Integration of a velocity field from a starting physical location produces the particle trace. The remote machine computes the particle traces using the particle-tracing subroutines from PLOT3D/AMES, a CFD post-processing graphics program available from COSMIC (ARC-12779). These routines use a second-order predictor-corrector method to integrate the velocity field. Then the remote program sends graphics tokens to the local machine via a remote-graphics library. The local machine interprets the graphics tokens and draws the particle traces. The program is menu driven. RIP is implemented on the silicon graphics IRIS 3000 (local workstation) with an IRIX operating system and on the CRAY2 (remote station) with a UNICOS 1.0 or 2.0 operating system. The IRIS 4D can be used in place of the IRIS 3000. The program is written in C (67%) and FORTRAN 77 (43%) and has an IRIS memory requirement of 4 MB. The remote and local stations must use the same user ID. PLOT3D/AMES unformatted data sets are required for the remote machine. The program was developed in 1988.
Genome-scale identification of Legionella pneumophila effectors using a machine learning approach.
Burstein, David; Zusman, Tal; Degtyar, Elena; Viner, Ram; Segal, Gil; Pupko, Tal
2009-07-01
A large number of highly pathogenic bacteria utilize secretion systems to translocate effector proteins into host cells. Using these effectors, the bacteria subvert host cell processes during infection. Legionella pneumophila translocates effectors via the Icm/Dot type-IV secretion system and to date, approximately 100 effectors have been identified by various experimental and computational techniques. Effector identification is a critical first step towards the understanding of the pathogenesis system in L. pneumophila as well as in other bacterial pathogens. Here, we formulate the task of effector identification as a classification problem: each L. pneumophila open reading frame (ORF) was classified as either effector or not. We computationally defined a set of features that best distinguish effectors from non-effectors. These features cover a wide range of characteristics including taxonomical dispersion, regulatory data, genomic organization, similarity to eukaryotic proteomes and more. Machine learning algorithms utilizing these features were then applied to classify all the ORFs within the L. pneumophila genome. Using this approach we were able to predict and experimentally validate 40 new effectors, reaching a success rate of above 90%. Increasing the number of validated effectors to around 140, we were able to gain novel insights into their characteristics. Effectors were found to have low G+C content, supporting the hypothesis that a large number of effectors originate via horizontal gene transfer, probably from their protozoan host. In addition, effectors were found to cluster in specific genomic regions. Finally, we were able to provide a novel description of the C-terminal translocation signal required for effector translocation by the Icm/Dot secretion system. To conclude, we have discovered 40 novel L. pneumophila effectors, predicted over a hundred additional highly probable effectors, and shown the applicability of machine learning algorithms for the identification and characterization of bacterial pathogenesis determinants.
NASA Astrophysics Data System (ADS)
Digney, Bruce L.
2007-04-01
Unmanned vehicle systems is an attractive technology for the military, but whose promises have remained largely undelivered. There currently exist fielded remote controlled UGVs and high altitude UAV whose benefits are based on standoff in low complexity environments with sufficiently low control reaction time requirements to allow for teleoperation. While effective within there limited operational niche such systems do not meet with the vision of future military UxV scenarios. Such scenarios envision unmanned vehicles operating effectively in complex environments and situations with high levels of independence and effective coordination with other machines and humans pursing high level, changing and sometimes conflicting goals. While these aims are clearly ambitious they do provide necessary targets and inspiration with hopes of fielding near term useful semi-autonomous unmanned systems. Autonomy involves many fields of research including machine vision, artificial intelligence, control theory, machine learning and distributed systems all of which are intertwined and have goals of creating more versatile broadly applicable algorithms. Cohort is a major Applied Research Program (ARP) led by Defence R&D Canada (DRDC) Suffield and its aim is to develop coordinated teams of unmanned vehicles (UxVs) for urban environments. This paper will discuss the critical science being addressed by DRDC developing semi-autonomous systems.
NASA Astrophysics Data System (ADS)
Sembiring, N.; Ginting, E.; Darnello, T.
2017-12-01
Problems that appear in a company that produces refined sugar, the production floor has not reached the level of critical machine availability because it often suffered damage (breakdown). This results in a sudden loss of production time and production opportunities. This problem can be solved by Reliability Engineering method where the statistical approach to historical damage data is performed to see the pattern of the distribution. The method can provide a value of reliability, rate of damage, and availability level, of an machine during the maintenance time interval schedule. The result of distribution test to time inter-damage data (MTTF) flexible hose component is lognormal distribution while component of teflon cone lifthing is weibull distribution. While from distribution test to mean time of improvement (MTTR) flexible hose component is exponential distribution while component of teflon cone lifthing is weibull distribution. The actual results of the flexible hose component on the replacement schedule per 720 hours obtained reliability of 0.2451 and availability 0.9960. While on the critical components of teflon cone lifthing actual on the replacement schedule per 1944 hours obtained reliability of 0.4083 and availability 0.9927.
Classification of large-sized hyperspectral imagery using fast machine learning algorithms
NASA Astrophysics Data System (ADS)
Xia, Junshi; Yokoya, Naoto; Iwasaki, Akira
2017-07-01
We present a framework of fast machine learning algorithms in the context of large-sized hyperspectral images classification from the theoretical to a practical viewpoint. In particular, we assess the performance of random forest (RF), rotation forest (RoF), and extreme learning machine (ELM) and the ensembles of RF and ELM. These classifiers are applied to two large-sized hyperspectral images and compared to the support vector machines. To give the quantitative analysis, we pay attention to comparing these methods when working with high input dimensions and a limited/sufficient training set. Moreover, other important issues such as the computational cost and robustness against the noise are also discussed.
An assessment of support vector machines for land cover classification
Huang, C.; Davis, L.S.; Townshend, J.R.G.
2002-01-01
The support vector machine (SVM) is a group of theoretically superior machine learning algorithms. It was found competitive with the best available machine learning algorithms in classifying high-dimensional data sets. This paper gives an introduction to the theoretical development of the SVM and an experimental evaluation of its accuracy, stability and training speed in deriving land cover classifications from satellite images. The SVM was compared to three other popular classifiers, including the maximum likelihood classifier (MLC), neural network classifiers (NNC) and decision tree classifiers (DTC). The impacts of kernel configuration on the performance of the SVM and of the selection of training data and input variables on the four classifiers were also evaluated in this experiment.
An empirically based steady state friction law and implications for fault stability
Nielsen, S.; Violay, M.; Di Toro, G.
2016-01-01
Abstract Empirically based rate‐and‐state friction laws (RSFLs) have been proposed to model the dependence of friction forces with slip and time. The relevance of the RSFL for earthquake mechanics is that few constitutive parameters define critical conditions for fault stability (i.e., critical stiffness and frictional fault behavior). However, the RSFLs were determined from experiments conducted at subseismic slip rates (V < 1 cm/s), and their extrapolation to earthquake deformation conditions (V > 0.1 m/s) remains questionable on the basis of the experimental evidence of (1) large dynamic weakening and (2) activation of particular fault lubrication processes at seismic slip rates. Here we propose a modified RSFL (MFL) based on the review of a large published and unpublished data set of rock friction experiments performed with different testing machines. The MFL, valid at steady state conditions from subseismic to seismic slip rates (0.1 µm/s < V < 3 m/s), describes the initiation of a substantial velocity weakening in the 1–20 cm/s range resulting in a critical stiffness increase that creates a peak of potential instability in that velocity regime. The MFL leads to a new definition of fault frictional stability with implications for slip event styles and relevance for models of seismic rupture nucleation, propagation, and arrest. PMID:27667875
Model and experiments to optimize co-adaptation in a simplified myoelectric control system.
Couraud, M; Cattaert, D; Paclet, F; Oudeyer, P Y; de Rugy, A
2018-04-01
To compensate for a limb lost in an amputation, myoelectric prostheses use surface electromyography (EMG) from the remaining muscles to control the prosthesis. Despite considerable progress, myoelectric controls remain markedly different from the way we normally control movements, and require intense user adaptation. To overcome this, our goal is to explore concurrent machine co-adaptation techniques that are developed in the field of brain-machine interface, and that are beginning to be used in myoelectric controls. We combined a simplified myoelectric control with a perturbation for which human adaptation is well characterized and modeled, in order to explore co-adaptation settings in a principled manner. First, we reproduced results obtained in a classical visuomotor rotation paradigm in our simplified myoelectric context, where we rotate the muscle pulling vectors used to reconstruct wrist force from EMG. Then, a model of human adaptation in response to directional error was used to simulate various co-adaptation settings, where perturbations and machine co-adaptation are both applied on muscle pulling vectors. These simulations established that a relatively low gain of machine co-adaptation that minimizes final errors generates slow and incomplete adaptation, while higher gains increase adaptation rate but also errors by amplifying noise. After experimental verification on real subjects, we tested a variable gain that cumulates the advantages of both, and implemented it with directionally tuned neurons similar to those used to model human adaptation. This enables machine co-adaptation to locally improve myoelectric control, and to absorb more challenging perturbations. The simplified context used here enabled to explore co-adaptation settings in both simulations and experiments, and to raise important considerations such as the need for a variable gain encoded locally. The benefits and limits of extending this approach to more complex and functional myoelectric contexts are discussed.
Application of Machine Learning to Rotorcraft Health Monitoring
NASA Technical Reports Server (NTRS)
Cody, Tyler; Dempsey, Paula J.
2017-01-01
Machine learning is a powerful tool for data exploration and model building with large data sets. This project aimed to use machine learning techniques to explore the inherent structure of data from rotorcraft gear tests, relationships between features and damage states, and to build a system for predicting gear health for future rotorcraft transmission applications. Classical machine learning techniques are difficult, if not irresponsible to apply to time series data because many make the assumption of independence between samples. To overcome this, Hidden Markov Models were used to create a binary classifier for identifying scuffing transitions and Recurrent Neural Networks were used to leverage long distance relationships in predicting discrete damage states. When combined in a workflow, where the binary classifier acted as a filter for the fatigue monitor, the system was able to demonstrate accuracy in damage state prediction and scuffing identification. The time dependent nature of the data restricted data exploration to collecting and analyzing data from the model selection process. The limited amount of available data was unable to give useful information, and the division of training and testing sets tended to heavily influence the scores of the models across combinations of features and hyper-parameters. This work built a framework for tracking scuffing and fatigue on streaming data and demonstrates that machine learning has much to offer rotorcraft health monitoring by using Bayesian learning and deep learning methods to capture the time dependent nature of the data. Suggested future work is to implement the framework developed in this project using a larger variety of data sets to test the generalization capabilities of the models and allow for data exploration.
Nandi, Sutanu; Subramanian, Abhishek; Sarkar, Ram Rup
2017-07-25
Prediction of essential genes helps to identify a minimal set of genes that are absolutely required for the appropriate functioning and survival of a cell. The available machine learning techniques for essential gene prediction have inherent problems, like imbalanced provision of training datasets, biased choice of the best model for a given balanced dataset, choice of a complex machine learning algorithm, and data-based automated selection of biologically relevant features for classification. Here, we propose a simple support vector machine-based learning strategy for the prediction of essential genes in Escherichia coli K-12 MG1655 metabolism that integrates a non-conventional combination of an appropriate sample balanced training set, a unique organism-specific genotype, phenotype attributes that characterize essential genes, and optimal parameters of the learning algorithm to generate the best machine learning model (the model with the highest accuracy among all the models trained for different sample training sets). For the first time, we also introduce flux-coupled metabolic subnetwork-based features for enhancing the classification performance. Our strategy proves to be superior as compared to previous SVM-based strategies in obtaining a biologically relevant classification of genes with high sensitivity and specificity. This methodology was also trained with datasets of other recent supervised classification techniques for essential gene classification and tested using reported test datasets. The testing accuracy was always high as compared to the known techniques, proving that our method outperforms known methods. Observations from our study indicate that essential genes are conserved among homologous bacterial species, demonstrate high codon usage bias, GC content and gene expression, and predominantly possess a tendency to form physiological flux modules in metabolism.
Davison, James A
2015-01-01
To present a cause of posterior capsule aspiration and a technique using optimized parameters to prevent it from happening when operating soft cataracts. A prospective list of posterior capsule aspiration cases was kept over 4,062 consecutive cases operated with the Alcon CENTURION machine and Balanced Tip. Video analysis of one case of posterior capsule aspiration was accomplished. A surgical technique was developed using empirically derived machine parameters and customized setting-selection procedure step toolbar to reduce the pace of aspiration of soft nuclear quadrants in order to prevent capsule aspiration. Two cases out of 3,238 experienced posterior capsule aspiration before use of the soft quadrant technique. Video analysis showed an attractive vortex effect with capsule aspiration occurring in 1/5 of a second. A soft quadrant removal setting was empirically derived which had a slower pace and seemed more controlled with no capsule aspiration occurring in the subsequent 824 cases. The setting featured simultaneous linear control from zero to preset maximums for: aspiration flow, 20 mL/min; and vacuum, 400 mmHg, with the addition of torsional tip amplitude up to 20% after the fluidic maximums were achieved. A new setting selection procedure step toolbar was created to increase intraoperative flexibility by providing instantaneous shifting between the soft and normal settings. A technique incorporating a reduced pace for soft quadrant acquisition and aspiration can be accomplished through the use of a dedicated setting of integrated machine parameters. Toolbar placement of the procedure button next to the normal setting procedure button provides the opportunity to instantaneously alternate between the two settings. Simultaneous surgeon control over vacuum, aspiration flow, and torsional tip motion may make removal of soft nuclear quadrants more efficient and safer.
Support vector machine for automatic pain recognition
NASA Astrophysics Data System (ADS)
Monwar, Md Maruf; Rezaei, Siamak
2009-02-01
Facial expressions are a key index of emotion and the interpretation of such expressions of emotion is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for recognition of a specific expression, pain, from human faces. We employ an automatic face detector which detects face from the stored video frame using skin color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural network based and eigenimage based automatic pain recognition systems. The experiment results indicate that using support vector machine as classifier can certainly improve the performance of automatic pain recognition system.
Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert
2007-12-01
We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in how far the individual error bars can faithfully represent the actual prediction error.
Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert
2007-09-01
We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in how far the individual error bars can faithfully represent the actual prediction error.
NASA Astrophysics Data System (ADS)
Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert
2007-12-01
We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in how far the individual error bars can faithfully represent the actual prediction error.
NASA Astrophysics Data System (ADS)
Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert
2007-09-01
We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in how far the individual error bars can faithfully represent the actual prediction error.
Graph theory for feature extraction and classification: a migraine pathology case study.
Jorge-Hernandez, Fernando; Garcia Chimeno, Yolanda; Garcia-Zapirain, Begonya; Cabrera Zubizarreta, Alberto; Gomez Beldarrain, Maria Angeles; Fernandez-Ruanova, Begonya
2014-01-01
Graph theory is also widely used as a representational form and characterization of brain connectivity network, as is machine learning for classifying groups depending on the features extracted from images. Many of these studies use different techniques, such as preprocessing, correlations, features or algorithms. This paper proposes an automatic tool to perform a standard process using images of the Magnetic Resonance Imaging (MRI) machine. The process includes pre-processing, building the graph per subject with different correlations, atlas, relevant feature extraction according to the literature, and finally providing a set of machine learning algorithms which can produce analyzable results for physicians or specialists. In order to verify the process, a set of images from prescription drug abusers and patients with migraine have been used. In this way, the proper functioning of the tool has been proved, providing results of 87% and 92% of success depending on the classifier used.
Machine Learning for Treatment Assignment: Improving Individualized Risk Attribution
Weiss, Jeremy; Kuusisto, Finn; Boyd, Kendrick; Liu, Jie; Page, David
2015-01-01
Clinical studies model the average treatment effect (ATE), but apply this population-level effect to future individuals. Due to recent developments of machine learning algorithms with useful statistical guarantees, we argue instead for modeling the individualized treatment effect (ITE), which has better applicability to new patients. We compare ATE-estimation using randomized and observational analysis methods against ITE-estimation using machine learning, and describe how the ITE theoretically generalizes to new population distributions, whereas the ATE may not. On a synthetic data set of statin use and myocardial infarction (MI), we show that a learned ITE model improves true ITE estimation and outperforms the ATE. We additionally argue that ITE models should be learned with a consistent, nonparametric algorithm from unweighted examples and show experiments in favor of our argument using our synthetic data model and a real data set of D-penicillamine use for primary biliary cirrhosis. PMID:26958271
Machine Learning for Treatment Assignment: Improving Individualized Risk Attribution.
Weiss, Jeremy; Kuusisto, Finn; Boyd, Kendrick; Liu, Jie; Page, David
2015-01-01
Clinical studies model the average treatment effect (ATE), but apply this population-level effect to future individuals. Due to recent developments of machine learning algorithms with useful statistical guarantees, we argue instead for modeling the individualized treatment effect (ITE), which has better applicability to new patients. We compare ATE-estimation using randomized and observational analysis methods against ITE-estimation using machine learning, and describe how the ITE theoretically generalizes to new population distributions, whereas the ATE may not. On a synthetic data set of statin use and myocardial infarction (MI), we show that a learned ITE model improves true ITE estimation and outperforms the ATE. We additionally argue that ITE models should be learned with a consistent, nonparametric algorithm from unweighted examples and show experiments in favor of our argument using our synthetic data model and a real data set of D-penicillamine use for primary biliary cirrhosis.
Detection of Cheating by Decimation Algorithm
NASA Astrophysics Data System (ADS)
Yamanaka, Shogo; Ohzeki, Masayuki; Decelle, Aurélien
2015-02-01
We expand the item response theory to study the case of "cheating students" for a set of exams, trying to detect them by applying a greedy algorithm of inference. This extended model is closely related to the Boltzmann machine learning. In this paper we aim to infer the correct biases and interactions of our model by considering a relatively small number of sets of training data. Nevertheless, the greedy algorithm that we employed in the present study exhibits good performance with a few number of training data. The key point is the sparseness of the interactions in our problem in the context of the Boltzmann machine learning: the existence of cheating students is expected to be very rare (possibly even in real world). We compare a standard approach to infer the sparse interactions in the Boltzmann machine learning to our greedy algorithm and we find the latter to be superior in several aspects.
Generic decoding of seen and imagined objects using hierarchical visual features.
Horikawa, Tomoyasu; Kamitani, Yukiyasu
2017-05-22
Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
Temperature based Restricted Boltzmann Machines
NASA Astrophysics Data System (ADS)
Li, Guoqi; Deng, Lei; Xu, Yi; Wen, Changyun; Wang, Wei; Pei, Jing; Shi, Luping
2016-01-01
Restricted Boltzmann machines (RBMs), which apply graphical models to learning probability distribution over a set of inputs, have attracted much attention recently since being proposed as building blocks of multi-layer learning systems called deep belief networks (DBNs). Note that temperature is a key factor of the Boltzmann distribution that RBMs originate from. However, none of existing schemes have considered the impact of temperature in the graphical model of DBNs. In this work, we propose temperature based restricted Boltzmann machines (TRBMs) which reveals that temperature is an essential parameter controlling the selectivity of the firing neurons in the hidden layers. We theoretically prove that the effect of temperature can be adjusted by setting the parameter of the sharpness of the logistic function in the proposed TRBMs. The performance of RBMs can be improved by adjusting the temperature parameter of TRBMs. This work provides a comprehensive insights into the deep belief networks and deep learning architectures from a physical point of view.
Prediction of Skin Sensitization with a Particle Swarm Optimized Support Vector Machine
Yuan, Hua; Huang, Jianping; Cao, Chenzhong
2009-01-01
Skin sensitization is the most commonly reported occupational illness, causing much suffering to a wide range of people. Identification and labeling of environmental allergens is urgently required to protect people from skin sensitization. The guinea pig maximization test (GPMT) and murine local lymph node assay (LLNA) are the two most important in vivo models for identification of skin sensitizers. In order to reduce the number of animal tests, quantitative structure-activity relationships (QSARs) are strongly encouraged in the assessment of skin sensitization of chemicals. This paper has investigated the skin sensitization potential of 162 compounds with LLNA results and 92 compounds with GPMT results using a support vector machine. A particle swarm optimization algorithm was implemented for feature selection from a large number of molecular descriptors calculated by Dragon. For the LLNA data set, the classification accuracies are 95.37% and 88.89% for the training and the test sets, respectively. For the GPMT data set, the classification accuracies are 91.80% and 90.32% for the training and the test sets, respectively. The classification performances were greatly improved compared to those reported in the literature, indicating that the support vector machine optimized by particle swarm in this paper is competent for the identification of skin sensitizers. PMID:19742136
Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.
Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann
2017-01-01
Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
Informing the Human Plasma Protein Binding of ...
The free fraction of a xenobiotic in plasma (Fub) is an important determinant of chemical adsorption, distribution, metabolism, elimination, and toxicity, yet experimental plasma protein binding data is scarce for environmentally relevant chemicals. The presented work explores the merit of utilizing available pharmaceutical data to predict Fub for environmentally relevant chemicals via machine learning techniques. Quantitative structure-activity relationship (QSAR) models were constructed with k nearest neighbors (kNN), support vector machines (SVM), and random forest (RF) machine learning algorithms from a training set of 1045 pharmaceuticals. The models were then evaluated with independent test sets of pharmaceuticals (200 compounds) and environmentally relevant ToxCast chemicals (406 total, in two groups of 238 and 168 compounds). The selection of a minimal feature set of 10-15 2D molecular descriptors allowed for both informative feature interpretation and practical applicability domain assessment via a bounded box of descriptor ranges and principal component analysis. The diverse pharmaceutical and environmental chemical sets exhibit similarities in terms of chemical space (99-82% overlap), as well as comparable bias and variance in constructed learning curves. All the models exhibit significant predictability with mean absolute errors (MAE) in the range of 0.10-0.18 Fub. The models performed best for highly bound chemicals (MAE 0.07-0.12), neutrals (MAE 0
Arruti, Andoni; Cearreta, Idoia; Álvarez, Aitor; Lazkano, Elena; Sierra, Basilio
2014-01-01
Study of emotions in human–computer interaction is a growing research area. This paper shows an attempt to select the most significant features for emotion recognition in spoken Basque and Spanish Languages using different methods for feature selection. RekEmozio database was used as the experimental data set. Several Machine Learning paradigms were used for the emotion classification task. Experiments were executed in three phases, using different sets of features as classification variables in each phase. Moreover, feature subset selection was applied at each phase in order to seek for the most relevant feature subset. The three phases approach was selected to check the validity of the proposed approach. Achieved results show that an instance-based learning algorithm using feature subset selection techniques based on evolutionary algorithms is the best Machine Learning paradigm in automatic emotion recognition, with all different feature sets, obtaining a mean of 80,05% emotion recognition rate in Basque and a 74,82% in Spanish. In order to check the goodness of the proposed process, a greedy searching approach (FSS-Forward) has been applied and a comparison between them is provided. Based on achieved results, a set of most relevant non-speaker dependent features is proposed for both languages and new perspectives are suggested. PMID:25279686
Analysis of a multi-machine database on divertor heat fluxesa)
NASA Astrophysics Data System (ADS)
Makowski, M. A.; Elder, D.; Gray, T. K.; LaBombard, B.; Lasnier, C. J.; Leonard, A. W.; Maingi, R.; Osborne, T. H.; Stangeby, P. C.; Terry, J. L.; Watkins, J.
2012-05-01
A coordinated effort to measure divertor heat flux characteristics in fully attached, similarly shaped H-mode plasmas on C-Mod, DIII-D, and NSTX was carried out in 2010 in order to construct a predictive scaling relation applicable to next step devices including ITER, FNSF, and DEMO. Few published scaling laws are available and those that have been published were obtained under widely varying conditions and divertor geometries, leading to conflicting predictions for this critically important quantity. This study was designed to overcome these deficiencies. Analysis of the combined data set reveals that the primary dependence of the parallel heat flux width is robustly inverse with Ip, which all three tokamaks independently demonstrate. An improved Thomson scattering system on DIII-D has yielded very accurate scrape off layer (SOL) profile measurements from which tests of parallel transport models have been made. It is found that a flux-limited model agrees best with the data at all collisionalities, while a Spitzer resistivity model agrees at higher collisionality where it is more valid. The SOL profile measurements and divertor heat flux scaling are consistent with a heuristic drift based model as well as a critical gradient model.
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Kavi, Srinu
1984-01-01
This Working Paper Series entry presents a detailed survey of knowledge based systems. After being in a relatively dormant state for many years, only recently is Artificial Intelligence (AI) - that branch of computer science that attempts to have machines emulate intelligent behavior - accomplishing practical results. Most of these results can be attributed to the design and use of Knowledge-Based Systems, KBSs (or ecpert systems) - problem solving computer programs that can reach a level of performance comparable to that of a human expert in some specialized problem domain. These systems can act as a consultant for various requirements like medical diagnosis, military threat analysis, project risk assessment, etc. These systems possess knowledge to enable them to make intelligent desisions. They are, however, not meant to replace the human specialists in any particular domain. A critical survey of recent work in interactive KBSs is reported. A case study (MYCIN) of a KBS, a list of existing KBSs, and an introduction to the Japanese Fifth Generation Computer Project are provided as appendices. Finally, an extensive set of KBS-related references is provided at the end of the report.