Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott
2011-07-28
Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure-activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds, and the size of proprietary as well as public data sets is growing rapidly. Hence, there is a demand for computationally efficient machine learning algorithms that are easily available to researchers without extensive machine learning knowledge. Because Open Source solutions uphold the scientific principles of transparency and reproducibility, they are increasingly acknowledged by regulatory authorities. Thus, an Open Source, state-of-the-art, high performance machine learning platform, interfacing multiple customized machine learning algorithms for both graphical programming and scripting, and suitable for large-scale development of QSAR models of regulatory quality, is of great value to the QSAR community. This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is developed specifically to support batch generation of QSAR models, providing the full workflow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated workflow relies on customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient, data set-specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge, as flexible applications can be created not only at the scripting level but also in a graphical programming environment. AZOrange is a step towards meeting the need for an Open Source high performance machine learning platform supporting the efficient development of highly accurate QSAR models that fulfill regulatory requirements.
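Note: as a rough, hedged illustration of the automated learner and hyper-parameter selection the abstract describes, the sketch below uses scikit-learn as a stand-in for AZOrange's customized learners; the descriptor matrix, activity values and parameter grids are invented placeholders.

```python
# Minimal sketch of automated learner + hyper-parameter selection for
# QSAR-style data, with scikit-learn standing in for AZOrange's back ends.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

X = np.random.rand(200, 50)   # hypothetical descriptor matrix (compounds x descriptors)
y = np.random.rand(200)       # hypothetical activity values

candidates = {
    "random_forest": (RandomForestRegressor(random_state=0),
                      {"n_estimators": [100, 300], "max_features": ["sqrt", 0.3]}),
    "svm": (SVR(), {"C": [1, 10, 100], "gamma": ["scale", 0.01]}),
}

best_name, best_score, best_model = None, -np.inf, None
for name, (estimator, grid) in candidates.items():
    # Automated hyper-parameter selection via cross-validated grid search.
    search = GridSearchCV(estimator, grid, cv=5, scoring="r2")
    search.fit(X, y)
    if search.best_score_ > best_score:
        best_name, best_score, best_model = name, search.best_score_, search.best_estimator_

print(f"selected {best_name} with cross-validated R^2 = {best_score:.3f}")
```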
NASA Astrophysics Data System (ADS)
Sharif, Safian; Sadiq, Ibrahim Ogu; Suhaimi, Mohd Azlan; Rahim, Shayfull Zamree Abd
2017-09-01
Pollution-related activities, in addition to the handling cost of conventional cutting fluids in the metal cutting industry, have generated much concern over time. The desire for a green machining environment that preserves the environment by reducing or eliminating machining-related pollution, reducing oil consumption and protecting machine operators, without compromising machining efficiency, has led to a search for alternatives to conventional cutting fluids. Among the alternatives of dry machining, cryogenic cooling, high-pressure cooling and near-dry or minimum quantity lubrication (MQL), MQL has shown remarkable performance in terms of cost, machining output, and safety of the environment and machine operators. However, MQL under aggressive or very high speed machining poses certain restrictions, as the lubrication media cannot perform efficiently at elevated temperature. To compensate for the shortcomings of the MQL technique, nanoparticles of high thermal conductivity are introduced into cutting fluids for use in the MQL lubrication process. They have shown enhanced machining performance and significant reduction of loads on the environment. The present work evaluates the application and performance of nanofluids in metal cutting through the MQL lubrication technique, highlighting their impacts and prospects as a lubrication strategy in metal cutting for sustainable green manufacturing. Enhanced performance of vegetable oil-based nanofluids over mineral oil-based nanofluids has been reported and is thus highlighted.
Micro-optical fabrication by ultraprecision diamond machining and precision molding
NASA Astrophysics Data System (ADS)
Li, Hui; Li, Likai; Naples, Neil J.; Roblee, Jeffrey W.; Yi, Allen Y.
2017-06-01
Ultraprecision diamond machining combined with high volume molding is becoming a viable process in the optical industry for low-cost, high-quality micro-optical component manufacturing. In this process, high precision micro-optical molds are first fabricated using ultraprecision single-point diamond machining, followed by high volume production methods such as compression or injection molding. In the last two decades, there have been steady improvements in ultraprecision machine design and performance, particularly with the introduction of both slow tool and fast tool servo. Today optical molds, including freeform surfaces and microlens arrays, are routinely diamond machined to final finish without post-machining polishing. For consumers, compression molding or injection molding provides efficient, high-quality optics at extremely low cost. In this paper, ultraprecision machine design and machining processes such as slow tool and fast tool servo are described first; then both compression molding and injection molding of polymer optics are discussed. To implement precision optical manufacturing by molding, numerical modeling can be included in the future as a critical part of the manufacturing process to ensure high product quality.
Investigations on high speed machining of EN-353 steel alloy under different machining environments
NASA Astrophysics Data System (ADS)
Venkata Vishnu, A.; Jamaleswara Kumar, P.
2018-03-01
The addition of nanoparticles to conventional cutting fluids enhances their cooling capability; in the present paper an attempt is made by adding nano-sized particles to conventional cutting fluids. Taguchi robust design methodology is employed to study the performance characteristics of different turning parameters, i.e. cutting speed, feed rate, depth of cut and type of tool, under different machining environments, i.e. dry machining, machining with lubricant SAE 40, and machining with a mixture of nano-sized boric acid particles and base fluid SAE 40. A series of turning operations was performed using an L27 (3^13) orthogonal array, considering high cutting speeds and the other machining parameters, to measure hardness. The results are compared among the different machining environments, and it is concluded that there is considerable improvement in machining performance using lubricant SAE 40 and the SAE 40 + boric acid mixture compared with dry machining. The ANOVA suggests that the selected parameters and their interactions are significant, and that cutting speed has the most significant effect on hardness.
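For readers unfamiliar with the Taguchi analysis step, the sketch below computes a larger-the-better signal-to-noise ratio per machining environment; the hardness replicates are invented numbers, not the paper's data.

```python
# Hedged sketch: larger-the-better Taguchi S/N ratio with invented hardness data.
import numpy as np

def sn_larger_the_better(replicates):
    """S/N = -10*log10(mean(1/y^2)); higher is better for responses like hardness."""
    y = np.asarray(replicates, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Three hypothetical runs (e.g., rows of an L27 array), three replicates each.
runs = {"dry":                 [52.1, 51.8, 52.5],
        "SAE 40":              [55.0, 54.6, 55.3],
        "SAE 40 + boric acid": [56.2, 56.8, 56.5]}
for label, r in runs.items():
    print(f"{label:22s} S/N = {sn_larger_the_better(r):.2f} dB")
```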
Machine characterization based on an abstract high-level language machine
NASA Technical Reports Server (NTRS)
Saavedra-Barrera, Rafael H.; Smith, Alan Jay; Miya, Eugene
1989-01-01
Measurements are presented for a large number of machines ranging from small workstations to supercomputers. The authors combine these measurements into groups of parameters which relate to specific aspects of the machine implementation, and use these groups to provide overall machine characterizations. The authors also define the concept of pershapes, which represent the level of performance of a machine for different types of computation. A metric based on pershapes is introduced that provides a quantitative way of measuring how similar two machines are in terms of their performance distributions. The metric is related to the extent to which pairs of machines have varying relative performance levels depending on which benchmark is used.
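The pershape metric itself is not reproduced here, but the underlying idea, comparing two machines' per-benchmark performance profiles for proportionality, can be sketched as follows; the benchmark vectors and the log-ratio spread are illustrative assumptions, not the authors' exact definition.

```python
# Illustrative (not the paper's exact) pershape-style comparison: how uniformly
# do two machines' relative performance levels hold across benchmarks?
import numpy as np

perf_a = np.array([1.0, 2.5, 0.8, 4.0])   # hypothetical benchmark rates, machine A
perf_b = np.array([2.1, 5.0, 1.5, 8.3])   # hypothetical benchmark rates, machine B

log_ratios = np.log(perf_a / perf_b)      # a constant ratio means identical "shape"
spread = np.std(log_ratios)               # 0 means the profiles are proportional
print(f"spread of log performance ratios: {spread:.3f}")
```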
Early experiences in developing and managing the neuroscience gateway.
Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas T
2015-02-01
The last few decades have seen the emergence of computational neuroscience as a mature field in which researchers model complex and large neuronal systems and require access to high performance computing machines and the associated cyberinfrastructure to manage computational workflows and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and are suitable for high performance computing machines. But using these tools on complex high performance computing machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, dealing with their complex user interfaces, and managing data storage and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of the users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares the early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use it for computational neuroscience research using high performance computing at the back end. We also look at parallel scaling of some publicly available neuronal models and analyze recent usage data of the neuroscience gateway.
Rotordynamic Instability Problems in High-Performance Turbomachinery
NASA Technical Reports Server (NTRS)
1984-01-01
Rotordynamics and predictions of the stability characteristics of high performance turbomachinery were discussed. Resolution of problems concerning experimental validation of the forces that influence rotordynamics was emphasized. Programs to predict or measure forces and force coefficients in high performance turbomachinery are illustrated. Data for designing new machines with enhanced stability characteristics, or for upgrading existing machines, are presented.
High performance cutting of aircraft and turbine components
NASA Astrophysics Data System (ADS)
Krämer, A.; Lung, D.; Klocke, F.
2012-04-01
Titanium and nickel-based alloys belong to the group of difficult-to-cut materials. The machining of these high-temperature alloys is characterized by low productivity and low process stability as a result of their physical and mechanical properties. Major problems during the machining of these materials are the low applicable cutting speeds due to excessive tool wear, long machining times and thus high manufacturing costs, as well as the formation of ribbon and snarled chips. Under these conditions, automation of the production process is limited. This paper deals with strategies to improve the machinability of titanium and nickel-based alloys. Using the example of the nickel-based alloy Inconel 718, high performance cutting with advanced cutting materials, such as PCBN and cutting ceramics, is presented. Afterwards, the influence of different cooling strategies, such as high-pressure lubricoolant supply and cryogenic cooling, during machining of TiAl6V4 is shown.
A double-sided linear primary permanent magnet vernier machine.
Du, Yi; Zou, Chunhua; Liu, Xianxing
2015-01-01
The purpose of this paper is to present a new double-sided linear primary permanent magnet (PM) vernier (DSLPPMV) machine, which can offer high thrust force, low detent force, and improved power factor. Both PMs and windings of the proposed machine are on the short translator, while the long stator is designed as a double-sided simple iron core with salient teeth so that it is very robust to transmit high thrust force. The key of this new machine is the introduction of double stator and the elimination of translator yoke, so that the inductance and the volume of the machine can be reduced. Hence, the proposed machine offers improved power factor and thrust force density. The electromagnetic performances of the proposed machine are analyzed including flux, no-load EMF, thrust force density, and inductance. Based on using the finite element analysis, the characteristics and performances of the proposed machine are assessed.
Tool simplifies machining of pipe ends for precision welding
NASA Technical Reports Server (NTRS)
Matus, S. T.
1969-01-01
Single tool prepares a pipe end for precision welding by simultaneously performing internal machining, end facing, and bevel cutting to specification standards. The machining operation requires only one milling adjustment, can be performed quickly, and produces the high quality pipe-end configurations required to ensure precision-welded joints.
Machine Shop. Performance Objectives. Basic Course.
ERIC Educational Resources Information Center
Hilton, Arthur; Lambert, George
Several intermediate performance objectives and corresponding criterion measures are listed for each of 13 terminal objectives for a high school basic machine shop course. The materials were developed for a 36-week course (2 hours daily) designed to enable students to become familiar with the operation of machine shop equipment, to become familiar…
EDM machinability of SiCw/Al composites
NASA Technical Reports Server (NTRS)
Ramulu, M.; Taya, M.
1989-01-01
Machinability of high temperature composites was investigated. Target materials, 15 and 25 vol pct SiC whisker-2124 aluminum composites, were machined by electrodischarge sinker machining and diamond saw. The machined surfaces of these metal matrix composites were examined by SEM and profilometry to determine the surface finish. Microhardness measurements were also performed on the as-machined composites.
Statistical machine translation for biomedical text: are we there yet?
Wu, Cuijun; Xia, Fei; Deleger, Louise; Solti, Imre
2011-01-01
In our paper we addressed the research question: "Has machine translation achieved sufficiently high quality to translate PubMed titles for patients?". We analyzed statistical machine translation output for six foreign-language/English translation pairs (bidirectionally). We built a high-performing in-house system and evaluated its output for each translation pair on a large scale, both with automated BLEU scores and human judgment. In addition to the in-house system, we also evaluated Google Translate's performance specifically within the biomedical domain. We report high performance for the German, French and Spanish to/from English translation pairs for both Google Translate and our system.
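To make the automated evaluation step concrete, here is a minimal corpus-level BLEU computation using NLTK; the toy sentence pairs are placeholders, not the study's PubMed data.

```python
# Minimal sketch of corpus-level BLEU scoring with NLTK (toy data only).
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [[["the", "results", "of", "the", "clinical", "trial"]],
              [["treatment", "of", "high", "blood", "pressure"]]]
hypotheses = [["the", "results", "of", "a", "clinical", "trial"],
              ["treatment", "of", "high", "blood", "pressure"]]

# Smoothing avoids zero scores when a short sentence misses an n-gram order.
score = corpus_bleu(references, hypotheses,
                    smoothing_function=SmoothingFunction().method1)
print(f"BLEU = {score:.3f}")
```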
Predictive Surface Roughness Model for End Milling of Machinable Glass Ceramic
NASA Astrophysics Data System (ADS)
Mohan Reddy, M.; Gorin, Alexander; Abou-El-Hossein, K. A.
2011-02-01
Machinable glass ceramic is an attractive advanced ceramic material for producing high-accuracy miniaturized components for many applications in industries such as aerospace, electronics, biomedical, automotive, environmental and communications, owing to its wear resistance, high hardness, high compressive strength, good corrosion resistance and excellent high-temperature properties. Much research has been conducted in the last few years to investigate the performance of different machining operations when processing various advanced ceramics. Micro end-milling is one machining method that meets the demand for micro parts. Selecting proper machining parameters is important to obtain good surface finish during machining of machinable glass ceramic. Therefore, this paper describes the development of a predictive model for the surface roughness of machinable glass ceramic in terms of speed and feed rate in the micro end-milling operation.
Communication Studies of DMP and SMP Machines
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
Understanding the interplay between machines and problems is key to obtaining high performance on parallel machines. This paper investigates the interplay between programming paradigms and the communication capabilities of parallel machines. In particular, we explicate the communication capabilities of the IBM SP-2 distributed-memory multiprocessor and the SGI PowerCHALLENGEarray symmetric multiprocessor. Two benchmark problems, bitonic sorting and the Fast Fourier Transform, are selected for experiments. Communication-efficient algorithms are developed to exploit the overlapping capabilities of the machines. Programs are written in the Message-Passing Interface for portability, and identical codes are used for both machines. Various data sizes and message sizes are used to test the machines' communication capabilities. Experimental results indicate that the communication performance of the multiprocessors is consistent with the size of messages. The SP-2 is sensitive to message size but yields much higher communication overlapping because of its communication co-processor. The PowerCHALLENGEarray is not highly sensitive to message size and yields low communication overlapping. Bitonic sorting yields lower performance than FFT due to a smaller computation-to-communication ratio.
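A minimal version of this kind of message-size sensitivity test can be written with mpi4py; the ping-pong pattern and sizes below are illustrative, not the benchmark codes used in the paper.

```python
# Hedged sketch: ping-pong timing vs. message size with mpi4py.
# Run with two ranks, e.g.: mpirun -n 2 python pingpong.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

for n in [2**10, 2**15, 2**20]:          # message sizes in doubles
    buf = np.zeros(n, dtype="d")
    comm.Barrier()
    t0 = MPI.Wtime()
    if rank == 0:
        comm.Send(buf, dest=1); comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0); comm.Send(buf, dest=0)
    comm.Barrier()
    if rank == 0:
        print(f"{8*n:>9d} bytes round trip: {MPI.Wtime() - t0:.6f} s")
```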
Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?
Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W
2018-03-01
The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. By utilizing patients' health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of post-myocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator (LASSO), and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally performs slightly better than both in terms of mean squared error, when a bias-based analysis is used.
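As a loose illustration of one compared approach, LASSO-based screening of a high-dimensional covariate space followed by propensity-score estimation, consider the sketch below; the synthetic data and settings are assumptions, not the study's claims data.

```python
# Hedged sketch: L1-penalized screening of empirical covariates for a
# propensity score model (synthetic data; not the hdPS algorithm itself).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 200))                      # hypothetical claims-derived proxies
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # exposure driven by a few proxies

# L1-penalized logistic regression screens the high-dimensional covariate space.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, treat)
selected = np.flatnonzero(lasso.coef_[0])
print(f"{selected.size} covariates retained")

# Refit an ordinary propensity score model on the selected covariates.
ps_model = LogisticRegression(max_iter=1000).fit(X[:, selected], treat)
propensity = ps_model.predict_proba(X[:, selected])[:, 1]
```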
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demerdash, N.A.; Nehl, T.W.; Nyamusa, T.A.
1985-08-01
Effects of high momentary overloads on the samarium-cobalt and strontium-ferrite permanent magnets and the magnetic field in electronically commutated brushless dc machines, as well as their impact on the associated machine parameters, were studied. The effect of overload on the machine parameters, and subsequently on machine system performance, was also investigated. This was accomplished through the combined use of finite element analysis of the magnetic field in such machines, perturbation of the magnetic energies to determine machine inductances, and dynamic simulation of the performance of brushless dc machines energized from voltage source inverters. These effects were investigated through application of the above methods to two equivalent 15 hp brushless dc motors, one built with samarium-cobalt magnets and the other with strontium-ferrite magnets. For momentary overloads as high as 4.5 p.u., magnet flux reductions of 29% and 42% of the no-load flux were obtained in the samarium-cobalt and strontium-ferrite machines, respectively. Corresponding reductions in the line-to-line armature inductances of 52% and 46% of the no-load values were reported for the samarium-cobalt and strontium-ferrite cases, respectively. The overload affected the profiles and magnitudes of the armature induced back-emfs. Subsequently, the effects of overload on machine parameters were found to have significant impact on the performance of the machine systems, where findings indicate that the samarium-cobalt unit is better suited for higher overload duties than the strontium-ferrite machine.
Superconductor Armature Winding for High Performance Electrical Machines
2016-12-05
Vol. 3, pp. 489-507. [Kalsi1] S. S. Kalsi, 'Superconducting Wind Turbine Generator Employing MgB2 Windings Both on Rotor and Stator', IEEE Trans. on... Contract Number: N00014-14-1-0272. Contract Title: Superconductor armature winding for high performance electrical... an all-superconducting machine. Superconductor armature windings in electrical machines bring many design challenges that need to be addressed...
Marucci-Wellman, Helen R; Corns, Helen L; Lehto, Mark R
2017-01-01
Injury narratives are now available in real time and include useful information for injury surveillance and prevention. However, manual classification of the causes or events leading to injury found in large batches of narratives, such as workers' compensation claims databases, can be prohibitive. In this study we compare the utility of four machine learning algorithms (Naïve Bayes single-word and bi-gram models, Support Vector Machine and Logistic Regression) for classifying narratives into Bureau of Labor Statistics Occupational Injury and Illness event-leading-to-injury classifications for a large workers' compensation database. These algorithms are known to do well classifying narrative text and are fairly easy to implement with off-the-shelf software packages such as Python. We propose human-machine learning ensemble approaches which maximize the power and accuracy of the algorithms for machine-assigned codes and allow for strategic filtering of rare, emerging or ambiguous narratives for manual review. We compare human-machine approaches based on filtering on the prediction strength of the classifier vs. agreement between algorithms. Regularized Logistic Regression (LR) was the best performing algorithm alone. Using this algorithm and filtering out the bottom 30% of predictions for manual review resulted in high accuracy (overall sensitivity/positive predictive value of 0.89) of the final machine-human coded dataset. The best pairings of algorithms included Naïve Bayes with Support Vector Machine, whereby the triple ensemble NB(single-word) = NB(bi-gram) = SVM had very high performance (0.93 overall sensitivity/positive predictive value) with high accuracy (i.e. high sensitivity and positive predictive values) across both large and small categories, leaving 41% of the narratives for manual review. Integrating LR into this ensemble mix improved performance only slightly. For large administrative datasets we propose incorporating methods based on human-machine pairings such as those used here, utilizing readily available off-the-shelf machine learning techniques and leaving only a fraction of narratives to require manual review. Human-machine ensemble methods are likely to improve performance over fully manual coding.
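The prediction-strength filtering strategy can be sketched in a few lines: narratives whose top predicted probability falls below a threshold are routed to manual review. The classifier, threshold and toy narratives below are placeholders, not the study's tuned pipeline.

```python
# Hedged sketch of human-machine filtering: auto-code confident predictions,
# send weak ones to manual review (placeholder threshold and toy data).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

narratives = ["worker fell from ladder", "hand caught in press", "slipped on wet floor"]
codes = ["fall", "caught_in", "fall"]                 # toy event codes

vec = TfidfVectorizer()
X = vec.fit_transform(narratives)
clf = LogisticRegression(max_iter=1000).fit(X, codes)

proba = clf.predict_proba(X)
strength = proba.max(axis=1)                          # prediction strength per narrative
threshold = np.quantile(strength, 0.30)               # bottom 30% go to manual review
for text, s, label in zip(narratives, strength, clf.predict(X)):
    route = "machine" if s > threshold else "manual review"
    print(f"{route:14s} ({s:.2f}) {label:9s} {text}")
```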
Expanding the Scope of High-Performance Computing Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uram, Thomas D.; Papka, Michael E.
The high-performance computing centers of the future will expand their roles as service providers, and as the machines scale up, so should the sizes of the communities they serve. National facilities must cultivate their users as much as they focus on operating machines reliably. The authors present five interrelated topic areas that are essential to expanding the value provided to those performing computational science.
7 CFR 3201.41 - Metalworking fluids.
Code of Federal Regulations, 2013 CFR
2013-01-01
... feedstock during grinding and machining operations involving unusually high temperatures or corrosion... prevention when applied to metal feedstock during normal grinding and machining operations. (iii) High... percent. (3) High performance soluble, semi-synthetic, and synthetic oils—40 percent. (c) Preference...
7 CFR 3201.41 - Metalworking fluids.
Code of Federal Regulations, 2012 CFR
2012-01-01
... feedstock during grinding and machining operations involving unusually high temperatures or corrosion... prevention when applied to metal feedstock during normal grinding and machining operations. (iii) High... percent. (3) High performance soluble, semi-synthetic, and synthetic oils—40 percent. (c) Preference...
7 CFR 3201.41 - Metalworking fluids.
Code of Federal Regulations, 2014 CFR
2014-01-01
... feedstock during grinding and machining operations involving unusually high temperatures or corrosion... prevention when applied to metal feedstock during normal grinding and machining operations. (iii) High... percent. (3) High performance soluble, semi-synthetic, and synthetic oils—40 percent. (c) Preference...
Cold machining of high density tungsten and other materials
NASA Technical Reports Server (NTRS)
Ziegelmeier, P.
1969-01-01
The cold machining process, which uses a sub-zero refrigerated cutting fluid, is applied to the machining of refractory or reactive metals and alloys. Special carbide tools for turning and drilling these alloys further improve cutting performance.
NASA Astrophysics Data System (ADS)
Robert-Perron, Etienne; Blais, Carl; Pelletier, Sylvain; Thomas, Yannig
2007-06-01
The green machining process is an interesting approach to addressing the mediocre machining behavior of high-performance powder metallurgy (PM) steels, and it appears to be a promising method for extending tool life and reducing machining costs. Recent improvements in binder/lubricant technologies have led to high green strength systems that enable green machining. So far, tool wear has been considered negligible when characterizing the machinability of green PM specimens; this inaccurate assumption may lead to the selection of suboptimal cutting conditions. The first part of this study involves optimizing the machining parameters to minimize the effects of tool wear on machinability in turning of green PM components. The second part compares the sintered mechanical properties of components machined in the green state with others machined after sintering.
Multiprocessor Z-Buffer Architecture for High-Speed, High Complexity Computer Image Generation.
1983-12-01
[List-of-figures fragment: oversampling, "poking through" effects, sampling paths, triangle variables, intelligent tiling algorithm, tiler functional blocks, HSD interface, tiling machine setup, tiling machine, tile accumulate, 1x8 and 2x8 sorting machines, effect of triangle size on tiler throughput rates, tiling machine setup stage performance for oversample mode.]
Machine-z: Rapid Machine-Learned Redshift Indicator for Swift Gamma-Ray Bursts
NASA Technical Reports Server (NTRS)
Ukwatta, T. N.; Wozniak, P. R.; Gehrels, N.
2016-01-01
Studies of high-redshift gamma-ray bursts (GRBs) provide important information about the early Universe such as the rates of stellar collapsars and mergers, the metallicity content, constraints on the re-ionization period, and probes of the Hubble expansion. Rapid selection of high-z candidates from GRB samples reported in real time by dedicated space missions such as Swift is the key to identifying the most distant bursts before the optical afterglow becomes too dim to warrant a good spectrum. Here, we introduce 'machine-z', a redshift prediction algorithm and a 'high-z' classifier for Swift GRBs based on machine learning. Our method relies exclusively on canonical data commonly available within the first few hours after the GRB trigger. Using a sample of 284 bursts with measured redshifts, we trained a randomized ensemble of decision trees (random forest) to perform both regression and classification. Cross-validated performance studies show that the correlation coefficient between machine-z predictions and the true redshift is nearly 0.6. At the same time, our high-z classifier can achieve 80 per cent recall of true high-redshift bursts, while incurring a false positive rate of 20 per cent. With 40 per cent false positive rate the classifier can achieve approximately 100 per cent recall. The most reliable selection of high-redshift GRBs is obtained by combining predictions from both the high-z classifier and the machine-z regressor.
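A stripped-down version of the described setup, random-forest regression for machine-z plus a binary high-z classifier evaluated by cross-validation, might look like the following; the features, redshift threshold and synthetic data are placeholders for the canonical Swift trigger quantities.

```python
# Hedged sketch of the machine-z idea: random-forest regression and a high-z
# classifier with cross-validation (synthetic stand-ins for Swift features).
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(284, 10))           # placeholder prompt-emission features
z = np.abs(rng.normal(2.0, 1.5, 284))    # placeholder measured redshifts

z_pred = cross_val_predict(RandomForestRegressor(n_estimators=300, random_state=0),
                           X, z, cv=10)
print("corr(machine-z, z) =", np.corrcoef(z_pred, z)[0, 1].round(2))

high_z = (z > 4.0).astype(int)           # 'high-z' label; threshold is assumed
flag = cross_val_predict(RandomForestClassifier(n_estimators=300, random_state=0),
                         X, high_z, cv=10)
recall = flag[high_z == 1].mean() if high_z.sum() else float("nan")
print("recall of true high-z bursts =", round(float(recall), 2))
```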
NASA Technical Reports Server (NTRS)
Saini, Subash; Bailey, David; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
High Performance Fortran (HPF), the high-level language for parallel Fortran programming, is based on Fortran 90. HPF was defined by an informal standards committee known as the High Performance Fortran Forum (HPFF) in 1993, and modeled on TMC's CM Fortran language. Several HPF features have since been incorporated into the draft ANSI/ISO Fortran 95, the next formal revision of the Fortran standard. HPF allows users to write a single parallel program that can execute on a serial machine, a shared-memory parallel machine, or a distributed-memory parallel machine. HPF eliminates the complex, error-prone task of explicitly specifying how, where, and when to pass messages between processors on distributed-memory machines, or when to synchronize processors on shared-memory machines. HPF is designed in a way that allows the programmer to code an application at a high level, and then selectively optimize portions of the code by dropping into message passing or calling tuned library routines as 'extrinsics'. Compilers supporting High Performance Fortran features first appeared in late 1994 and early 1995 from Applied Parallel Research (APR), Digital Equipment Corporation, and The Portland Group (PGI). IBM introduced an HPF compiler for the IBM RS/6000 SP/2 in April of 1996. Over the past two years, these implementations have shown steady improvement in terms of both features and performance. The performance of various hardware/programming model (HPF and MPI (message passing interface)) combinations will be compared, based on the latest NAS (NASA Advanced Supercomputing) Parallel Benchmark (NPB) results, thus providing a cross-machine and cross-model comparison. Specifically, HPF-based NPB results will be compared with MPI-based NPB results to provide perspective on the performance currently obtainable using HPF versus MPI or versus hand-tuned implementations such as those supplied by the hardware vendors. In addition, we also present NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, and SGI Origin2000.
Motion Simulation Analysis of Rail Weld CNC Fine Milling Machine
NASA Astrophysics Data System (ADS)
Mao, Huajie; Shu, Min; Li, Chao; Zhang, Baojun
The CNC fine milling machine is a new, advanced piece of equipment for precision machining of rail welds, with high precision, high efficiency, low environmental pollution and other technical advantages. The motion performance of this machine directly affects its machining accuracy and stability, making it an important design consideration. Based on the design drawings, 3D modeling of a 60 kg/m rail weld CNC fine milling machine was completed using Solidworks. The geometry was then imported into Adams for motion simulation analysis. The displacement, velocity, angular velocity and other kinematic parameter curves of the main components were obtained in post-processing; these provide a scientific basis for the design and development of this machine.
Department of Defense In-House RDT and E Activities: Management Analysis Report for Fiscal Year 1993
1994-11-01
A worldwide unique lab because it houses a high-speed modeling and simulation system, a prototype... E Division, San Diego, CA: High Performance Computing Laboratory providing a wide range of advanced computer systems for scientific investigation... Machines CM-200 and a 256-node Thinking Machines CM-5. The CM-5 is in a very large memory, high-performance (32 Gbytes, >40 GFlop) configuration,
Crowe, Simon F; Mahony, Kate; Jackson, Martin
2004-08-01
The purpose of the current study was to explore whether performance on standardised neuropsychological measures could predict functional ability with automated machines and services among people with an acquired brain injury (ABI). Participants were 45 individuals who met the criteria for mild, moderate or severe ABI and 15 control participants matched on demographic variables including age and education. Each participant completed a battery of neuropsychological tests and performed three automated service delivery tasks: a transport automated ticketing machine, an automated teller machine (ATM) and an automated telephone service. The results showed consistently high relationships between the neuropsychological measures, both as single predictors and in combination, and level of competency with the automated machines. Automated machines are part of a relatively new phenomenon in service delivery and offer an ecologically valid functional measure of performance that represents a true indication of functional disability.
Zhang, Xiaodong; Zeng, Zhen; Liu, Xianlei; Fang, Fengzhou
2015-09-21
Freeform surfaces are promising candidates for next-generation optics; however, they need high form accuracy for excellent performance. A closed loop of fabrication-measurement-compensation is necessary for improving form accuracy. It is difficult to perform an off-machine measurement during freeform machining because remounting inaccuracy can result in significant form deviations. On the other hand, on-machine measurement may hide the systematic errors of the machine because the measuring device is placed in situ on the machine. This study proposes a new compensation strategy based on the combination of on-machine and off-machine measurement. The freeform surface is measured off-machine with nanometric accuracy, and the on-machine probe establishes the accurate relative position between the workpiece and machine after remounting. The compensation cutting path is generated according to the calculated relative position and shape errors, avoiding extra manual adjustment or a highly accurate reference-feature fixture. Experimental results verified the effectiveness of the proposed method.
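The key numerical step, aligning the off-machine measurement frame to machine coordinates from a handful of on-machine probe points, amounts to a rigid best fit. A minimal Kabsch-style sketch (hypothetical point sets, not the authors' implementation) follows.

```python
# Hedged sketch: rigid best-fit (Kabsch) of probed points to the off-machine
# measurement frame, used to re-locate form errors after remounting.
import numpy as np

def best_fit_rigid(P, Q):
    """Return rotation R and translation t minimizing ||R @ P + t - Q||, P,Q: 3xN."""
    cP, cQ = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, (cQ - R @ cP).ravel()

P = np.random.rand(3, 6)                 # hypothetical on-machine probe points
t_true = np.array([0.01, -0.02, 0.005])
Q = P + t_true[:, None]                  # same points seen in the measurement frame
R, t = best_fit_rigid(P, Q)
print("recovered translation:", np.round(t, 4))
# The fitted (R, t) maps the measured error map into machine coordinates; the
# compensation path is then the nominal path minus the relocated errors.
```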
High efficiency machining technology and equipment for edge chamfer of KDP crystals
NASA Astrophysics Data System (ADS)
Chen, Dongsheng; Wang, Baorui; Chen, Jihong
2016-10-01
Potassium dihydrogen phosphate (KDP) is a type of nonlinear optical crystal material. To inhibit transverse stimulated Raman scattering of the laser beam and thereby enhance the optical performance of the optics, the edges of large-sized KDP crystals need to be removed to form chamfered faces with high surface quality (RMS < 5 nm). However, as the depth of cut (DOC) in fly cutting is usually only a few micrometers, its machining efficiency is too low to be acceptable for chamfering of KDP crystals, where the amount of material to be removed is on the order of millimeters. This paper proposes a novel hybrid machining method, combining precision grinding with fly cutting, for crack-free, high-efficiency chamfering of KDP crystals. A specialized machine tool, which adopts an aerostatic-bearing linear slide and an aerostatic-bearing spindle, was developed for chamfering the KDP crystal. The aerostatic-bearing linear slide consists of an aerostatic guide with linearity of 0.1 μm/100 mm and a linear motor, achieving linear feeding with high precision and high dynamic performance. The vertical spindle consists of an aerostatic-bearing spindle with axial rotation accuracy of 0.05 μm and a fork-type flexible-connection precision driving mechanism. Machining experiments on fly cutting and grinding were carried out, and optimized machining parameters were obtained through a series of experiments. Surface roughness of 2.4 nm has been obtained. Machining efficiency can be improved sixfold using the combined method while producing the same machined surface quality.
High-efficiency machining methods for aviation materials
NASA Astrophysics Data System (ADS)
Kononov, V. K.
1991-07-01
The papers contained in this volume present results of theoretical and experimental studies aimed at increasing the efficiency of cutting tools during the machining of high-temperature materials and titanium alloys. Specific topics discussed include a study of the performance of disk cutters during the machining of flexible parts of a high-temperature alloy, VZhL14N; a study of the wear resistance of cutters of hard alloys of various types; effect of a deformed electric field on the precision of the electrochemical machining of gas turbine engine components; and efficient machining of parts of composite materials. The discussion also covers the effect of the technological process structure on the residual stress distribution in the blades of gas turbine engines; modeling of the multiparameter assembly of engineering products for a specified priority of geometrical output parameters; and a study of the quality of the surface and surface layer of specimens machined by a high-temperature pulsed plasma.
Design and Experimental Validation for Direct-Drive Fault-Tolerant Permanent-Magnet Vernier Machines
Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian
2014-01-01
A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines have the ability of high torque density by introducing the flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristic of PMV machines and provides a design method, which is able to not only meet the fault-tolerant requirements but also keep the ability of high torque density. The operation principle of the proposed machine has been analyzed. The design process and optimization are presented specifically, such as the combination of slots and poles, the winding distribution, and the dimensions of PMs and teeth. By using the time-stepping finite element method (TS-FEM), the machine performances are evaluated. Finally, the FT-PMV machine is manufactured, and the experimental results are presented to validate the theoretical analysis.
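For orientation, the magnetic gearing effect mentioned above is commonly summarized by the relation Z_FMP = p_PM + p_armature between the flux-modulation pole count and the PM and armature pole-pair numbers; the short check below uses illustrative numbers, not this paper's design values.

```python
# Hedged sketch of the standard vernier (magnetic gearing) relation;
# example numbers are illustrative, not the FT-PMV design in this paper.
p_armature = 2                   # armature winding pole pairs
z_fmp = 24                       # flux-modulation pole (FMP) count
p_pm = z_fmp - p_armature        # PM pole pairs implied by Z_FMP = p_PM + p_arm
gear_ratio = p_pm / p_armature   # torque/speed ratio from the gearing effect
print(f"PM pole pairs: {p_pm}, gear ratio: {gear_ratio:.1f}")
```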
CAM: A high-performance cellular-automaton machine
NASA Astrophysics Data System (ADS)
Toffoli, Tommaso
1984-01-01
CAM is a high-performance machine dedicated to the simulation of cellular automata and other distributed dynamical systems. Its speed is about one-thousand times greater than that of a general-purpose computer programmed to do the same task; in practical terms, this means that CAM can show the evolution of cellular automata on a color monitor with an update rate, dynamic range, and spatial resolution comparable to those of a Super-8 movie, thus permitting intensive interactive experimentation. Machines of this kind can open up novel fields of research, and in this context it is important that results be easy to obtain, reproduce, and transmit. For these reasons, in designing CAM it was important to achieve functional simplicity, high flexibility, and moderate production cost. We expect that many research groups will be able to own their own copy of the machine to do research with.
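To see concretely what such a machine accelerates, the snippet below gives a software reference for a synchronous per-cell update rule (Conway's Life on a toroidal grid), the kind of step CAM evaluates in hardware at video rates.

```python
# Software reference for the per-step cellular-automaton update that CAM-class
# hardware performs per frame (Conway's Life, toroidal neighborhood via np.roll).
import numpy as np

def life_step(grid):
    neighbors = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # A cell is alive next step if it has 3 neighbors, or is alive with 2.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(np.uint8)

grid = (np.random.rand(64, 64) < 0.2).astype(np.uint8)
for _ in range(100):
    grid = life_step(grid)
print("live cells after 100 steps:", int(grid.sum()))
```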
Engineered Surface Properties of Porous Tungsten from Cryogenic Machining
NASA Astrophysics Data System (ADS)
Schoop, Julius Malte
Porous tungsten is used to manufacture dispenser cathodes due to its refractory properties. Surface porosity is critical to the functional performance of dispenser cathodes because it allows an impregnated ceramic compound to migrate to the emitting surface, lowering its work function. Likewise, surface roughness is important because it is necessary to ensure uniform wetting of the molten impregnate during high-temperature service. Current industry practice to achieve surface roughness and surface porosity requirements involves the use of a plastic infiltrant during machining. After machining, the infiltrant is baked out and the cathode pellet is impregnated. In this context, cryogenic machining is investigated as a substitute for the current plastic infiltration process. Along with significant reductions in cycle time and resource use, the surface quality of cryogenically machined un-infiltrated (as-sintered) porous tungsten has been shown to significantly outperform that of dry machining. The present study examines the relationship between machining parameters, cooling condition, and the as-machined surface integrity of porous tungsten. The effects of cryogenic pre-cooling, rake angle, cutting speed, depth of cut and feed are all considered with respect to machining-induced surface morphology. Cermet and polycrystalline diamond (PCD) cutting tools are used to develop high performance cryogenic machining of porous tungsten. Dry and pre-heated machining were investigated as means to allow ductile-mode machining, yet severe tool wear and undesirable smearing limited the feasibility of these approaches. By using modified PCD cutting tools, high speed machining of porous tungsten at cutting speeds up to 400 m/min is achieved for the first time. Beyond a critical speed, brittle fracture and built-up edge are eliminated as the result of a brittle-to-ductile transition. A model of critical chip thickness (hc) effects based on cutting force, temperature and surface roughness data is developed and used to study the deformation mechanisms of porous tungsten under different machining conditions. It is found that when hmax = hc, ductile-mode machining of otherwise highly brittle porous tungsten is possible. The value of hc is approximately the same as the average ligament size of the 80% density porous tungsten workpiece.
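The critical chip thickness argument reduces to a simple decision rule: ductile-mode removal is expected when the maximum undeformed chip thickness stays at or below hc. The sketch below assumes hmax is supplied by a kinematic model of the actual operation; the numbers are invented.

```python
# Hedged sketch of the ductile/brittle decision rule around critical chip
# thickness h_c; h_max must come from a kinematic model of the actual process.
def machining_mode(h_max_um, h_c_um):
    """Ductile-mode removal is expected when h_max does not exceed h_c."""
    return "ductile" if h_max_um <= h_c_um else "brittle/fracture"

h_c = 2.5                              # invented critical chip thickness, um
for h_max in [1.0, 2.5, 6.0]:          # invented candidate feeds -> h_max, um
    print(f"h_max = {h_max:4.1f} um -> {machining_mode(h_max, h_c)}")
```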
Filament winding technique, experiment and simulation analysis on tubular structure
NASA Astrophysics Data System (ADS)
Quanjin, Ma; Rejab, M. R. M.; Kaige, Jiang; Idris, M. S.; Harith, M. N.
2018-04-01
The filament winding process has emerged as one of the potential composite fabrication processes with lower costs. Filament-wound products include classic axisymmetric parts (pipes, rings, driveshafts, high-pressure vessels and storage tanks) and non-axisymmetric parts (prismatic non-round sections and pipe fittings). Because the 3-axis filament winding machine was designed with an inexpensive control system, it is necessary to make a direct comparison between experiment and simulation on a tubular structure. The aim of this paper is to perform a dry winding experiment using the 3-axis filament winding machine and to simulate the winding process on the tubular structure using CADWIND software with 30°, 45° and 60° winding angles. The main result indicates that the 3-axis filament winding machine can produce tubular structures with high winding pattern performance at different winding angles. The developed 3-axis winding machine still has weaknesses compared to the CADWIND simulation results for machines with more axes, in terms of winding pattern, turnaround impact, process error, thickness, friction impact, etc. In conclusion, the comparison results lead to improvements and recommendations for the 3-axis filament winding machine and give an intuitive understanding of its limitations and characteristics.
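The tie between winding angle and machine motion can be made explicit: assuming the winding angle is measured from the mandrel axis, the axial carriage advance per mandrel revolution on a cylinder of radius R is 2*pi*R/tan(alpha). A short sketch with example values:

```python
# Hedged sketch: helical fiber path on a cylindrical mandrel for a given
# winding angle (measured from the mandrel axis); R and angles are examples.
import numpy as np

R = 0.05                                    # mandrel radius, m (example)
for alpha_deg in (30.0, 45.0, 60.0):
    alpha = np.radians(alpha_deg)
    pitch = 2 * np.pi * R / np.tan(alpha)   # axial advance per mandrel revolution
    theta = np.linspace(0, 4 * np.pi, 200)  # two revolutions of the spindle
    x, y = R * np.cos(theta), R * np.sin(theta)
    z = (pitch / (2 * np.pi)) * theta       # carriage coordinated with spindle
    print(f"alpha = {alpha_deg:4.1f} deg -> axial advance {pitch*1e3:.1f} mm/rev")
```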
Online Sequential Projection Vector Machine with Adaptive Data Mean Update
Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei
2016-01-01
We propose a simple online learning algorithm especially for high-dimensional data. The algorithm is referred to as online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, and this makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of the OSPVM.
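As a generic point of comparison only (not OSPVM's actual update rules), the chunk-by-chunk pattern of jointly updating a projection step and a classifier can be imitated with scikit-learn's incremental APIs:

```python
# Generic chunk-by-chunk online learning sketch (an analogy to, not an
# implementation of, OSPVM): incremental projection + incremental classifier.
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
proj = IncrementalPCA(n_components=10)
clf = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])

for _ in range(20):                       # data arrives in chunks
    Xc = rng.normal(size=(100, 500))      # high-dimensional chunk
    yc = (Xc[:, 0] > 0).astype(int)
    proj.partial_fit(Xc)                  # update the projection vectors
    clf.partial_fit(proj.transform(Xc), yc, classes=classes)  # update weights
```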
Effects of pole flux distribution in a homopolar linear synchronous machine
NASA Astrophysics Data System (ADS)
Balchin, M. J.; Eastham, J. F.; Coles, P. C.
1994-05-01
Linear forms of synchronous electrical machine are at present being considered as the propulsion means in high-speed, magnetically levitated (Maglev) ground transportation systems. A homopolar form of machine is considered in which the primary member, which carries both ac and dc windings, is supported on the vehicle. Test results and theoretical predictions are presented for a machine design intended to drive a 100-passenger vehicle at a top speed of 400 km/h. The layout of the dc magnetic circuit is examined to locate the best position for the dc winding from the point of view of minimum core weight. Measurements of flux build-up under the machine at different operating speeds are given for two types of secondary pole: solid and laminated. The solid-pole results, which are confirmed theoretically, show that this form of construction is impractical for high-speed drives. Measured motoring characteristics are presented for a short length of machine which simulates conditions at the leading and trailing ends of the full-sized machine. Combining these results with those from a cylindrical version of the machine makes it possible to infer the performance of the full-sized traction machine: 0.8 power factor and 0.9 efficiency at 300 km/h, which is much better than the reported performance of a comparable linear induction motor (0.52 power factor and 0.82 efficiency). It is therefore concluded that in any projected high-speed Maglev system, a linear synchronous machine should be the first choice as the propulsion means.
A new class of high-G and long-duration shock testing machines
NASA Astrophysics Data System (ADS)
Rastegar, Jahangir
2018-03-01
Currently available methods and systems for testing components for survival and performance under shock loading suffer from several shortcomings when used to simulate high-G acceleration events of relatively long duration. Such events include most munitions firing and target impacts, vehicular accidents, drops from relatively high heights, air drops, impacts between machine components, and other similar events. In this paper, a new class of shock testing machines is presented that can subject components under test to high-G acceleration pulses of prescribed amplitude and relatively long duration. The machines provide highly repeatable component testing. The components are mounted on an open platform for ease of instrumentation and video recording of their dynamic behavior during shock loading tests.
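A useful back-of-envelope check when specifying such tests is the velocity change implied by a high-G, long-duration pulse: for an ideal half-sine pulse of peak amplitude A and duration T, delta-v = 2AT/pi. The values below are illustrative, not test-spec numbers.

```python
# Worked check: velocity change of an ideal half-sine shock pulse,
# delta_v = 2*A*T/pi (illustrative amplitude/duration, not spec values).
import math

g = 9.81
A = 10000 * g          # peak acceleration: 10,000 G
T = 5e-3               # pulse duration: 5 ms (long for this amplitude)
delta_v = 2 * A * T / math.pi
print(f"delta-v = {delta_v:.0f} m/s")  # ~312 m/s: why long pulses are hard to produce
```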
Evaluating the Security of Machine Learning Algorithms
2008-05-20
Two far-reaching trends in computing have grown in significance in recent years. First, statistical machine learning has entered the mainstream as a...computing applications. The growing intersection of these trends compels us to investigate how well machine learning performs under adversarial conditions... machine learning has a structure that we can use to build secure learning systems. This thesis makes three high-level contributions. First, we develop a
High speed operation of permanent magnet machines
NASA Astrophysics Data System (ADS)
El-Refaie, Ayman M.
This work proposes methods to extend the high-speed operating capabilities of both the interior PM (IPM) and surface PM (SPM) machines. For interior PM machines, this research has developed and presented the first thorough analysis of how a new bi-state magnetic material can be usefully applied to the design of IPM machines. Key elements of this contribution include identifying how the unique properties of the bi-state magnetic material can be applied most effectively in the rotor design of an IPM machine by "unmagnetizing" the magnet cavity center posts rather than the outer bridges. The importance of elevated rotor speed in making the best use of the bi-state magnetic material while recognizing its limitations has been identified. For surface PM machines, this research has provided, for the first time, a clear explanation of how fractional-slot concentrated windings can be applied to SPM machines in order to achieve the necessary conditions for optimal flux weakening. A closed-form analytical procedure for analyzing SPM machines designed with concentrated windings has been developed. Guidelines for designing SPM machines using concentrated windings in order to achieve optimum flux weakening are provided. Analytical and numerical finite element analysis (FEA) results have provided promising evidence of the scalability of the concentrated winding technique with respect to the number of poles, machine aspect ratio, and output power rating. Useful comparisons between the predicted performance characteristics of SPM machines equipped with concentrated windings and both SPM and IPM machines designed with distributed windings are included. Analytical techniques have been used to evaluate the impact of the high pole number on various converter performance metrics. Both analytical techniques and FEA have been used for evaluating the eddy-current losses in the surface magnets due to the stator winding subharmonics. Techniques for reducing these losses have been investigated. A 6kW, 36slot/30pole prototype SPM machine has been designed and built. Experimental measurements have been used to verify the analytical and FEA results. These test results have demonstrated that wide constant-power speed range can be achieved. Other important machine features such as the near-sinusoidal back-emf, high efficiency, and low cogging torque have also been demonstrated.
Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro
2018-05-09
Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.
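A rough illustration of the baseline comparison above (this is not the authors' cognitive-bias model): the five listed methods trained on a deliberately small sample of a simulated, imbalanced spam-like task:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Simulated spam-like task; training restricted to a small sample of 50.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=50, stratify=y,
                                          random_state=0)
for name, model in [("naive Bayes", GaussianNB()),
                    ("support vector machine", SVC()),
                    ("neural network", MLPClassifier(max_iter=2000, random_state=0)),
                    ("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    print(name, round(model.fit(X_tr, y_tr).score(X_te, y_te), 3))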
High frequency testing of rubber mounts.
Vahdati, Nader; Saunders, L Ken Lauderbaugh
2002-04-01
Rubber and fluid-filled rubber engine mounts are commonly used in automotive and aerospace applications to provide reduced cabin noise and vibration, and/or motion accommodations. In certain applications, the rubber mount may operate at frequencies as high as 5000 Hz. Therefore, dynamic stiffness of the mount needs to be known in this frequency range. Commercial high frequency test machines are practically nonexistent, and the best high frequency test machine on the market is only capable of frequencies as high as 1000 Hz. In this paper, a high frequency test machine is described that allows test engineers to study the high frequency performance of rubber mounts at frequencies up to 5000 Hz.
Held, Elizabeth; Cape, Joshua; Tintle, Nathan
2016-01-01
Machine learning methods continue to show promise in the analysis of data from genetic association studies because of the high number of variables relative to the number of observations. However, few best practices exist for the application of these methods. We extend a recently proposed supervised machine learning approach for predicting disease risk by genotypes to be able to incorporate gene expression data and rare variants. We then apply 2 different versions of the approach (radial and linear support vector machines) to simulated data from Genetic Analysis Workshop 19 and compare performance to logistic regression. Method performance was not radically different across the 3 methods, although the linear support vector machine tended to show small gains in predictive ability relative to a radial support vector machine and logistic regression. Importantly, as the number of genes in the models was increased, even when those genes contained causal rare variants, model predictive ability showed a statistically significant decrease in performance for both the radial support vector machine and logistic regression. The linear support vector machine showed more robust performance to the inclusion of additional genes. Further work is needed to evaluate machine learning approaches on larger samples and to evaluate the relative improvement in model prediction from the incorporation of gene expression data.
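An illustrative sketch of the gene-inclusion effect described above, with simulated data standing in for the Genetic Analysis Workshop 19 set; the feature counts and noise model are our assumptions:

import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
signal = rng.normal(size=(n, 5))                      # informative "genes"
y = (signal.sum(axis=1) + rng.normal(scale=2.0, size=n) > 0).astype(int)
for n_noise in (0, 50, 200):                          # added uninformative genes
    X = np.hstack([signal, rng.normal(size=(n, n_noise))])
    for name, m in [("linear SVM", SVC(kernel="linear")),
                    ("radial SVM", SVC(kernel="rbf")),
                    ("logistic regression", LogisticRegression(max_iter=1000))]:
        score = cross_val_score(m, X, y, cv=5).mean()
        print(f"{n_noise:4d} noise genes  {name}: {score:.3f}")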
NASA Astrophysics Data System (ADS)
Pervaiz, S.; Anwar, S.; Kannan, S.; Almarfadi, A.
2018-04-01
Ti6Al4V is known as a difficult-to-cut material due to its inherent properties such as high hot hardness, low thermal conductivity and high chemical reactivity. Nonetheless, Ti6Al4V is widely utilized in industrial sectors such as aeronautics, energy generation, petrochemicals and bio-medicine. For the metal cutting community, competent and cost-effective machining of Ti6Al4V is a challenging task. To optimize cost and machining performance for the machining of Ti6Al4V, finite element based cutting simulation can be a very useful tool. The aim of this paper is to develop a finite element machining model for the simulation of the Ti6Al4V machining process. The study incorporates material constitutive models, namely the Power Law (PL) and Johnson-Cook (JC) material models, to mimic the mechanical behaviour of Ti6Al4V. The study investigates cutting temperatures, cutting forces, stresses, and plastic strains with respect to different PL and JC material models with associated parameters. In addition, the numerical study also integrates different cutting tool rake angles in the machining simulations. The simulated results will be beneficial for drawing conclusions to improve the overall machining performance of Ti6Al4V.
A comparative study on performance of CBN inserts when turning steel under dry and wet conditions
NASA Astrophysics Data System (ADS)
Abdullah Bagaber, Salem; Razlan Yusoff, Ahmad
2017-10-01
Cutting fluids are among the most unsustainable components of machining processes, negatively impacting the environment and requiring additional energy. Due to its high strength and corrosion resistance, the machinability of stainless steel has attracted considerable interest. This study aims to evaluate the performance of cubic boron nitride (CBN) inserts with respect to machining measures that include power consumption and surface roughness. Because of the high cost per cutting edge of CBN, such performance evaluation is of significant importance for hard finish turning. The present work also provides a comparative study of power consumption and surface roughness under dry and flood conditions. A turning process of stainless steel 316 was performed, and a response surface methodology based on a Box-Behnken design (BBD) was utilized for statistical analysis. The optimum process parameters are determined through an overall performance index. Dry and wet stainless-steel cuts are compared in terms of minimum energy and surface roughness. The results show that stainless steel can be machined under dry conditions with an 18.57% improvement in power consumption and acceptable quality compared to wet cutting. CBN tools in dry cutting of stainless steel can thus reduce environmental impacts, since no cutting fluid is used and less energy is required, which benefits machining productivity and profit.
Cost-effective lightweight mirrors for aerospace and defense
NASA Astrophysics Data System (ADS)
Woodard, Kenneth S.; Comstock, Lovell E.; Wamboldt, Leonard; Roy, Brian P.
2015-05-01
The demand for high performance, lightweight mirrors was historically driven by aerospace and defense (A&D) but now we are also seeing similar requirements for commercial applications. These applications range from aerospace-like platforms such as small unmanned aircraft for agricultural, mineral and pollutant aerial mapping to an eye tracking gimbaled mirror for optometry offices. While aerospace and defense businesses can often justify the high cost of exotic, low density materials, commercial products rarely can. Also, to obtain high performance with low overall optical system weight, aspheric surfaces are often prescribed. This may drive the manufacturing process to diamond machining thus requiring the reflective side of the mirror to be a diamond machinable material. This paper summarizes the diamond machined finishing and coating of some high performance, lightweight designs using non-exotic substrates to achieve cost effective mirrors. The results indicate that these processes can meet typical aerospace and defense requirements but may also be competitive in some commercial applications.
Performance testing of a high frequency link converter for Space Station power distribution system
NASA Technical Reports Server (NTRS)
Sul, S. K.; Alan, I.; Lipo, T. A.
1989-01-01
The testing of a brassboard version of a 20-kHz high-frequency ac voltage link prototype converter for Space Station applications is presented. The converter is based on a three-phase six-pulse bridge concept. The testing includes details of the operation of the converter when it is driving an induction machine source/load. By adapting a field orientation controller (FOC) to the converter, four-quadrant operation of the induction machine from the converter has been achieved. Circuit modifications carried out to improve the performance of the converter are described. The performance of two 400-Hz induction machines powered by the converter with simple V/f regulation mode is reported. The testing and performance results for the converter utilizing the FOC, which provides the capability for rapid torque changes, speed reversal, and four-quadrant operation, are reported.
The Effects of Operational Parameters on a Mono-wire Cutting System: Efficiency in Marble Processing
NASA Astrophysics Data System (ADS)
Yilmazkaya, Emre; Ozcelik, Yilmaz
2016-02-01
Mono-wire block cutting machines that cut with a diamond wire can be used for squaring natural stone blocks and for the slab-cutting process. The efficient use of these machines reduces operating costs by ensuring less diamond wire wear and longer wire life at high speeds. Because these machines carry high investment costs, their efficient use is essential to reducing production costs and increasing plant efficiency. Therefore, there is a need to investigate the cutting performance parameters of mono-wire cutting machines in terms of rock properties and operating parameters. This study aims to investigate the effects of the wire rotational speed (peripheral speed) and wire descending speed (cutting speed), which are the operating parameters of a mono-wire cutting machine, on unit wear and unit energy, which are the performance parameters in mono-wire cutting. Using the obtained results, cuttability charts for each natural stone were created on the basis of unit wear and unit energy values, cutting optimizations were performed, and the relationships between some physical and mechanical properties of rocks and the optimum cutting parameters obtained from the optimization were investigated.
High-throughput state-machine replication using software transactional memory.
Zhao, Wenbing; Yang, William; Zhang, Honglei; Yang, Jack; Luo, Xiong; Zhu, Yueqin; Yang, Mary; Luo, Chaomin
2016-11-01
State-machine replication is a common way of constructing general purpose fault tolerance systems. To ensure replica consistency, requests must be executed sequentially according to some total order at all non-faulty replicas. Unfortunately, this could severely limit the system throughput. This issue has been partially addressed by identifying non-conflicting requests based on application semantics and executing these requests concurrently. However, identifying and tracking non-conflicting requests require intimate knowledge of application design and implementation, and a custom fault tolerance solution developed for one application cannot be easily adopted by other applications. Software transactional memory offers a new way of constructing concurrent programs. In this article, we present the mechanisms needed to retrofit existing concurrency control algorithms designed for software transactional memory for state-machine replication. The main benefit of using software transactional memory in state-machine replication is that general purpose concurrency control mechanisms can be designed without deep knowledge of application semantics. As such, new fault tolerance systems based on state-machine replication with excellent throughput can be easily designed and maintained. In this article, we introduce three different concurrency control mechanisms for state-machine replication using software transactional memory, namely, ordered strong strict two-phase locking, conventional timestamp-based multiversion concurrency control, and speculative timestamp-based multiversion concurrency control. Our experiments show that the speculative timestamp-based multiversion concurrency control mechanism has the best performance in all types of workload, while the conventional timestamp-based multiversion concurrency control offers the worst performance due to a high abort rate in the presence of even moderate contention between transactions. The ordered strong strict two-phase locking mechanism offers the simplest solution, with excellent performance in low contention workloads and fairly good performance in high contention workloads.
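A toy, single-process sketch of the timestamp-based multiversion idea named above (greatly simplified relative to a replicated STM runtime; the abort rule shown is the textbook one, not necessarily the authors' speculative variant):

class MVStore:
    def __init__(self):
        self.versions = {}   # key -> list of (write_ts, value) versions
        self.read_ts = {}    # key -> largest transaction timestamp that read it

    def read(self, key, ts):
        # A transaction with timestamp ts sees the newest version at or before ts.
        hist = self.versions.get(key, [(0, None)])
        _, value = max((v for v in hist if v[0] <= ts), key=lambda v: v[0])
        self.read_ts[key] = max(self.read_ts.get(key, 0), ts)
        return value

    def write(self, key, value, ts):
        # Abort if a younger transaction already read past this writer.
        if self.read_ts.get(key, 0) > ts:
            return False
        self.versions.setdefault(key, [(0, None)]).append((ts, value))
        return True

store = MVStore()
store.write("x", 1, ts=1)
print(store.read("x", ts=2))        # -> 1 (newest version at or before ts=2)
print(store.write("x", 99, ts=1))   # -> False: aborted by the later read at ts=2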
Contrasting State-of-the-Art in the Machine Scoring of Short-Form Constructed Responses
ERIC Educational Resources Information Center
Shermis, Mark D.
2015-01-01
This study compared short-form constructed responses evaluated by both human raters and machine scoring algorithms. The context was a public competition on which both public competitors and commercial vendors vied to develop machine scoring algorithms that would match or exceed the performance of operational human raters in a summative high-stakes…
NASA Astrophysics Data System (ADS)
Matyunin, V. M.; Marchenkov, A. Yu.; Demidov, A. N.; Karimbekov, M. A.
2017-12-01
It is shown that depth-sensing indentation can be used for rapid (express) control of the mechanical properties of high-strength and hard-to-machine materials. This control can be performed at various stages of a technological cycle of processing materials and parts without preparing and testing tensile specimens, which significantly reduces the consumption of materials, time, and labor.
New Cogging Torque Reduction Methods for Permanent Magnet Machine
NASA Astrophysics Data System (ADS)
Bahrim, F. S.; Sulaiman, E.; Kumar, R.; Jusoh, L. I.
2017-08-01
Permanent magnet motors (PMs), especially the permanent magnet synchronous motor (PMSM), are expanding into industrial application systems and are widely used in various applications. The key features of this machine include high power and torque density, extended speed range, high efficiency, better dynamic performance and good flux-weakening capability. Nevertheless, high cogging torque, which may cause noise and vibration, is one of the threats to machine performance. Therefore, with the aid of 3-D finite element analysis (FEA) and simulation using JMAG Designer, this paper proposes new methods for cogging torque reduction. Based on the simulation, combining skewing with radial pole pairing and skewing with axial pole pairing reduces the cogging torque effect by up to 71.86% and 65.69%, respectively.
Uhlig, Johannes; Uhlig, Annemarie; Kunze, Meike; Beissbarth, Tim; Fischer, Uwe; Lotz, Joachim; Wienbeck, Susanne
2018-05-24
The purpose of this study is to evaluate the diagnostic performance of machine learning techniques for malignancy prediction at breast cone-beam CT (CBCT) and to compare them to human readers. Five machine learning techniques, including random forests, back propagation neural networks (BPN), extreme learning machines, support vector machines, and K-nearest neighbors, were used to train diagnostic models on a clinical breast CBCT dataset with internal validation by repeated 10-fold cross-validation. Two independent blinded human readers with profound experience in breast imaging and breast CBCT analyzed the same CBCT dataset. Diagnostic performance was compared using AUC, sensitivity, and specificity. The clinical dataset comprised 35 patients (American College of Radiology density type C and D breasts) with 81 suspicious breast lesions examined with contrast-enhanced breast CBCT. Forty-five lesions were histopathologically proven to be malignant. Among the machine learning techniques, BPNs provided the best diagnostic performance, with AUC of 0.91, sensitivity of 0.85, and specificity of 0.82. The diagnostic performance of the human readers was AUC of 0.84, sensitivity of 0.89, and specificity of 0.72 for reader 1 and AUC of 0.72, sensitivity of 0.71, and specificity of 0.67 for reader 2. AUC was significantly higher for BPN when compared with both reader 1 (p = 0.01) and reader 2 (p < 0.001). Machine learning techniques provide a high and robust diagnostic performance in the prediction of malignancy in breast lesions identified at CBCT. BPNs showed the best diagnostic performance, surpassing human readers in terms of AUC and specificity.
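A sketch of the validation protocol described above, with scikit-learn's MLP standing in for the back propagation neural network and simulated features standing in for the 81-lesion CBCT data:

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Simulated stand-in: 81 lesions, roughly 45 of which are "malignant".
X, y = make_classification(n_samples=81, n_features=12, weights=[0.44, 0.56],
                           random_state=0)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
auc = cross_val_score(MLPClassifier(max_iter=2000, random_state=0),
                      X, y, scoring="roc_auc", cv=cv)
print("repeated 10-fold CV AUC: %.2f +/- %.2f" % (auc.mean(), auc.std()))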
Montare, Alberto
2013-06-01
The three classical Donders' reaction time (RT) tasks (simple, choice, and discriminative RTs) were employed to compare reaction time scores from college students obtained by use of Montare's simplest chronoscope (meterstick) methodology to scores obtained by use of a digital-readout multi-choice reaction timer (machine). Five hypotheses were tested. Simple RT, choice RT, and discriminative RT were faster when obtained by meterstick than by machine. The meterstick method showed higher reliability than the machine method and was less variable. The meterstick method of the simplest chronoscope may help to alleviate the longstanding problems of low reliability and high variability of reaction time performance, while at the same time producing faster performance on Donders' simple, choice and discriminative RT tasks than the machine method.
NASA Astrophysics Data System (ADS)
Zhadanovsky, Boris; Sinenko, Sergey
2018-03-01
Economic indicators of construction work, particularly in high-rise construction, are directly related to the choice of an optimal number of machines. A shortage of machinery makes it impossible to complete construction and installation work on schedule. Rates of performance of construction and installation works, and labor productivity during high-rise construction, largely depend on the degree to which a construction project is provided with machines (the level of work mechanization). When calculating the need for machines on construction projects, it is necessary to ensure that work is completed on schedule, that the level of complex mechanization and productivity are increased while manual work is reduced, and that usage and maintenance of the machine fleet are improved. The selection of machines and determination of their numbers should be carried out using the formulas presented in this work.
NASA Astrophysics Data System (ADS)
Wu, Huaying; Wang, Li Zhong; Wang, Yantao; Yuan, Xiaolei
2018-05-01
The blade or surface grinding segment of a hypervelocity grinding wheel may be damaged by an excessively high spindle rotation rate and fly out; as a projectile, it may severely endanger field personnel. A critical thickness model for the protective plate of a high-speed machine is studied in this paper. For ease of analysis, the shapes of the possible impact objects flying from the machine are simplified into sharp-nose, ball-nose and flat-nose models, whose front-end shapes represent point, line and surface contact, respectively. Impact analysis based on the J-C model is performed for low-carbon steel plates of different thicknesses. A critical thickness computational model for the protective plate of a high-speed machine is established according to the damage characteristics of the thin plate, relating plate thickness to the mass, shape, size and impact speed of the impact object. An air cannon is used for impact tests, and the model accuracy is validated. This model can guide selection of the thickness of the single-layer outer protective plate of a high-speed machine.
Chowdhury, M A K; Sharif Ullah, A M M; Anwar, Saqib
2017-09-12
Ti6Al4V alloys are difficult-to-cut materials that have extensive applications in the automotive and aerospace industry. A great deal of effort has been made to develop and improve the machining operations of Ti6Al4V alloys. This paper presents an experimental study that systematically analyzes the effects of the machining conditions (ultrasonic power, feed rate, spindle speed, and tool diameter) on the performance parameters (cutting force, tool wear, overcut error, and cylindricity error) while drilling high precision holes in workpieces made of Ti6Al4V alloys using rotary ultrasonic machining (RUM). Numerical results were obtained by conducting experiments following a design of experiments procedure. The effects of the machining conditions on each performance parameter have been determined by constructing a set of possibility distributions (i.e., trapezoidal fuzzy numbers) from the experimental data. A possibility distribution is a probability-distribution-neutral representation of uncertainty, and is effective in quantifying the uncertainty underlying physical quantities when only a limited number of data points is available, as is the case here. Lastly, the optimal machining conditions have been identified using these possibility distributions.
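A hedged sketch of constructing a trapezoidal fuzzy number from a handful of repeated measurements; taking the sample extremes as the support and the quartiles as the plateau is our illustrative choice, not necessarily the paper's recipe:

import numpy as np

def trapezoid_from_data(samples):
    a, d = float(np.min(samples)), float(np.max(samples))   # support [a, d]
    b, c = np.percentile(samples, [25, 75])                 # assumed plateau [b, c]
    def membership(x):
        if x < a or x > d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)
    return (a, b, c, d), membership

params, mu = trapezoid_from_data([4.1, 4.4, 4.0, 4.6, 4.3])
print(params, mu(4.05))   # membership rises linearly on [4.0, 4.1] -> 0.5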
Machine vision for digital microfluidics
NASA Astrophysics Data System (ADS)
Shin, Yong-Jun; Lee, Jeong-Bong
2010-01-01
Machine vision is widely used in an industrial environment today. It can perform various tasks, such as inspecting and controlling production processes, that may require humanlike intelligence. The importance of imaging technology for biological research or medical diagnosis is greater than ever. For example, fluorescent reporter imaging enables scientists to study the dynamics of gene networks with high spatial and temporal resolution. Such high-throughput imaging is increasingly demanding the use of machine vision for real-time analysis and control. Digital microfluidics is a relatively new technology with expectations of becoming a true lab-on-a-chip platform. Utilizing digital microfluidics, only small amounts of biological samples are required and the experimental procedures can be automatically controlled. There is a strong need for the development of a digital microfluidics system integrated with machine vision for innovative biological research today. In this paper, we show how machine vision can be applied to digital microfluidics by demonstrating two applications: machine vision-based measurement of the kinetics of biomolecular interactions and machine vision-based droplet motion control. It is expected that digital microfluidics-based machine vision system will add intelligence and automation to high-throughput biological imaging in the future.
A High Performance Torque Sensor for Milling Based on a Piezoresistive MEMS Strain Gauge
Qin, Yafei; Zhao, Yulong; Li, Yingxue; Zhao, You; Wang, Peng
2016-01-01
In high speed and high precision machining applications, it is important to monitor the machining process in order to ensure high product quality. For this purpose, it is essential to develop a dynamometer with high sensitivity and high natural frequency suited to these conditions. This paper describes the design, calibration and performance of a milling torque sensor based on a piezoresistive MEMS strain gauge. A detailed design study is carried out to optimize the two mutually contradictory indicators, sensitivity and natural frequency. The developed torque sensor principally consists of a thin-walled cylinder and a piezoresistive MEMS strain gauge bonded on the surface of the sensing element where the shear strain is maximum. The strain gauge includes eight piezoresistances, four of which are connected in a full Wheatstone bridge circuit used to measure the applied torque during machining procedures. Experimental static calibration results show that the sensitivity of the torque sensor has been improved to 0.13 mV/Nm. A modal impact test indicates that the natural frequency of the torque sensor reaches 1216 Hz, which is suitable for high speed machining processes. The dynamic test results indicate that the developed torque sensor is stable and practical for monitoring the milling process. PMID:27070620
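As a back-of-the-envelope check under textbook assumptions (four active arms; the gauge factor of 100 for piezoresistive silicon is an assumed, illustrative value), the full-bridge output scales as Vout = Vexc x GF x strain, which is what makes mV-per-Nm sensitivities plausible:

def full_bridge_output(v_exc, gauge_factor, strain):
    # Four active arms: Vout = Vexc * GF * strain (textbook full-bridge relation).
    return v_exc * gauge_factor * strain

# Example: 5 V excitation, assumed GF of 100, 10 microstrain of shear-induced
# strain -> 5 mV output.
print(full_bridge_output(5.0, 100.0, 10e-6), "V")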
Performance prediction: A case study using a multi-ring KSR-1 machine
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Zhu, Jianping
1995-01-01
While computers with tens of thousands of processors have successfully delivered high performance power for solving some of the so-called 'grand-challenge' applications, the notion of scalability is becoming an important metric in the evaluation of parallel machine architectures and algorithms. In this study, the prediction of scalability and its application are carefully investigated. A simple formula is presented to show the relation between scalability, single processor computing power, and degradation of parallelism. A case study is conducted on a multi-ring KSR-1 shared virtual memory machine. Experimental and theoretical results show that the influence of topology variation of an architecture is predictable. Therefore, the performance of an algorithm on a sophisticated, hierarchical architecture can be predicted and the best algorithm-machine combination can be selected for a given application.
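The abstract does not reproduce the formula itself; one standard formalization in this line of work (an assumption on our part, not necessarily the paper's exact relation) is isospeed scalability, in which degradation of parallelism appears as super-linear growth of the work W' needed to keep the average unit speed constant when moving from p to p' processors:

def isospeed_scalability(p, W, p_prime, W_prime):
    # W' is the problem size needed on p' processors to keep the average
    # unit speed equal to that achieved with problem size W on p processors.
    return (p_prime * W) / (p * W_prime)

# Perfect scaling: doubling processors needs exactly double the work -> 1.0
print(isospeed_scalability(p=8, W=1e6, p_prime=16, W_prime=2e6))
# Degraded parallelism: doubling processors needs 2.5x the work -> 0.8
print(isospeed_scalability(p=8, W=1e6, p_prime=16, W_prime=2.5e6))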
Intelligible machine learning with malibu.
Langlois, Robert E; Lu, Hui
2008-01-01
malibu is an open-source machine learning workbench developed in C/C++ for high-performance real-world applications, namely bioinformatics and medical informatics. It leverages third-party machine learning implementations for more robust, bug-free software. This workbench handles several well-studied supervised machine learning problems including classification, regression, importance-weighted classification and multiple-instance learning. The malibu interface was designed to create reproducible experiments, ideally run in a remote and/or command line environment. The software can be found at: http://proteomics.bioengr.uic.edu/malibu/index.html.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prowell, Stacy J; Symons, Christopher T
2015-01-01
Producing trusted results from high-performance codes is essential for policy and has significant economic impact. We propose combining rigorous analytical methods with machine learning techniques to achieve the goal of repeatable, trustworthy scientific computing.
Rotordynamic Instability Problems in High-Performance Turbomachinery, 1986
NASA Technical Reports Server (NTRS)
1987-01-01
The first rotordynamics workshop proceedings (NASA CP-2133, 1980) emphasized a feeling of uncertainty in predicting the stability characteristics of high-performance turbomachinery. In the second workshop proceedings (NASA CP-2250, 1982) these uncertainties were reduced through programs established to systematically resolve problems, with emphasis on experimental validation of the forces that influence rotordynamics. In the third proceedings (NASA CP-2338, 1984) many programs for predicting or measuring forces and force coefficients in high-performance turbomachinery produced results. Data became available for designing new machines with enhanced stability characteristics or for upgrading existing machines. The present workshop proceedings illustrate a continued trend toward a more unified view of rotordynamic instability problems and several encouraging new analytical developments.
Power electromagnetic strike machine for engineering-geological surveys
NASA Astrophysics Data System (ADS)
Usanov, K. M.; Volgin, A. V.; Chetverikov, E. A.; Kargin, V. A.; Moiseev, A. P.; Ivanova, Z. I.
2017-10-01
In the processes of dynamic sensing of soils and pulsed non-explosive seismic exploration, the strike method is the most common and effective, and it is provided by pneumatic, hydraulic and electrical strike-action machines of various structures and parameters. The creation of compact, portable strike machines that do not require transportation by mechanized means is important. A promising direction in the development of strike machines is the use of a pulsed electromagnetic actuator, characterized by relatively low energy consumption, relatively high specific performance and efficiency, and providing direct conversion of electrical energy into the mechanical work of a strike mass with a linear movement trajectory. The results of these studies made it possible to create, on the basis of linear electromagnetic motors, portable electromagnetic pulse machines for dynamic sensing of soils and shallow-depth land pulsed seismic exploration.
Research Results Of Stress-Strain State Of Cutting Tool When Aviation Materials Turning
NASA Astrophysics Data System (ADS)
Serebrennikova, A. G.; Nikolaeva, E. P.; Savilov, A. V.; Timofeev, S. A.; Pyatykh, A. S.
2018-01-01
Titanium alloys and stainless steels are among the hardest materials to machine. The cutting-edge state of a turning tool after machining titanium and high-strength aluminium alloys and corrosion-resistant high-alloy steel has been studied. Cutting forces and chip contact areas with the rake surface of the cutter have been measured. The relationship between cutting forces and residual stresses is shown. Relations of cutting forces and residual stresses versus the cutting tool rake angle were obtained. Measurements of residual stresses were performed by X-ray diffraction.
Tribological performance of Zinc soft metal coatings in solid lubrication
NASA Astrophysics Data System (ADS)
Regalla, Srinivasa Prakash; Krishnan Anirudh, V.; Reddy Narala, Suresh Kumar
2018-04-01
Solid lubrication by soft coatings is an important technique for superior tribological performance in machine contacts involving high pressures. Coating with soft materials reduces subsurface wear of machine components, ensuring longer life. Several soft metal coatings have been studied, but zinc coatings have received little attention. This paper deals with soft zinc coatings electroplated onto hard surfaces, which are subsequently tested in sliding experiments for tribological performance. The hardness and film thickness values have been determined, the coefficient of friction of the zinc coating has been measured using a pin-on-disc wear testing machine, and the results are presented.
Performance Analysis of Ivshmem for High-Performance Computing in Virtual Machines
NASA Astrophysics Data System (ADS)
Ivanovic, Pavle; Richter, Harald
2018-01-01
High-performance computing (HPC) is rarely accomplished via virtual machines (VMs). In this paper, we present a remake of ivshmem which can change this. Ivshmem was a shared memory (SHM) between virtual machines on the same server, with SHM-access synchronization included, until about 5 years ago when newer versions of Linux and its virtualization library libvirt evolved. We restored that SHM-access synchronization feature because it is indispensable for HPC, and made ivshmem runnable with contemporary versions of Linux, libvirt, KVM, QEMU and especially MPICH, which is an implementation of MPI - the standard HPC communication library. Additionally, MPICH was transparently modified by us to include ivshmem, resulting in a three to ten times performance improvement compared to TCP/IP. Furthermore, we have transparently replaced MPI_PUT, a single-sided MPICH communication mechanism, by our own MPI_PUT wrapper. As a result, our ivshmem even surpasses non-virtualized SHM data transfers for block lengths greater than 512 KBytes, showing the benefits of virtualization. All improvements were possible without using SR-IOV.
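A minimal mpi4py sketch of the one-sided MPI_PUT pattern mentioned above; this is generic MPI RMA over a window, not the authors' modified MPICH/ivshmem path (run with mpiexec -n 2 python put_demo.py):

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n = 4
win = MPI.Win.Allocate(n * MPI.DOUBLE.Get_size(), comm=comm)
buf = np.frombuffer(win.tomemory(), dtype="d")
buf[:] = 0.0

win.Fence()
if rank == 0:
    data = np.arange(n, dtype="d")
    win.Put([data, MPI.DOUBLE], 1)   # one-sided write into rank 1's window
win.Fence()

if rank == 1:
    print("rank 1 window:", buf)     # -> [0. 1. 2. 3.]
win.Free()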
NASA Technical Reports Server (NTRS)
Demerdash, N. A.; Wang, R.; Secunde, R.
1992-01-01
A 3D finite element (FE) approach was developed and implemented for computation of global magnetic fields in a 14.3 kVA modified Lundell alternator. The essence of the new method is the combined use of magnetic vector and scalar potential formulations in 3D FEs. This approach makes it practical, using state of the art supercomputer resources, to globally analyze magnetic fields and operating performances of rotating machines which have truly 3D magnetic flux patterns. The 3D FE-computed fields and machine inductances as well as various machine performance simulations of the 14.3 kVA machine are presented in this paper and its two companion papers.
NASA Astrophysics Data System (ADS)
Li, Hui; Hong, Lu-Yao; Zhou, Qing; Yu, Hai-Jie
2015-08-01
The business failure of numerous companies results in financial crises. The high social costs associated with such crises have driven the search for effective tools for business risk prediction, among which the support vector machine is very effective. Several modelling means, including single-technique modelling, hybrid modelling, and ensemble modelling, have been suggested for forecasting business risk with support vector machines. However, existing literature seldom focuses on a general modelling frame for business risk prediction, and seldom investigates performance differences among different modelling means. We reviewed research on forecasting business risk with support vector machines, proposed the general assisted prediction modelling frame with hybridisation and ensemble (APMF-WHAE), and finally investigated the use of principal components analysis, support vector machines, random sampling, and group decision under the general frame in forecasting business risk. Under the APMF-WHAE frame with the support vector machine as the base predictive model, four specific predictive models were produced, namely, a pure support vector machine, a hybrid support vector machine involving principal components analysis, a support vector machine ensemble involving random sampling and group decision, and an ensemble of hybrid support vector machines using group decision to integrate various hybrid support vector machines built on variables produced from principal components analysis and samples from random sampling. The experimental results indicate that the hybrid support vector machine and the ensemble of hybrid support vector machines were able to produce dominating performance over the pure support vector machine and the support vector machine ensemble.
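A condensed sketch of the hybrid-and-ensemble idea described above: the hybrid is PCA feeding an SVM, and the ensemble bags several such hybrids trained on random samples and combines them by a majority (group) decision; the dataset and parameters are illustrative, not those of the study:

from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=30, random_state=0)
hybrid = make_pipeline(PCA(n_components=10), SVC())     # PCA + SVM hybrid
ensemble = BaggingClassifier(hybrid, n_estimators=25,   # ensemble of hybrids,
                             max_samples=0.8,           # each on a random sample,
                             random_state=0)            # majority-vote decision
print("hybrid SVM:", cross_val_score(hybrid, X, y, cv=5).mean())
print("ensemble:  ", cross_val_score(ensemble, X, y, cv=5).mean())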
The ultrasonic machining of silicon carbide / alumina composites
NASA Astrophysics Data System (ADS)
Nicholson, Garth Martyn John
Silicon carbide fibre reinforced alumina is a ceramic composite which was developed in conjunction with the Rolls-Royce Aerospace Group. The material is intended for use in the latest generation of jet engines, specifically for high temperature applications such as flame holders, combustor barrel segments and turbine blade tip seals. The material in question has properties which have been engineered by optimizing fibre volume fractions, weaves and fibre interface materials to meet the following main requirements: high thermal resistance, high thermal shock resistance and low density. Components intended for manufacture using this material will use the "direct metal oxidation" (DIMOX) method. This process involves manufacturing a near net shape component from the woven fibre matting, and infiltrating the matting with the alumina matrix material. Some of the components outlined require high tolerance features to be included in their design. The combustor barrel segments for example require slots to be formed within them for sealing purposes, the dimensions of these features preclude their formation using DIMOX, and therefore require a secondary process to be performed. Conventional machining techniques such as drilling, turning and milling cannot be used because of the brittle nature of the material. Electrodischarge machining (E.D.M.) cannot be used since the material is an insulator. Electrochemical machining (E.C.M.) cannot be used since the material is chemically inert. One machining method which could be used is ultrasonic machining (U.S.M.). The research programme investigated the feasibility of using ultrasonic machining as a manufacturing method for this new fibre reinforced composite. Two variations of ultrasonic machining were used: ultrasonic drilling and ultrasonic milling. Factors such as dimensional accuracy, surface roughness and delamination effects were examined. Previously performed ultrasonic machining experimental programmes were reviewed, as well as process models which have been developed. The process models were found to contain empirical constants which usually require specific material data for their calculation. Since a limited amount of the composite was available, and ultrasonic machining has many process variables, a Taguchi factorial experiment was conducted in order to ascertain the most relevant factors in machining. A full factorial experiment was then performed using the relevant factors. Techniques used in the research included both optical and scanning electron microscopy, surface roughness analysis, x-ray analysis and finite element stress analysis. A full set of machining data was obtained including relationships between the factors examined and both material removal rates, and surface roughness values. An attempt was made to explain these findings by examining established brittle fracture mechanisms. These established mechanisms did not seem to apply entirely to this material, an alternative method of material removal is therefore proposed. It is hoped that the data obtained from this research programme may contribute to the development of a more realistic mathematical model.
Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly
2013-01-01
High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions due to background and sample variations as well as low signal-to-noise ratio. The lack of automated image analysis tools that can be generalized for varying image acquisition conditions represents one of the main challenges in the field of biomedical image analysis. Specifically, segmentation of the axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare performance of our algorithm to manual segmentation and show that it combines 90% accuracy with similarly high levels of specificity and sensitivity. Moreover, the algorithm maintains high performance levels under a wide range of image acquisition conditions, indicating that it is largely condition-invariable. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. The textural analysis-based machine-learning approach thus offers a high-performance, condition-invariable tool for automated neurite segmentation. PMID:23261652
Measured impacts of high efficiency domestic clothes washers in a community
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomlinson, J.; Rizy, T.
1998-07-01
The US market for domestic clothes washers is currently dominated by conventional vertical-axis washers that typically require approximately 40 gallons of water for each wash load. Although the current market for high efficiency clothes washers that use much less water and energy is quite small, it is growing slowly as manufacturers make machines based on tumble-action, horizontal-axis designs available and as information about the performance and benefits of such machines is developed and made available to consumers. To help build awareness of these benefits and to accelerate markets for high efficiency washers, the Department of Energy (DOE), under its ENERGY STAR® Program and in cooperation with a major manufacturer of high efficiency washers, conducted a field evaluation of high efficiency washers using Bern, Kansas as a test bed. Baseline washing machine performance data as well as consumer washing behavior were obtained from data collected on the existing machines of more than 100 participants in this instrumented study. Following a 2-month initial study period, all conventional machines were replaced by high efficiency, tumble-action washers, and the study continued for 3 months. Based on measured data from over 20,000 loads of laundry, the impacts of the washer replacement on (1) individual customers' energy and water consumption, (2) customers' laundry habits and perceptions, and (3) the community's water supply and waste water systems were determined. The study, its findings, and how information from the experiment was used to improve national awareness of high efficiency clothes washer benefits are described in this paper.
Analysis and design of asymmetrical reluctance machine
NASA Astrophysics Data System (ADS)
Harianto, Cahya A.
Over the past few decades the induction machine has been chosen for many applications due to its structural simplicity and low manufacturing cost. However, modest torque density and control challenges have motivated researchers to find alternative machines. The permanent magnet synchronous machine has been viewed as one of the alternatives because it features higher torque density for a given loss than the induction machine. However, the assembly and permanent magnet material cost, along with safety under fault conditions, have been concerns for this class of machine. An alternative machine type, namely the asymmetrical reluctance machine, is proposed in this work. Since the proposed machine is of the reluctance machine type, it possesses desirable feature, such as near absence of rotor losses, low assembly cost, low no-load rotational losses, modest torque ripple, and rather benign fault conditions. Through theoretical analysis performed herein, it is shown that this machine has a higher torque density for a given loss than typical reluctance machines, although not as high as the permanent magnet machines. Thus, the asymmetrical reluctance machine is a viable and advantageous machine alternative where the use of permanent magnet machines are undesirable.
NASA Astrophysics Data System (ADS)
Yu, Jianbo
2015-12-01
Prognostics is highly effective in achieving zero-downtime performance, maximum productivity and proactive maintenance of machines. Prognostics intends to assess and predict the time evolution of machine health degradation so that machine failures can be predicted and prevented. A novel prognostics system is developed based on the data-model-fusion scheme using the Bayesian inference-based self-organizing map (SOM) and an integration of logistic regression (LR) and high-order particle filtering (HOPF). In this prognostics system, a baseline SOM is constructed to model the data distribution space of a healthy machine under the assumption that predictable fault patterns are not available. Bayesian inference-based probability (BIP) derived from the baseline SOM is developed as a quantitative indication of machine health degradation. BIP is capable of offering a failure probability for the monitored machine, which has an intuitive explanation related to the health degradation state. Based on historic BIPs, the constructed LR and its modeling noise constitute a high-order Markov process (HOMP) to describe machine health propagation. HOPF is used to solve the HOMP estimation to predict the evolution of the machine health in the form of a probability density function (PDF). An on-line model update scheme is developed to adapt the Markov process to machine health dynamics quickly. The experimental results on a bearing test-bed illustrate the potential applications of the proposed system as an effective and simple tool for machine health prognostics.
Development of plasma chemical vaporization machining
NASA Astrophysics Data System (ADS)
Mori, Yuzo; Yamauchi, Kazuto; Yamamura, Kazuya; Sano, Yasuhisa
2000-12-01
Conventional machining processes, such as turning, grinding, or lapping, are still applied to many materials, including functional ones. But those processes are accompanied by the formation of a deformed layer, so that machined surfaces cannot perform their original functions. To avoid this, plasma chemical vaporization machining (CVM) has been developed. Plasma CVM is a chemical machining method using neutral radicals, which are generated by an atmospheric pressure plasma. By using a rotary electrode for generation of the plasma, a high density of neutral radicals was formed, and we succeeded in obtaining high removal rates of several microns to several hundred microns per minute for various functional materials such as fused silica, single crystal silicon, molybdenum, tungsten, silicon carbide, and diamond. In particular, a removal rate equal to that of lapping in the mechanical machining of fused silica and silicon was realized. A surface roughness of 1.4 nm (p-v) was obtained when machining a silicon wafer. The defect density of silicon wafer surfaces polished by various machining methods was evaluated by surface photovoltage spectroscopy. As a result, the defect density of the surface machined by plasma CVM was under 1/100 of that machined by mechanical polishing and argon ion sputtering, and a very low defect density, equivalent to that of a chemically etched surface, was realized. A numerically controlled CVM machine for x-ray mirror fabrication is detailed in the accompanying article in this issue.
NASA Astrophysics Data System (ADS)
Mehmood, Shahid; Shah, Masood; Pasha, Riffat Asim; Sultan, Amir
2017-10-01
The effect of electric discharge machining (EDM) on surface quality, and consequently on the fatigue performance of Al 2024 T6, is investigated. Five levels of discharge current are analyzed, while all other electrical and non-electrical parameters are kept constant. At each discharge current level, dog-bone specimens are machined by generating a peripheral notch at the center. The fatigue tests are performed on a four-point rotating bending machine at room temperature. For comparison purposes, fatigue tests are also performed on conventionally machined specimens. Linearized SN curves for 95% failure probability and four different confidence levels (75, 90, 95 and 99%) are plotted for each discharge current level as well as for conventionally machined specimens. These plots show that the electric discharge machined (EDMed) specimens exhibit inferior fatigue behavior compared to conventionally machined specimens. Moreover, discharge current inversely affects fatigue life, and this influence is highly pronounced at lower stresses. The EDMed surfaces are characterized by surface properties that could be responsible for the change in fatigue life, such as surface morphology, surface roughness, white layer thickness, microhardness and residual stresses. It is found that all these surface properties are affected by changing the discharge current level. However, the change in fatigue life with discharge current could not be associated independently with any single surface property.
Stylianou, Neophytos; Akbarov, Artur; Kontopantelis, Evangelos; Buchan, Iain; Dunn, Ken W
2015-08-01
Predicting mortality from burn injury has traditionally employed logistic regression models. Alternative machine learning methods have been introduced in some areas of clinical prediction as the necessary software and computational facilities have become accessible. Here we compare logistic regression and machine learning predictions of mortality from burn. An established logistic mortality model was compared to machine learning methods (artificial neural network, support vector machine, random forests and naïve Bayes) using a population-based (England & Wales) case-cohort registry. Predictive evaluation used: area under the receiver operating characteristic curve; sensitivity; specificity; positive predictive value and Youden's index. All methods had comparable discriminatory abilities, similar sensitivities, specificities and positive predictive values. Although some machine learning methods performed marginally better than logistic regression the differences were seldom statistically significant and clinically insubstantial. Random forests were marginally better for high positive predictive value and reasonable sensitivity. Neural networks yielded slightly better prediction overall. Logistic regression gives an optimal mix of performance and interpretability. The established logistic regression model of burn mortality performs well against more complex alternatives. Clinical prediction with a small set of strong, stable, independent predictors is unlikely to gain much from machine learning outside specialist research contexts. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.
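For reference, the evaluation metrics listed above can be computed from a confusion matrix; Youden's index is J = sensitivity + specificity - 1 (a small helper sketch, not the study's code):

import numpy as np

def binary_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(((y_true == 1) & (y_pred == 1)).sum())
    tn = int(((y_true == 0) & (y_pred == 0)).sum())
    fp = int(((y_true == 0) & (y_pred == 1)).sum())
    fn = int(((y_true == 1) & (y_pred == 0)).sum())
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {"sensitivity": sens, "specificity": spec,
            "PPV": tp / (tp + fp), "Youden J": sens + spec - 1}

print(binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))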
NASA Astrophysics Data System (ADS)
Kumar, R.; Sulaiman, E.; Soomro, H. A.; Jusoh, L. I.; Bahrim, F. S.; Omar, M. F.
2017-08-01
With recent innovations and the use of high-temperature magnets, the permanent magnet flux switching machine (PMFSM) has become a suitable contender for offshore drilling, though it has been less intended for downhole use because of the high ambient temperature. This paper therefore deals with the design optimization and performance analysis of an external-rotor PMFSM for the downhole application. First, the essential design parameters required for the machine configuration are computed numerically; the design is then optimized through a deterministic technique. Finally, the initial and refined performance of the machine is compared: the output torque is raised from 16.39 Nm to 33.57 Nm, while the cogging torque and PM weight are reduced to 1.77 Nm and 0.79 kg, respectively. It is concluded that the proposed optimized 12-slot/22-pole design with an external rotor is suitable for the downhole application.
Rapid Assemblers for Voxel-Based VLSI Robotics
2014-02-12
relied on coin-cell batteries with high energy density, but low power density. Each of the actuators presented requires relatively high power...The device consists of a low power DC-DC low-to-high voltage converter operated by 4A cell batteries and an assembler, which is a grid of electrodes...design, simulate and fabricate complex 3D machines, as well as to repair, adapt and recycle existing machines, and to perform rigorous design
Christakis, Panos G; Braga-Mele, Rosa M
2012-02-01
To compare the intraoperative performance and postoperative outcomes of 3 phacoemulsification machines that use different modes. Kensington Eye Institute, Toronto, Ontario, Canada. Comparative case series. This chart and video review comprised consecutive eligible patients who had phacoemulsification by the same surgeon using a Whitestar Signature Ellips-FX (transversal), Infiniti-Ozil-IP (torsional), or Stellaris (longitudinal) machine. The review included 98 patients. Baseline characteristics in the groups were similar; the mean nuclear sclerosis grade was 2.0 ± 0.8. There were no significant intraoperative complications. The torsional machine averaged less phacoemulsification needle time (83 ± 33 seconds) than the transversal (99 ± 40 seconds; P=.21) or longitudinal (110 ± 45 seconds; P=.02) machines; the difference was accentuated in cases with high-grade nuclear sclerosis. The torsional machine had less chatter and better followability than the transversal or longitudinal machines (P<.001). The torsional and longitudinal machines had better anterior chamber stability than the transversal machine (P<.001). Postoperatively, the torsional machine yielded less central corneal edema than the transversal (P<.001) and longitudinal (P=.04) machines, corresponding to a smaller increase in mean corneal thickness (torsional 5%, transversal 10%, longitudinal 12%; P=.04). Also, the torsional machine had better 1-day postoperative visual acuities (P<.001). All 3 phacoemulsification machines were effective with no significant intraoperative complications. The torsional machine outperformed the transversal and longitudinal machines, with a lower mean needle time, less chatter, and improved followability. This corresponded to less corneal edema 1 day postoperatively and better visual acuity. Copyright © 2011 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Using human brain activity to guide machine learning.
Fong, Ruth C; Scheirer, Walter J; Cox, David D
2018-03-29
Machine learning is a field of computer science that builds algorithms that learn. In many cases, machine learning algorithms are used to recreate a human ability like adding a caption to a photo, driving a car, or playing a game. While the human brain has long served as a source of inspiration for machine learning, little effort has been made to directly use data collected from working brains as a guide for machine learning algorithms. Here we demonstrate a new paradigm of "neurally-weighted" machine learning, which takes fMRI measurements of human brain activity from subjects viewing images, and infuses these data into the training process of an object recognition learning algorithm to make it more consistent with the human brain. After training, these neurally-weighted classifiers are able to classify images without requiring any additional neural data. We show that our neural-weighting approach can lead to large performance gains when used with traditional machine vision features, as well as to significant improvements with already high-performing convolutional neural network features. The effectiveness of this approach points to a path forward for a new class of hybrid machine learning algorithms which take both inspiration and direct constraints from neuronal data.
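A minimal sketch of how such neural weighting might look in practice, assuming per-image weights derived from fMRI activity are already available; the array names and the use of scikit-learn's sample_weight are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch: bias a classifier toward brain-consistent decisions by
# weighting training samples with (assumed) fMRI-derived activity scores.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))          # image features (e.g., CNN activations)
y = rng.integers(0, 2, size=200)         # object labels
fmri_score = rng.uniform(0.5, 1.5, 200)  # assumed per-image neural weighting

clf = LinearSVC(C=1.0)
clf.fit(X, y, sample_weight=fmri_score)  # neural data only needed at training time
# At test time the classifier runs on images alone, as the abstract describes.
```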
Network Modeling and Energy-Efficiency Optimization for Advanced Machine-to-Machine Sensor Networks
Jung, Sungmo; Kim, Jong Hyun; Kim, Seoksoo
2012-01-01
Wireless machine-to-machine sensor networks with multiple radio interfaces are expected to have several advantages, including high spatial scalability, low event detection latency, and low energy consumption. Here, we propose a network model design method involving network approximation and an optimized multi-tiered clustering algorithm that maximizes node lifespan by minimizing energy consumption in a non-uniformly distributed network. Simulation results show that the cluster scales and network parameters determined with the proposed method facilitate a more efficient performance compared to existing methods. PMID:23202190
Fast Fourier Transform algorithm design and tradeoffs
NASA Technical Reports Server (NTRS)
Kamin, Ray A., III; Adams, George B., III
1988-01-01
The Fast Fourier Transform (FFT) is a mainstay of certain numerical techniques for solving fluid dynamics problems. The Connection Machine CM-2 is the target for an investigation into the design of multidimensional Single Instruction Stream/Multiple Data (SIMD) parallel FFT algorithms for high performance. Critical algorithm design issues are discussed, necessary machine performance measurements are identified and made, and the performance of the developed FFT programs is measured. The Fast Fourier Transform programs are compared to the best current Cray-2 FFT program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ukwatta, T. N.; Wozniak, P. R.; Gehrels, N.
Studies of high-redshift gamma-ray bursts (GRBs) provide important information about the early Universe such as the rates of stellar collapsars and mergers, the metallicity content, constraints on the re-ionization period, and probes of the Hubble expansion. Rapid selection of high-z candidates from GRB samples reported in real time by dedicated space missions such as Swift is the key to identifying the most distant bursts before the optical afterglow becomes too dim to warrant a good spectrum. Here, we introduce ‘machine-z’, a redshift prediction algorithm and a ‘high-z’ classifier for Swift GRBs based on machine learning. Our method relies exclusively on canonical data commonly available within the first few hours after the GRB trigger. Using a sample of 284 bursts with measured redshifts, we trained a randomized ensemble of decision trees (random forest) to perform both regression and classification. Cross-validated performance studies show that the correlation coefficient between machine-z predictions and the true redshift is nearly 0.6. At the same time, our high-z classifier can achieve 80 per cent recall of true high-redshift bursts, while incurring a false positive rate of 20 per cent. With 40 per cent false positive rate the classifier can achieve ~100 per cent recall. As a result, the most reliable selection of high-redshift GRBs is obtained by combining predictions from both the high-z classifier and the machine-z regressor.
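A hedged sketch of the two random-forest components described above — cross-validated regression for machine-z plus a binary high-z classifier — using scikit-learn; the feature set and the high-z cutoff are placeholders, not the authors' exact pipeline:

```python
# Sketch: random-forest redshift regression plus a high-z classifier,
# both evaluated by cross-validation as in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(284, 10))      # prompt-emission features (assumed)
z = rng.uniform(0.1, 9.0, 284)      # measured redshifts (synthetic here)

machine_z = cross_val_predict(RandomForestRegressor(300), X, z, cv=10)
r = np.corrcoef(machine_z, z)[0, 1]  # abstract reports r near 0.6 on real data

high_z = (z > 4.0).astype(int)       # assumed high-z cutoff
p = cross_val_predict(RandomForestClassifier(300), X, high_z,
                      cv=10, method="predict_proba")[:, 1]
flag = p > 0.5                       # threshold trades recall vs. false positives
```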
Vibration Damping Analysis of Lightweight Structures in Machine Tools
Aggogeri, Francesco; Borboni, Alberto; Merlo, Angelo; Pellegrini, Nicola; Ricatto, Raffaele
2017-01-01
The dynamic behaviour of a machine tool (MT) directly influences the machining performance. The adoption of lightweight structures may reduce the effects of undesired vibrations and increase the workpiece quality. This paper aims to present and compare a set of hybrid materials that may be excellent candidates to fabricate the MT moving parts. The selected materials have high dynamic characteristics and capacity to dampen mechanical vibrations. In this way, starting from the kinematic model of a milling machine, this study evaluates a number of prototypes made of Al foam sandwiches (AFS), Al corrugated sandwiches (ACS) and composite materials reinforced by carbon fibres (CFRP). These prototypes represented the Z-axis ram of a commercial milling machine. The static and dynamical properties have been analysed by using both finite element (FE) simulations and experimental tests. The obtained results show that the proposed structures may be a valid alternative to the conventional materials of MT moving parts, increasing machining performance. In particular, the AFS prototype highlighted a damping ratio that is 20 times greater than a conventional ram (e.g., steel). Its application is particularly suitable to minimize unwanted oscillations during high-speed finishing operations. The results also show that the CFRP structure guarantees high stiffness with a weight reduced by 48.5%, suggesting effective applications in roughing operations, saving MT energy consumption. The ACS structure has a good trade-off between stiffness and damping and may represent a further alternative, if correctly evaluated. PMID:28772653
Ozcift, Akin; Gulten, Arif
2011-12-01
Improving the accuracy of machine learning algorithms is vital in designing high performance computer-aided diagnosis (CADx) systems. Research has shown that base classifier performance can be enhanced by ensemble classification strategies. In this study, we construct rotation forest (RF) ensemble classifiers of 30 machine learning algorithms to evaluate their classification performances on Parkinson's, diabetes and heart disease datasets from the literature. In the experiments, first the feature dimension of the three datasets is reduced using the correlation-based feature selection (CFS) algorithm. Second, the classification performances of the 30 machine learning algorithms are calculated for the three datasets. Third, 30 classifier ensembles are constructed based on the RF algorithm to assess the performances of the respective classifiers with the same disease data. All experiments are carried out with a leave-one-out validation strategy and the performances of the 60 algorithms are evaluated using three metrics: classification accuracy (ACC), kappa error (KE) and area under the receiver operating characteristic (ROC) curve (AUC). The base classifiers achieved average accuracies of 72.15%, 77.52% and 84.43% for the diabetes, heart and Parkinson's datasets, respectively. The RF classifier ensembles produced average accuracies of 74.47%, 80.49% and 87.13% for the respective diseases. RF, a recently proposed classifier ensemble algorithm, may be used to improve the accuracy of miscellaneous machine learning algorithms in designing advanced CADx systems. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
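A sketch of the evaluation protocol described above — leave-one-out validation scored with ACC, kappa and AUC. Rotation forest is not available in scikit-learn, so a bagged decision-tree ensemble stands in here purely to illustrate the metric pipeline; data are synthetic:

```python
# Sketch: leave-one-out evaluation with the abstract's three metrics.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import accuracy_score, cohen_kappa_score, roc_auc_score

X, y = make_classification(n_samples=150, n_features=10, random_state=0)
clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=30)

pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())
proba = cross_val_predict(clf, X, y, cv=LeaveOneOut(),
                          method="predict_proba")[:, 1]
print(accuracy_score(y, pred),      # ACC
      cohen_kappa_score(y, pred),   # KE (kappa)
      roc_auc_score(y, proba))      # AUC
```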
Lawrence Livermore National Laboratory ULTRA-350 Test Bed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopkins, D J; Wulff, T A; Carlisle, K
2001-04-10
LLNL has many in-house designed high precision machine tools. Some of these tools include the Large Optics Diamond Turning Machine (LODTM) [1], Diamond Turning Machine No. 3 (DTM-3) and two Precision Engineering Research Lathes (PERL-I and PERL-II). These machines have accuracy in the sub-micron range and in most cases position resolution in the range of a couple of nanometers. All of these machines are built with similar underlying technologies. The machines use capstan drive technology, laser interferometer position feedback, tachometer velocity feedback, permanent magnet (PM) brush motors and analog velocity and position loop servo compensation [2]. The machine controller does not perform any servo compensation; it simply computes the difference between the commanded position and the actual position (the following error) and sends this to a D/A for the analog servo position loop. LLNL is designing a new high precision diamond turning machine, called the ULTRA 350 [3]. In contrast to many of the proven technologies discussed above, the plan for the new machine is to use brushless linear motors, high precision linear scales, machine controller motor commutation and digital servo compensation for the velocity and position loops. Although none of these technologies are new and all have been in use in industry, their application to high precision diamond turning is limited. To minimize the risks of these technologies in the new machine design, LLNL has established a test bed to evaluate them for application in high precision diamond turning. The test bed is primarily composed of commercially available components, including the slide with opposed hydrostatic bearings, the oil system, the brushless PM linear motor, the two-phase input three-phase output linear motor amplifier and the system controller. The linear scales are not yet commercially available but use a common electronic output format. As of this writing, the final verdict on the use of these technologies is still out, but the first part of the work has been completed with promising results. The goal of this part of the work was to close a servo position loop around a slide incorporating these technologies and to measure the performance. This paper discusses the tests that were set up for system evaluation and the results of the measurements made. Some very promising results include slide positioning at the nanometer level and slow speed slide direction reversal at less than 100 nm/min with no observed discontinuities. This is very important for machine contouring in diamond turning. As a point of reference, at 100 nm/min it would take the slide almost 7 years to complete the full designed travel of 350 mm. This speed has been demonstrated without the use of a velocity sensor; the velocity is derived from the position sensor. With what has been learned on the test bed, the paper finishes with a brief comparison of the old and new technologies, with emphasis on servo performance as illustrated with Bode plot diagrams.
NASA Astrophysics Data System (ADS)
Chang, En-Chih
2018-02-01
This paper presents a high-performance AC power source applying robust stability control technology for precision material machining (PMM). The proposed technology combines the benefits of a finite-time convergent sliding function (FTCSF) and the firefly optimization algorithm (FOA). The FTCSF maintains the robustness of conventional sliding mode control while speeding up the convergence of the system state. Unfortunately, when a highly nonlinear load is applied, chatter occurs. The chatter results in high total harmonic distortion (THD) in the output voltage of the AC power source, and even degrades the stability of the PMM. The FOA is therefore used to remove the chatter, while the FTCSF still preserves finite system-state convergence time. By combining the FTCSF with the FOA, the AC power source of the PMM yields good steady-state and transient performance. Experimental results are presented in support of the proposed technology.
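For readers unfamiliar with the firefly optimization algorithm invoked above, a minimal generic sketch follows (the standard Yang-style update; the toy objective and constants are illustrative assumptions, not the paper's controller-tuning setup):

```python
# Minimal firefly algorithm sketch: each firefly moves toward brighter
# (lower-cost) ones with attractiveness decaying in distance.
import numpy as np

def firefly(obj, dim=2, n=20, iters=100, beta0=1.0, gamma=1.0, alpha=0.2):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n, dim))
    light = np.array([obj(v) for v in x])        # lower is better
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:          # firefly j is brighter
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(size=dim)
                    light[i] = obj(x[i])
    return x[np.argmin(light)]

best = firefly(lambda v: np.sum(v ** 2))         # toy objective for illustration
```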
Variability in the skin exposure of machine operators exposed to cutting fluids.
Wassenius, O; Järvholm, B; Engström, T; Lillienberg, L; Meding, B
1998-04-01
This study describes a new technique for measuring skin exposure to cutting fluids and evaluates the variability of skin exposure among machine operators performing cyclic (repetitive) work. The technique is based on video recording and subsequent analysis of the video tape by means of computer-synchronized video equipment. The time intervals at which the machine operator's hand was exposed to fluid were registered, and the total wet time of the skin was calculated by assuming different evaporation times for the fluid. The exposure of 12 operators with different work methods was analyzed in 6 different workshops, which included a range of machine types, from highly automated metal cutting machines (ie, actual cutting and chip removal machines) requiring operator supervision to conventional metal cutting machines, where the operator was required to maneuver the machine and manually exchange products. The relative wet time varied between 0% and 100%. A significant association between short cycle time and high relative wet time was noted. However, there was no relationship between the degree of automatization of the metal cutting machines and wet time. The study shows that skin exposure to cutting fluids can vary considerably between machine operators involved in manufacturing processes using different types of metal cutting machines. The machine type was not associated with dermal wetness. The technique appears to give objective information about dermal wetness.
An incremental anomaly detection model for virtual machines
Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu
2017-01-01
The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, training a detection model takes a long time. Moreover, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing character, which leaves the algorithm with low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and on a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms. PMID:29117245
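A compact sketch of the core SOM update using a weighted Euclidean distance to find the best-matching unit, as the model above describes; the heuristic initialization and neighborhood search are reduced to placeholders, and the feature weights are an assumed input:

```python
# Sketch: SOM training with a Weighted Euclidean Distance (WED) for the
# best-matching unit (BMU); w is an assumed per-feature weight vector.
import numpy as np

def train_som(X, w, grid=(10, 10), iters=1000, lr0=0.5, sigma0=3.0):
    rng = np.random.default_rng(0)
    nodes = rng.uniform(X.min(), X.max(), (grid[0] * grid[1], X.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for t in range(iters):
        x = X[rng.integers(len(X))]
        d = np.sqrt(((nodes - x) ** 2 * w).sum(axis=1))   # WED to every node
        bmu = np.argmin(d)
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        g = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma**2))
        nodes += lr * g[:, None] * (x - nodes)            # pull toward sample
    return nodes
# Anomalies are then flagged when a sample's WED to its BMU exceeds a threshold.
```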
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir
2017-01-01
As a solution with a high performance-cost ratio for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to correct the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation must jointly consider these two factors. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the operating temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080
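A minimal KELM regressor under an RBF kernel, following the standard kernel-ELM closed form; the CSA/Nelder-Mead parameter search described above is reduced to fixed values here, and the compensation inputs named in the comment are assumptions:

```python
# Sketch: kernel extreme learning machine (KELM) regression in closed form,
# beta = (K + I/C)^(-1) y. C and gamma would normally be tuned (the paper
# uses coupled simulated annealing plus Nelder-Mead); fixed here for brevity.
import numpy as np

def rbf(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, y, C=100.0, gamma=0.5):
    K = rbf(X, X, gamma)
    beta = np.linalg.solve(K + np.eye(len(X)) / C, y)
    return lambda Xq: rbf(Xq, X, gamma) @ beta

# e.g. compensate a sensor reading for temperature and static pressure:
# model = kelm_fit(np.c_[raw, temp, p_static], true_dp); dp = model(query)
```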
Achieving high performance on the Intel Paragon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenberg, D.S.; Maccabe, B.; Riesen, R.
1993-11-01
When presented with a new supercomputer most users will first ask "How much faster will my applications run?" and then add a fearful "How much effort will it take me to convert to the new machine?" This paper describes some lessons learned at Sandia while asking these questions about the new 1800+ node Intel Paragon. The authors conclude that the operating system is crucial both to achieving high performance and to allowing easy conversion from previous parallel implementations to a new machine. Using the Sandia/UNM Operating System (SUNMOS) they were able to port an LU factorization of dense matrices from the nCUBE2 to the Paragon and achieve 92% scaled speed-up on 1024 nodes. Thus a 44,000 by 44,000 matrix, which had required over 10 hours on the previous machine, completed in less than half an hour at a rate of over 40 GFLOPS. Two keys to achieving such high performance were the small size of SUNMOS (less than 256 kbytes) and the ability to send large messages with very low overhead.
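The quoted runtime is consistent with the standard dense-LU operation count, as a quick check shows (using the textbook (2/3)n³ flop estimate; the exact implementation may count slightly differently):

```latex
% Consistency check of the reported half-hour runtime:
\[
\tfrac{2}{3}n^3 \Big|_{n=44{,}000} \approx 5.7\times10^{13}\ \text{flops},
\qquad
\frac{5.7\times10^{13}\ \text{flops}}{40\times10^{9}\ \text{flop/s}}
\approx 1.4\times10^{3}\ \text{s} \approx 24\ \text{min},
\]
% i.e. comfortably under half an hour, matching the abstract.
```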
Wear behavior of carbide tool coated with Yttria-stabilized zirconia nano particles.
NASA Astrophysics Data System (ADS)
Jadhav, Pavandatta M.; Reddy, Narala Suresh Kumar
2018-04-01
Wear mechanisms play a predominant role in reducing tool life during the machining of titanium alloys. Factors such as chip variation, high pressure loads and spring-back are responsible for tool wear. In addition, many tool materials are unsuitable for this machining because their low thermal conductivity and volumetric specific heat result in high cutting temperatures. To confront this issue, the electrostatic spray coating (ESC) technique is utilized to enhance tool life to an acceptable level. Yttria Stabilized Zirconia (YSZ) acts as a thermal barrier coating with a high thermal expansion coefficient and thermal shock resistance. This investigation focuses on the influence of a YSZ nanocoating on tungsten carbide tool material to improve the machinability of Ti-6Al-4V alloy. YSZ nano powder was coated onto a tungsten carbide pin using the ESC technique. The coatings were tested for wear and friction behavior using a pin-on-disc tribological tester. The dry sliding wear test was performed on a titanium alloy (Ti-6Al-4V) disc and a YSZ-coated tungsten carbide pin at ambient atmosphere. Performance parameters such as wear rate and temperature rise were considered in the dry sliding test on the Ti-6Al-4V alloy disc. The performance parameters were calculated using the coefficient of friction and frictional force values obtained from the pin-on-disc test. Substantial resistance to wear was achieved by the coating.
Machine learning and data science in soft materials engineering
NASA Astrophysics Data System (ADS)
Ferguson, Andrew L.
2018-01-01
In many branches of materials science it is now routine to generate data sets of such large size and dimensionality that conventional methods of analysis fail. Paradigms and tools from data science and machine learning can provide scalable approaches to identify and extract trends and patterns within voluminous data sets, perform guided traversals of high-dimensional phase spaces, and furnish data-driven strategies for inverse materials design. This topical review provides an accessible introduction to machine learning tools in the context of soft and biological materials by ‘de-jargonizing’ data science terminology, presenting a taxonomy of machine learning techniques, and surveying the mathematical underpinnings and software implementations of popular tools, including principal component analysis, independent component analysis, diffusion maps, support vector machines, and relative entropy. We present illustrative examples of machine learning applications in soft matter, including inverse design of self-assembling materials, nonlinear learning of protein folding landscapes, high-throughput antimicrobial peptide design, and data-driven materials design engines. We close with an outlook on the challenges and opportunities for the field.
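As a concrete instance of the first tool in the survey's list, a few lines of PCA applied to a toy trajectory matrix; the data are synthetic and purely illustrative, and any of the surveyed implementations would behave the same:

```python
# Sketch: principal component analysis of a toy soft-matter data set,
# e.g. frames x descriptors from a simulation trajectory (synthetic here).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
frames = rng.normal(size=(1000, 30))     # synthetic descriptors per frame
pca = PCA(n_components=2)
proj = pca.fit_transform(frames)         # low-dimensional order parameters
print(pca.explained_variance_ratio_)     # variance captured by the two PCs
```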
Gao, Hang; Wang, Xu; Guo, Dongming; Liu, Ziyuan
2018-01-01
Laser induced damage threshold (LIDT) is an important optical indicator for nonlinear Potassium Dihydrogen Phosphate (KDP) crystal used in high power laser systems. In this study, KDP optical crystals are initially machined with single point diamond turning (SPDT), followed by water dissolution ultra-precision polishing (WDUP) and then tested with 355 nm nanosecond pulsed-lasers. Power spectral density (PSD) analysis shows that WDUP process eliminates the laser-detrimental spatial frequencies band of micro-waviness on SPDT machined surface and consequently decreases its modulation effect on the laser beams. The laser test results show that LIDT of WDUP machined crystal improves and its stability has a significant increase by 72.1% compared with that of SPDT. Moreover, a subsequent ultrasonic assisted solvent cleaning process is suggested to have a positive effect on the laser performance of machined KDP crystal. Damage crater investigation indicates that the damage morphologies exhibit highly thermal explosion features of melted cores and brittle fractures of periphery material, which can be described with the classic thermal explosion model. The comparison result demonstrates that damage mechanisms for SPDT and WDUP machined crystal are the same and WDUP process reveals the real bulk laser resistance of KDP optical crystal by removing the micro-waviness and subsurface damage on SPDT machined surface. This improvement of WDUP method makes the LIDT more accurate and will be beneficial to the laser performance of KDP crystal. PMID:29534032
Jakobsen, Markus Due; Sundstrup, Emil; Andersen, Christoffer H; Bandholm, Thomas; Thorborg, Kristian; Zebis, Mette K; Andersen, Lars L
2012-12-01
While elastic resistance training targeting the upper body is effective for strength training, the effect of elastic resistance training on lower body muscle activity remains questionable. The purpose of this study was to evaluate the EMG-angle relationship of the quadriceps muscle during 10-RM knee-extensions performed with elastic tubing and an isotonic strength training machine. 7 women and 9 men aged 28-67 years (mean age 44 and 41 years, respectively) participated. Electromyographic (EMG) activity was recorded in 10 muscles during the concentric and eccentric contraction phases of a knee extension exercise performed with elastic tubing and in a training machine, and normalized to maximal voluntary isometric contraction (MVC) EMG (nEMG). Knee joint angle was measured during the exercises using electronic inclinometers (range of motion 0-90°). When comparing the machine and elastic resistance exercises there were no significant differences in peak EMG of the rectus femoris (RF), vastus lateralis (VL), or vastus medialis (VM) during the concentric contraction phase. However, during the eccentric phase, peak EMG was significantly higher (p<0.01) in RF and VM when performing knee extensions using the training machine. In VL and VM the EMG-angle pattern was different between the two training modalities (significant angle by exercise interaction). When using elastic resistance, the EMG-angle pattern peaked towards full knee extension (0°), whereas the angle at peak EMG occurred closer to the knee flexion position (90°) during the machine exercise. Perceived loading (Borg CR10) was similar during knee extensions performed with elastic tubing (5.7±0.6) and knee extensions performed in the training machine (5.9±0.5). Knee extensions performed with elastic tubing induce similarly high (>70% nEMG) quadriceps muscle activity during the concentric contraction phase, but slightly lower activity during the eccentric contraction phase, compared with knee extensions performed on an isotonic training machine. During the concentric contraction phase the two conditions displayed reciprocal EMG-angle patterns over the range of motion.
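The normalization used above is simple to state: each EMG amplitude is expressed as a percentage of the MVC-trial amplitude. A toy sketch with made-up numbers:

```python
# Sketch: normalizing EMG to maximal voluntary isometric contraction (MVC).
import numpy as np

emg_exercise = np.array([0.42, 0.55, 0.61])  # mV, peak per repetition (made up)
emg_mvc = 0.60                               # mV, from the MVC trial (made up)
nEMG = 100 * emg_exercise / emg_mvc          # % of MVC
print(nEMG)  # values above ~70 %nEMG count as "high" activity in the abstract
```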
Scale effects and a method for similarity evaluation in micro electrical discharge machining
NASA Astrophysics Data System (ADS)
Liu, Qingyu; Zhang, Qinhe; Wang, Kan; Zhu, Guang; Fu, Xiuzhuo; Zhang, Jianhua
2016-08-01
Electrical discharge machining (EDM) is a promising non-traditional micro machining technology that offers a vast array of applications in the manufacturing industry. However, scale effects occur when machining at the micro-scale, which can make it difficult to predict and optimize the machining performance of micro EDM. A new concept of "scale effects" in micro EDM is proposed; the scale effects reveal the difference in machining performance between micro EDM and conventional macro EDM. Similarity theory is presented to evaluate the scale effects in micro EDM. Single factor experiments are conducted and the experimental results are analyzed by discussing the similarity difference and similarity precision. The results show that the output results of scale effects in micro EDM do not change linearly with the discharge parameters. The values of similarity precision of machining time significantly increase when scaling down the capacitance or open-circuit voltage. This indicates that the lower the scale of the discharge parameter, the greater the deviation of the non-geometrical similarity degree from the geometrical similarity degree, which means that a micro EDM system with lower discharge energy experiences more scale effects. The largest similarity difference is 5.34 while the largest similarity precision can be as high as 114.03. It is suggested that similarity precision is more effective than similarity difference in reflecting the scale effects and their fluctuation. Consequently, similarity theory is suitable for evaluating the scale effects in micro EDM. This research offers engineering value for optimizing the machining parameters and improving the machining performance of micro EDM.
Development of a Crush and Mix Machine for Composite Brick Fabrication
NASA Astrophysics Data System (ADS)
Sothea, Kruy; Fazli, Nik; Hamdi, M.; Aoyama, Hideki
2011-01-01
Environmental protection is an increasing public concern. Municipal solid waste (MSW) harms both the environment and human health, and the amount of MSW is growing with economic development and population density, especially in developing countries, while only a small percentage is recycled. To address this problem, a composite brick forming machine was designed and developed to make bricks from a combination of MSW and mortar. The machine consists of two independent parts: a crusher and mixer part, and a molding part. This paper explores the design of the crusher and mixer part. The crusher can cut MSW such as wood, paper and plastic into small pieces. There are two mixers: one is used for making mortar and the other for making slurry. FEA analyses were carried out to verify the strength of the critical parts of the crusher, ensuring that it can run properly with high efficiency. Experiments show that the crusher cuts MSW effectively, and the mixers also work at high efficiency. The results of composite brick testing show that the machine performs well. This crush and mix machine is an innovation that is portable and economical, using MSW as a replacement for sand.
Sundstrup, Emil; Jakobsen, Markus D; Andersen, Christoffer H; Jay, Kenneth; Andersen, Lars L
2012-08-01
Swiss ball training is recommended as a low intensity modality to improve joint position, posture, balance, and neural feedback. However, proper training intensity is difficult to obtain during Swiss ball exercises, whereas strengthening exercises on machines are usually performed to induce a high level of muscle activation. To compare muscle activation, as measured by electromyography (EMG), of global core and thigh muscles during abdominal crunches performed on a Swiss ball with elastic resistance or on an isotonic training machine when normalized for training intensity. 42 untrained individuals (18 men and 24 women) aged 28-67 years participated in the study. EMG activity was measured in 13 muscles during 3 repetitions with a 10 RM load during both abdominal crunches on a training ball with elastic resistance and the same movement on a training machine (seated crunch, Technogym, Cesena, Italy). The order of performance of the exercises was randomized, and EMG amplitude was normalized to maximum voluntary isometric contraction (MVIC) EMG. When comparing between muscles, normalized EMG was highest in the rectus abdominis (P<0.01) and the external obliques (P<0.01). However, crunches on the Swiss ball with elastic resistance showed higher activity of the rectus abdominis than crunches performed on the machine (104±3.8 vs 84±3.8% nEMG, respectively, P<0.0001). By contrast, crunches performed on the Swiss ball induced lower activity of the rectus femoris than crunches on the training machine (27±3.7 vs 65±3.8% nEMG, respectively, P<0.0001). Further, gender, age and musculoskeletal pain did not significantly influence the findings. Crunches on a Swiss ball with added elastic resistance induce high rectus abdominis activity accompanied by low hip flexor activity, which could be beneficial for individuals with low back pain. Conversely, the lower rectus abdominis activity and higher rectus femoris activity observed on the machine warrant caution for individuals with lumbar pain. Importantly, both men and women, younger and older, and individuals with and without pain benefitted equally from the exercises.
Air Bearings Machined On Ultra Precision, Hydrostatic CNC-Lathe
NASA Astrophysics Data System (ADS)
Knol, Pierre H.; Szepesi, Denis; Deurwaarder, Jan M.
1987-01-01
Micromachining of precision elements requires an adequate machine concept to meet the high demands of surface finish and dimensional and shape accuracy. The Hembrug ultra precision lathes have been designed exclusively around hydrostatic principles for the main spindle and guideways. This concept is explained together with some major advantages of hydrostatics compared with aerostatics in universal micromachining applications. Hembrug originally developed the conventional Mikroturn ultra precision facing lathes for diamond turning of computer memory discs. This first generation of machines was followed by advanced computer numerically controlled types for machining of complex precision workpieces. One of these parts, an aerostatic bearing component, has been successfully machined on the Super-Mikroturn CNC. A case study of air bearing machining confirms the statement that a good micromachining result does not depend on machine performance alone, but also on the technology applied.
Technology of high-speed combined machining with brush electrode
NASA Astrophysics Data System (ADS)
Kirillov, O. N.; Smolentsev, V. P.; Yukhnevich, S. S.
2018-03-01
A new method is proposed for high-precision dimensional machining with a brush electrode in which the true position of the bundles of metal wire is adjusted by creating controlled centrifugal forces through an increased frequency of rotation of the tool. There are limiting values of circumferential velocity at which the bundles are pressed against the machined area of a workpiece in a stable manner regardless of the profile of the machined surface and the variable stock of the workpiece. Special aspects of the design of processing procedures for finishing standard parts, including components of products with low rigidity, are disclosed. The methodology for calculating and selecting processing modes that allow one to produce high-precision parts and to obtain the surface roughness required in finishing operations (including the preparation of a surface for metal deposition) is presented. Production experience with high-speed combined machining with an unshaped tool electrode in knowledge-intensive branches of the machine-building industry is analyzed for different types of production. It is shown that the implementation of high-speed dimensional machining with an unshaped brush electrode expands the field of use of the considered process owing to the application of a multipurpose tool in the form of a metal brush, yields stable finishing results, and provides the opportunity for long-term operation of the equipment without changeover and readjustment.
Diamond turning machine controller implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrard, K.P.; Taylor, L.W.; Knight, B.F.
The standard controller for a Pneumo ASG 2500 Diamond Turning Machine, an Allen Bradley 8200, has been replaced with a custom high-performance design. This controller consists of four major components. Axis position feedback information is provided by a Zygo Axiom 2/20 laser interferometer with 0.1 micro-inch resolution. Hardware interface logic couples the computer's digital and analog I/O channels to the diamond turning machine's analog motor controllers, the laser interferometer, and other machine status and control information. It also provides front panel switches for operator override of the computer controller and implements the emergency stop sequence. The remaining two components, the control computer hardware and software, are discussed in detail below.
NASA Astrophysics Data System (ADS)
Husin, Zhafir Aizat; Sulaiman, Erwan; Khan, Faisal; Mazlan, Mohamed Mubin Aizat; Othman, Syed Muhammad Naufal Syed
2015-05-01
This paper presents a new structure of 12-slot/14-pole field excitation flux switching motor (FEFSM) as an alternative non-permanent-magnet (PM) machine candidate for HEV drives. A design study, performance analysis and optimization of the field excitation flux switching machine with no rare-earth magnet for hybrid electric vehicle drive applications are presented. The stator of the proposed machine consists of an iron core made of electromagnetic steel, armature coils and field excitation coils as the only field mmf source. The rotor consists only of stacked iron and is hence robust and appropriate for high speed operation. The design target is a machine with maximum torque, power and power density of more than 210 Nm, 123 kW and 3.5 kW/kg, respectively, which competes with the interior permanent magnet synchronous machine used in existing hybrid electric vehicles. Design feasibility studies of the FEFSM based on 2D-FEA and a deterministic optimization method are applied to design the proposed machine.
R&D of high reliable refrigeration system for superconducting generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosoya, T.; Shindo, S.; Yaguchi, H.
1996-12-31
Super-GM carries out R&D on 70 MW class superconducting generators (model machines), refrigeration systems and superconducting wires to apply superconducting technology to electric power apparatuses. The helium refrigeration system for keeping the field windings of a superconducting generator (SCG) in a cryogenic environment must meet the requirement of high reliability for uninterrupted long term operation of the SCG. In FY 1992, a highly reliable conventional refrigeration system for the model machines was integrated by combining components such as a compressor unit, higher temperature cold box and lower temperature cold box, which were manufactured utilizing various fundamental technologies developed in the early stage of the project since 1988. Since FY 1993, its performance tests have been carried out. It has been confirmed that its performance fulfilled the development targets of a liquefaction capacity of 100 L/h and removal of impurities in the helium gas to < 0.1 ppm. Furthermore, its operation method and performance were clarified for all the different modes, such as how to control the liquefaction rate and how to supply liquid helium from a dewar to the model machine. In addition, the authors have carried out performance tests and system performance analysis of oil-free screw type and turbo type compressors, which greatly improve the reliability of conventional refrigeration systems. The operating performance and operational control method of the compressors have been clarified through the tests and analysis.
Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)
NASA Technical Reports Server (NTRS)
Dalton, Shelly D.; Daley, Philip C.
1988-01-01
As the hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether or not, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.
Fernandez, Michael; Boyd, Peter G; Daff, Thomas D; Aghaji, Mohammad Zein; Woo, Tom K
2014-09-04
In this work, we have developed quantitative structure-property relationship (QSPR) models using advanced machine learning algorithms that can rapidly and accurately recognize high-performing metal organic framework (MOF) materials for CO2 capture. More specifically, QSPR classifiers have been developed that can, in a fraction of a second, identify candidate MOFs with enhanced CO2 adsorption capacity (>1 mmol/g at 0.15 bar and >4 mmol/g at 1 bar). The models were tested on a large set of 292 050 MOFs that were not part of the training set. The QSPR classifier could recover 945 of the top 1000 MOFs in the test set while flagging only 10% of the whole library for compute intensive screening. Thus, using the machine learning classifiers as part of a high-throughput screening protocol would result in an order of magnitude reduction in compute time and allow intractably large structure libraries and search spaces to be screened.
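A sketch of the triage idea described above: train a fast classifier on cheap descriptors, then send only the top-scoring fraction of the library to expensive simulation. The descriptors, labels and model below are placeholders, not the authors' QSPR features:

```python
# Sketch: classifier-based pre-screening of a synthetic MOF library.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
desc = rng.normal(size=(5000, 12))       # cheap geometric/chemical descriptors
uptake = rng.uniform(0, 6, 5000)         # mmol/g at 1 bar (synthetic)
label = (uptake > 4.0).astype(int)       # "high-performing" per the abstract

clf = RandomForestClassifier(200).fit(desc[:3000], label[:3000])
scores = clf.predict_proba(desc[3000:])[:, 1]
flagged = np.argsort(scores)[-len(scores) // 10:]  # top 10% go to full screening
```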
The influence of maintenance quality of hemodialysis machines on hemodialysis efficiency.
Azar, Ahmad Taher
2009-01-01
Several studies suggest that there is a correlation between the dose of dialysis and machine maintenance. However, in spite of current practice, there are conflicting reports regarding the relationship between the dose of dialysis or patient outcome and machine maintenance. In order to evaluate the impact of hemodialysis machine maintenance on dialysis adequacy (Kt/V) and session performance, data were processed on 134 patients on 3-times-per-week dialysis regimens by dividing the patients into four groups and also dividing the hemodialysis machines into four groups according to their year of installation. The equilibrated dialysis dose eqKt/V, the urea reduction ratio (URR) and the overall equipment effectiveness (OEE) were calculated in each group to show the effect of hemodialysis machine efficiency on overall session performance. The average working time per machine per month was 270 hours. The cumulative number of hours according to the year of installation was 26,122 hours for machines installed in 1998; 21,596 hours for machines installed in 1999; 8362 hours for those installed in 2003; and 2486 hours for those installed in 2005. The mean time between failures (MTBF) was 1.8, 2.1, 4.2 and 6 months for machines installed in 1999, 1998, 2003 and 2005, respectively. Statistical analysis demonstrated that the dialysis dose eqKt/V and URR increased as the overall equipment effectiveness (OEE) increased with regular maintenance procedures. Maintenance has become one of the most expedient approaches to guarantee high machine dependability. The efficiency of the dialysis machine is relevant in assuring proper dialysis adequacy.
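For orientation, a worked sketch of the standard formulas behind the indices named above, under assumed definitions: URR from pre/post urea, single-pool Kt/V via the Daugirdas second-generation equation, and OEE as availability × performance × quality. All numbers are made up:

```python
# Sketch: URR, single-pool Kt/V (Daugirdas) and OEE with made-up inputs.
import math

pre, post = 160.0, 55.0      # mg/dL urea before/after session (made up)
t, uf, w = 4.0, 2.5, 70.0    # session hours, litres removed, post-weight kg

urr = 100 * (pre - post) / pre                        # urea reduction ratio, %
R = post / pre
spKtV = -math.log(R - 0.008 * t) + (4 - 3.5 * R) * uf / w

oee = 0.90 * 0.95 * 0.98     # availability * performance * quality
print(round(urr, 1), round(spKtV, 2), round(oee, 2))  # e.g. 65.6 1.27 0.84
```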
Tribology in secondary wood machining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ko, P.L.; Hawthorne, H.M.; Andiappan, J.
Secondary wood manufacturing covers a wide range of products, from furniture, cabinets, doors and windows to musical instruments. Many of these are now mass produced on sophisticated, high speed numerically controlled machines. The performance and reliability of the tools are key to an efficient and economical manufacturing process as well as to the quality of the finished products. A program concerned with three aspects of the tribology of wood machining, namely tool wear, tool-wood friction characteristics and wood surface quality characterization, was set up in the Integrated Manufacturing Technologies Institute (IMTI) of the National Research Council of Canada. The studies include friction and wear mechanism identification and modeling, wear performance of surface-engineered tool materials, friction-induced vibration and cutting efficiency, and the influence of wear and friction on finished products. This research program underlines the importance of tribology in secondary wood manufacturing and at the same time adds new challenges to tribology research, since wood is a complex, heterogeneous material whose behavior during machining is highly sensitive to the surrounding environment and to the moisture content of the workpiece.
Experimental Study in Taguchi Method on Surface Quality Predication of HSM
NASA Astrophysics Data System (ADS)
Ji, Yan; Li, Yueen
2018-05-01
Based on a study of the ball milling mechanism and the mechanism of machined surface formation, the formation of a high speed ball-end milled surface is a time-varying and cumulative thermo-mechanical coupling process. The nature of the problem is that uneven stress and temperature fields affect the machined surface, with the processing parameters interacting during machining to produce elastic recovery and plastic deformation in the elastic-plastic material. The machined surface quality is therefore characterized by a multivariable nonlinear system. Experiments remain an indispensable and effective method for studying the surface quality of high speed ball milling.
The paradigm compiler: Mapping a functional language for the connection machine
NASA Technical Reports Server (NTRS)
Dennis, Jack B.
1989-01-01
The Paradigm Compiler implements a new approach to compiling programs written in high level languages for execution on highly parallel computers. The general approach is to identify the principal data structures constructed by the program and to map these structures onto the processing elements of the target machine. The mapping is chosen to maximize performance as determined through compile time global analysis of the source program. The source language is Sisal, a functional language designed for scientific computations, and the target language is Paris, the published low level interface to the Connection Machine. The data structures considered are multidimensional arrays whose dimensions are known at compile time. Computations that build such arrays usually offer opportunities for highly parallel execution; they are data parallel. The Connection Machine is an attractive target for these computations, and the parallel for construct of the Sisal language is a convenient high level notation for data parallel algorithms. The principles and organization of the Paradigm Compiler are discussed.
Navarro, Pedro J.; Fernández, Carlos; Borraz, Raúl; Alonso, Diego
2016-01-01
This article describes an automated sensor-based system to detect pedestrians in an autonomous vehicle application. Although the vehicle is equipped with a broad set of sensors, the article focuses on the processing of the information generated by a Velodyne HDL-64E LIDAR sensor. The cloud of points generated by the sensor (more than 1 million points per revolution) is processed to detect pedestrians by selecting cubic shapes and applying machine vision and machine learning algorithms to the XY, XZ, and YZ projections of the points contained in the cube. The work presents an exhaustive analysis of the performance of three different machine learning algorithms: k-Nearest Neighbours (kNN), Naïve Bayes classifier (NBC), and Support Vector Machine (SVM). These algorithms were trained with 1931 samples. The final performance of the method, measured in a real traffic scenario containing 16 pedestrians and 469 samples of non-pedestrians, shows sensitivity (81.2%), accuracy (96.2%) and specificity (96.8%). PMID:28025565
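A schematic of the projection-and-classify step described above; feature extraction from the XY/XZ/YZ projections is reduced to a placeholder occupancy histogram, the point clouds are synthetic, and the three classifiers mirror the comparison in the abstract:

```python
# Sketch: compare kNN, naive Bayes and SVM on features extracted from the
# three axis-aligned projections of the points inside a candidate cube.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def cube_features(points):
    # Placeholder: 2D occupancy histograms of the XY, XZ and YZ projections.
    h = [np.histogram2d(points[:, a], points[:, b], bins=8)[0].ravel()
         for a, b in ((0, 1), (0, 2), (1, 2))]
    return np.concatenate(h)

rng = np.random.default_rng(0)
clouds = [rng.normal(size=(300, 3)) for _ in range(200)]  # synthetic cubes
X = np.array([cube_features(c) for c in clouds])
y = rng.integers(0, 2, 200)                               # pedestrian / not

for clf in (KNeighborsClassifier(5), GaussianNB(), SVC()):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())
```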
Cadena, Natalia L.; Cue-Sampedro, Rodrigo; Siller, Héctor R.; Arizmendi-Morquecho, Ana M.; Rivera-Solorio, Carlos I.; Di-Nardo, Santiago
2013-01-01
The manufacture of medical and aerospace components made of titanium alloys and other difficult-to-cut materials requires the parallel development of high performance cutting tools coated with materials capable of enhanced tribological and resistance properties. In this matter, a thin nanocomposite film made out of AlCrN (aluminum–chromium–nitride) was studied in this research, showing experimental work in the deposition process and its characterization. A heat-treated monolayer coating, competitive with other coatings in the machining of titanium alloys, was analyzed. Different analysis and characterizations were performed on the manufactured coating by scanning electron microscopy and energy-dispersive X-ray spectroscopy (SEM-EDXS), and X-ray diffraction (XRD). Furthermore, the mechanical behavior of the coating was evaluated through hardness test and tribology with pin-on-disk to quantify friction coefficient and wear rate. Finally, machinability tests using coated tungsten carbide cutting tools were executed in order to determine its performance through wear resistance, which is a key issue of cutting tools in high-end cutting at elevated temperatures. It was demonstrated that the specimen (with lower friction coefficient than previous research) is more efficient in machinability tests in Ti6Al4V alloys. Furthermore, the heat-treated monolayer coating presented better performance in comparison with a conventional monolayer of AlCrN coating. PMID:28809266
Research on mechanical and sensoric set-up for high strain rate testing of high performance fibers
NASA Astrophysics Data System (ADS)
Unger, R.; Schegner, P.; Nocke, A.; Cherif, C.
2017-10-01
Within this research project, the tensile behavior of high performance fibers, such as carbon fibers, is investigated under high velocity loads. This paper focuses on the clamp set-up of two testing machines. Based on a kinematic model, weight-optimized clamps are designed and evaluated. By analyzing the complex dynamic behavior of conventional high velocity testing machines, it has been shown that the impact typically exhibits an elastic characteristic. This leads to barely predictable breaking speeds and will not work at higher speeds, where the acceleration force exceeds material specifications. Therefore, a plastic impact behavior has to be achieved, even at lower testing speeds. This type of impact behavior at lower speeds can be realized by means of some minor test set-up adaptions.
2017-01-01
Background Machine learning techniques may be an effective and efficient way to classify open-text reports on doctor’s activity for the purposes of quality assurance, safety, and continuing professional development. Objective The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors’ professional performance in the United Kingdom. Methods We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors collected from a survey of clinicians’ colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Results Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for codes relating to “popular” (recall=.97), “innovator” (recall=.98), and “respected” (recall=.87) codes and was lower for the “interpersonal” (recall=.80) and “professional” (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined together into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as “respected,” “professional,” and “interpersonal” related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P<.05). Scores did not vary between doctors who were rated as popular or innovative and those who were not rated at all (P>.05). Conclusions Machine learning algorithms can classify open-text feedback of doctor performance into multiple themes derived by human raters with high performance. Colleague open-text comments that signal respect, professionalism, and being interpersonal may be key indicators of doctor’s performance. PMID:28298265
Gibbons, Chris; Richards, Suzanne; Valderas, Jose Maria; Campbell, John
2017-03-15
Machine learning techniques may be an effective and efficient way to classify open-text reports on doctor's activity for the purposes of quality assurance, safety, and continuing professional development. The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors' professional performance in the United Kingdom. We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors collected from a survey of clinicians' colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for codes relating to "popular" (recall=.97), "innovator" (recall=.98), and "respected" (recall=.87) codes and was lower for the "interpersonal" (recall=.80) and "professional" (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined together into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as "respected," "professional," and "interpersonal" related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P<.05). Scores did not vary between doctors who were rated as popular or innovative and those who were not rated at all (P>.05). Machine learning algorithms can classify open-text feedback of doctor performance into multiple themes derived by human raters with high performance. Colleague open-text comments that signal respect, professionalism, and being interpersonal may be key indicators of doctor's performance. ©Chris Gibbons, Suzanne Richards, Jose Maria Valderas, John Campbell. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 15.03.2017.
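The preceding two records describe the same pipeline: vectorized open-text comments, several classifiers, recall as the measure of human-machine agreement, and 10-fold cross-validation. A minimal sketch of that kind of pipeline follows, assuming scikit-learn; the comments and theme labels are hypothetical placeholders, not the GMC-CQ data, and the three base classifiers are illustrative rather than the study's eight.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical stand-ins for coded open-text comments and one binary theme label.
comments = ["always respectful to colleagues", "pioneered a new clinic workflow",
            "communicates poorly under pressure", "well liked by the whole team"] * 50
labels = [1, 0, 0, 1] * 50   # 1 = "respected" theme present (illustrative coding)

ensemble = VotingClassifier([("lr", LogisticRegression(max_iter=1000)),
                             ("nb", MultinomialNB()),
                             ("rf", RandomForestClassifier(n_estimators=200))])
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), ensemble)
recall = cross_val_score(model, comments, labels, cv=10, scoring="recall")
print(f"10-fold recall: {recall.mean():.2f} +/- {recall.std():.2f}")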
Support vector machine in machine condition monitoring and fault diagnosis
NASA Astrophysics Data System (ADS)
Widodo, Achmad; Yang, Bo-Suk
2007-08-01
Recently, the issue of machine condition monitoring and fault diagnosis as a part of maintenance systems has become global, owing to the potential advantages to be gained from reduced maintenance costs, improved productivity and increased machine availability. This paper presents a survey of machine condition monitoring and fault diagnosis using support vector machine (SVM). It attempts to summarize and review the recent research and developments of SVM in machine condition monitoring and diagnosis. Numerous methods have been developed based on intelligent systems such as artificial neural networks, fuzzy expert systems, condition-based reasoning, random forest, etc. However, the use of SVM for machine condition monitoring and fault diagnosis is still rare. SVM has excellent generalization performance, so it can produce high classification accuracy for machine condition monitoring and diagnosis. Up to 2006, the use of SVM in machine condition monitoring and fault diagnosis had tended to develop towards expertise orientation and problem-oriented domains. Finally, the ability to continually change and obtain novel ideas for machine condition monitoring and fault diagnosis using SVM remains future work.
A high performance parallel algorithm for 1-D FFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, R.C.; Gustavson, F.G.; Zubair, M.
1994-12-31
In this paper the authors propose a parallel high performance FFT algorithm based on a multi-dimensional formulation. They use this to solve a commonly encountered FFT based kernel on a distributed memory parallel machine, the IBM scalable parallel system, SP1. The kernel requires a forward FFT computation of an input sequence, multiplication of the transformed data by a coefficient array, and finally an inverse FFT computation of the resultant data. They show that the multi-dimensional formulation helps in reducing the communication costs and also improves the single node performance by effectively utilizing the memory system of the node. They implemented this kernel on the IBM SP1 and observed a performance of 1.25 GFLOPS on a 64-node machine.
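The kernel named in this record is compact enough to state directly: forward FFT, pointwise multiplication by a coefficient array, inverse FFT. A serial NumPy sketch is below; the parallel multi-dimensional SP1 formulation that the paper actually contributes is not reproduced, and the problem size is hypothetical.

import numpy as np

n = 1 << 20                                         # transform length (hypothetical)
x = np.random.rand(n)                               # input sequence
coeff = np.random.rand(n) + 1j * np.random.rand(n)  # coefficient array

# forward FFT -> pointwise multiply -> inverse FFT
y = np.fft.ifft(np.fft.fft(x) * coeff)
print(y.shape, y.dtype)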
Adapting human-machine interfaces to user performance.
Danziger, Zachary; Fishbach, Alon; Mussa-Ivaldi, Ferdinando A
2008-01-01
The goal of this study was to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user of a human-machine interface and the controlled device. In this experiment, subjects' high-dimensional finger motions remotely controlled the joint angles of a simulated planar 2-link arm, which was used to hit targets on a computer screen. Subjects were required to move the cursor at the endpoint of the simulated arm.
1986-05-01
Testing was conducted in air, using a SATEC Systems computer-controlled servohydraulic testing machine. This machine uses a minicomputer (Digital PDP 11/34) ... One test (in the overall test program) was run using a feature of the SATEC machine called combinatorial feedback, which allowed a user-defined ... [Equations (4.3) and (4.4) for Q are garbled beyond recovery in this extract.] Creep data for DS MAR-M246, containing no hafnium, from Reference 99 were used.
Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Secchi, Simone; Tumeo, Antonino; Villa, Oreste
Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.
Pre-use anesthesia machine check; certified anesthesia technician based quality improvement audit.
Al Suhaibani, Mazen; Al Malki, Assaf; Al Dosary, Saad; Al Barmawi, Hanan; Pogoku, Mahdhav
2014-01-01
Quality assurance of providing work-ready machines in the multiple operating theatres of a modern tertiary medical center in Riyadh. The aim of the following study is to maintain a high quality environment for workers and patients in surgical operating rooms. A technician-based audit used key performance indicators to assure inspection and pass-testing of machine worthiness for use daily and between cases, and, in case of unexpected failure, quick replacement with another ready-to-use anesthetic machine. The anesthetic machines in all operating rooms are inspected daily and continuously, passed as ready by technicians, and verified by an anesthesiologist consultant or assistant consultant. The daily records of each machine were collected and then inspected by the quality improvement committee for descriptive analysis, reporting the degree of staff compliance to daily inspection as "met" items, machines replaced during use, and overall compliance. Descriptive statistics used Microsoft Excel 2003 tables and graphs of sums and percentages of the items studied in this audit. The audit found a high compliance percentage and a low rate of machine replacement, indicating readiness of the machines in use and quick machine switching. The authors are able to conclude that following regular inspection and running the self-check recommended by the manufacturers can help avert any hazard of anesthesia machine failure during an operation. Furthermore, quick replacement of the anesthesia machine when unexpectedly required contributes to highly assured operative utilization of the man-machine interface in modern surgical operating rooms.
Improvement of the COP of the LiBr-Water Double-Effect Absorption Cycles
NASA Astrophysics Data System (ADS)
Shitara, Atsushi
Prevention of global warming has created a great necessity for energy saving. This applies to the improvement of the COP of absorption chiller-heaters. We started the development of a high efficiency gas-fired double-effect absorption chiller-heater using LiBr-H2O to achieve target performance in the short or middle term. To maintain marketability, the volume of the high efficiency machine has been set at or below that of the conventional machine. Both absorption cycle technology for improving the COP and element technology for downsizing the machine are necessary in this development. In this study, the former is investigated. In this report, first of all, the target performance has been set at a cooling COP of 1.35 (on HHV), which is 0.35 higher than the COP of 1.0 for conventional machines in the market. This COP of 1.35 is practically close to the maximum limit achievable by a double-effect absorption chiller-heater. Next, the design condition of each element to achieve the target performance and the effect of each measure to improve the COP are investigated. Moreover, as a result of comparing the various flows (series, parallel, reverse) to which each measure is applied, it has been found that the optimum cycle is the parallel flow.
Machine Learning methods for Quantitative Radiomic Biomarkers.
Parmar, Chintan; Grossmann, Patrick; Bussink, Johan; Lambin, Philippe; Aerts, Hugo J W L
2015-08-17
Radiomics extracts and mines a large number of medical imaging features quantifying tumor phenotypic characteristics. Highly accurate and reliable machine-learning approaches can drive the success of radiomic applications in clinical care. In this radiomic study, fourteen feature selection methods and twelve classification methods were examined in terms of their performance and stability for predicting overall survival. A total of 440 radiomic features were extracted from pre-treatment computed tomography (CT) images of 464 lung cancer patients. To ensure the unbiased evaluation of different machine-learning methods, publicly available implementations along with reported parameter configurations were used. Furthermore, we used two independent radiomic cohorts for training (n = 310 patients) and validation (n = 154 patients). We identified that the Wilcoxon test based feature selection method WLCX (stability = 0.84 ± 0.05, AUC = 0.65 ± 0.02) and the random forest classification method RF (RSD = 3.52%, AUC = 0.66 ± 0.03) had the highest prognostic performance with high stability against data perturbation. Our variability analysis indicated that the choice of classification method is the most dominant source of performance variation (34.21% of total variance). Identification of optimal machine-learning methods for radiomic applications is a crucial step towards stable and clinically relevant radiomic biomarkers, providing a non-invasive way of quantifying and monitoring tumor-phenotypic characteristics in clinical practice.
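The best-performing combination reported above, Wilcoxon-test feature ranking plus a random forest scored by AUC on an independent cohort, can be sketched as follows. This is a hedged illustration assuming scikit-learn and SciPy; the feature matrices and survival labels are synthetic stand-ins for the 310/154-patient radiomic cohorts, and the 30-feature cut-off is an assumption.

import numpy as np
from scipy.stats import ranksums
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(310, 440)), rng.integers(0, 2, size=310)
X_valid, y_valid = rng.normal(size=(154, 440)), rng.integers(0, 2, size=154)

# Rank the 440 features by Wilcoxon rank-sum p-value between outcome groups (WLCX).
pvals = np.array([ranksums(X_train[y_train == 0, j], X_train[y_train == 1, j]).pvalue
                  for j in range(X_train.shape[1])])
top = np.argsort(pvals)[:30]   # keep the 30 lowest p-values (cut-off is an assumption)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train[:, top], y_train)
auc = roc_auc_score(y_valid, rf.predict_proba(X_valid[:, top])[:, 1])
print(f"validation AUC = {auc:.2f}")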
Improved Saturated Hydraulic Conductivity Pedotransfer Functions Using Machine Learning Methods
NASA Astrophysics Data System (ADS)
Araya, S. N.; Ghezzehei, T. A.
2017-12-01
Saturated hydraulic conductivity (Ks) is one of the fundamental hydraulic properties of soils. Its measurement, however, is cumbersome, and pedotransfer functions (PTFs) are often used to estimate it instead. Despite much progress over the years, generic PTFs that estimate hydraulic conductivity generally perform poorly. We develop significantly improved PTFs by applying state-of-the-art machine learning techniques coupled with high-performance computing on a large database of over 20,000 soils—the USKSAT and Florida Soil Characterization databases. We compared the performance of four machine learning algorithms (k-nearest neighbors, gradient boosted model, support vector machine, and relevance vector machine) and evaluated the relative importance of several soil properties in explaining Ks. An attempt is also made to better account for soil structural properties; we evaluated the importance of variables derived from transformations of soil water retention characteristics and other soil properties. The gradient boosted models gave the best performance, with root mean square errors less than 0.7 and mean errors on the order of 0.01 on a log scale of Ks [cm/h]. The effective particle size, D10, was found to be the single most important predictor. Other important predictors included percent clay, bulk density, organic carbon percent, coefficient of uniformity, and values derived from water retention characteristics. Model performances were consistently better for Ks values greater than 10 cm/h. This study maximizes the extraction of information from a large database to develop generic machine learning based PTFs to estimate Ks. The study also evaluates the importance of various soil properties and their transformations in explaining Ks.
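A minimal sketch of a gradient-boosted PTF for log10(Ks) in the spirit of the record above, assuming scikit-learn: the predictors (D10, clay, bulk density, organic carbon) follow the reported variable importances, but the data are synthetic placeholders, not the USKSAT or Florida databases.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
n = 2000
X = np.column_stack([rng.uniform(0.01, 2.0, n),  # D10, effective particle size [mm]
                     rng.uniform(0, 60, n),      # clay [%]
                     rng.uniform(1.0, 1.8, n),   # bulk density [g/cm^3]
                     rng.uniform(0, 5, n)])      # organic carbon [%]
# Synthetic target loosely tying log10(Ks) to D10 and clay, plus noise.
log_ks = 1.5 * np.log10(X[:, 0] + 0.01) - 0.02 * X[:, 1] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, log_ks, random_state=0)
gbm = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, gbm.predict(X_te)) ** 0.5
print(f"RMSE on log10(Ks) [cm/h]: {rmse:.2f}")   # the study reports RMSE < 0.7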
Li, Yang; Yang, Jianyi
2017-04-24
The prediction of protein-ligand binding affinity has recently been improved remarkably by machine-learning-based scoring functions. For example, using a set of simple descriptors representing the atomic distance counts, the RF-Score improves the Pearson correlation coefficient to about 0.8 on the core set of the PDBbind 2007 database, which is significantly higher than the performance of any conventional scoring function on the same benchmark. A few studies have been made to discuss the performance of machine-learning-based methods, but the reason for this improvement remains unclear. In this study, by systematically controlling the structural and sequence similarity between the training and test proteins of the PDBbind benchmark, we demonstrate that protein structural and sequence similarity makes a significant impact on machine-learning-based methods. After removal of training proteins that are highly similar to the test proteins identified by structure alignment and sequence alignment, machine-learning-based methods trained on the new training sets no longer outperform the conventional scoring functions. On the contrary, the performance of conventional functions like X-Score is relatively stable no matter what training data are used to fit the weights of its energy terms.
Evaluating the electrical discharge machining (EDM) parameters with using carbon nanotubes
NASA Astrophysics Data System (ADS)
Sari, M. M.; Noordin, M. Y.; Brusa, E.
2012-09-01
Electrical discharge machining (EDM) is one of the most accurate non-traditional manufacturing processes available for creating tiny apertures and complex or simple shapes and geometries within parts and assemblies. Performance of the EDM process is usually evaluated in terms of surface roughness and the existence of cracks, voids and a recast layer on the surface of the product after machining. Unfortunately, the high heat generated in the electrically discharged material during the EDM process decreases the quality of products. Carbon nanotubes display unexpected strength and unique electrical and thermal properties. Multi-wall carbon nanotubes were therefore purposely added to the dielectric used in the EDM process to improve its performance when machining AISI H13 tool steel with copper electrodes. EDM measures such as material removal rate, electrode wear rate, surface roughness and recast layer are first evaluated, then compared to the outcome of EDM performed without nanotubes mixed into the dielectric. The independent variables investigated are pulse on time, peak current and interval time. Experimental evidence shows that the EDM process operated by mixing multi-wall carbon nanotubes into the dielectric is more efficient, particularly if machining parameters are set at low pulse energy.
Entanglement-Based Machine Learning on a Quantum Computer
NASA Astrophysics Data System (ADS)
Cai, X.-D.; Wu, D.; Su, Z.-E.; Chen, M.-C.; Wang, X.-L.; Li, Li; Liu, N.-L.; Lu, C.-Y.; Pan, J.-W.
2015-03-01
Machine learning, a branch of artificial intelligence, learns from previous experience to optimize performance, which is ubiquitous in various fields such as computer sciences, financial analysis, robotics, and bioinformatics. A challenge is that machine learning with the rapidly growing "big data" could become intractable for classical computers. Recently, quantum machine learning algorithms [Lloyd, Mohseni, and Rebentrost, arXiv.1307.0411] were proposed which could offer an exponential speedup over classical algorithms. Here, we report the first experimental entanglement-based classification of two-, four-, and eight-dimensional vectors to different clusters using a small-scale photonic quantum computer, which are then used to implement supervised and unsupervised machine learning. The results demonstrate the working principle of using quantum computers to manipulate and classify high-dimensional vectors, the core mathematical routine in machine learning. The method can, in principle, be scaled to larger numbers of qubits, and may provide a new route to accelerate machine learning.
Balachandran, Anoop T; Gandia, Kristine; Jacobs, Kevin A; Streiner, David L; Eltoukhy, Moataz; Signorile, Joseph F
2017-11-01
Power training has been shown to be more effective than conventional resistance training for improving physical function in older adults; however, most trials have used pneumatic machines during training. Considering that the general public typically has access to plate-loaded machines, the effectiveness and safety of power training using plate-loaded machines compared to pneumatic machines is an important consideration. The purpose of this investigation was to compare the effects of high-velocity training using pneumatic machines (Pn) versus standard plate-loaded machines (PL). Independently-living older adults, 60 years or older, were randomized into two groups: pneumatic machine (Pn, n=19) and plate-loaded machine (PL, n=17). After 12 weeks of high-velocity training twice per week, groups were analyzed using an intention-to-treat approach. Primary outcomes were lower body power measured using a linear transducer and upper body power using medicine ball throw. Secondary outcomes included lower and upper body muscle strength, the Physical Performance Battery (PPB), gallon jug test, the timed up-and-go test, and self-reported function using the Patient Reported Outcomes Measurement Information System (PROMIS) and an online video questionnaire. Outcome assessors were blinded to group membership. Lower body power significantly improved in both groups (Pn: 19%, PL: 31%), with no significant difference between the groups (Cohen's d=0.4, 95% CI (-1.1, 0.3)). Upper body power significantly improved only in the PL group, but showed no significant difference between the groups (Pn: 3%, PL: 6%). For balance, there was a significant difference between the groups favoring the Pn group (d=0.7, 95% CI (0.1, 1.4)); however, there were no statistically significant differences between groups for PPB, gallon jug transfer, muscle strength, timed up-and-go or self-reported function. No serious adverse events were reported in either of the groups. Pneumatic and plate-loaded machines were effective in improving lower body power and physical function in older adults. The results suggest that power training can be safely and effectively performed by older adults using either pneumatic or plate-loaded machines. Copyright © 2017 Elsevier Inc. All rights reserved.
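The between-group comparisons above are reported as Cohen's d with 95% confidence intervals. A small sketch of that effect-size arithmetic follows; the pooled-SD formula and the normal-approximation CI are standard, while the group data here are hypothetical, not the trial's measurements.

import numpy as np

def cohens_d(a, b):
    """Cohen's d with pooled SD and an approximate 95% confidence interval."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1))
                        / (na + nb - 2))
    d = (np.mean(a) - np.mean(b)) / pooled_sd
    se = np.sqrt((na + nb) / (na * nb) + d ** 2 / (2 * (na + nb)))  # normal approximation
    return d, (d - 1.96 * se, d + 1.96 * se)

rng = np.random.default_rng(3)
pn = rng.normal(19, 15, 19)   # % change in lower-body power, pneumatic group (hypothetical)
pl = rng.normal(31, 15, 17)   # % change, plate-loaded group (hypothetical)
d, ci = cohens_d(pn, pl)
print(f"d = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")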
A machine learning-based framework to identify type 2 diabetes through electronic health records
Zheng, Tao; Xie, Wei; Xu, Liling; He, Xiaoying; Zhang, Ya; You, Mingrong; Yang, Gong; Chen, You
2016-01-01
Objective To discover diverse genotype-phenotype associations affiliated with Type 2 Diabetes Mellitus (T2DM) via genome-wide association study (GWAS) and phenome-wide association study (PheWAS), more cases (T2DM subjects) and controls (subjects without T2DM) are required to be identified (e.g., via Electronic Health Records (EHR)). However, existing expert-based identification algorithms often suffer from a low recall rate and could miss a large number of valuable samples under conservative filtering standards. The goal of this work is to develop a semi-automated framework based on machine learning as a pilot study to liberalize the filtering criteria, improving the recall rate while keeping a low false positive rate. Materials and methods We propose a data-informed framework for identifying subjects with and without T2DM from EHR via feature engineering and machine learning. We evaluate and contrast the identification performance of widely-used machine learning models within our framework, including k-Nearest-Neighbors, Naïve Bayes, Decision Tree, Random Forest, Support Vector Machine and Logistic Regression. Our framework was evaluated on 300 patient samples (161 cases, 60 controls and 79 unconfirmed subjects), randomly selected from a 23,281-subject diabetes-related cohort retrieved from a regional distributed EHR repository spanning 2012 to 2014. Results We apply the top-performing machine learning algorithms to the engineered features. We benchmark and contrast the accuracy, precision, AUC, sensitivity and specificity of the classification models against the state-of-the-art expert algorithm for identification of T2DM subjects. Our results indicate that the framework achieved high identification performance (∼0.98 in average AUC), much higher than the state-of-the-art algorithm (0.71 in AUC). Discussion Expert algorithm-based identification of T2DM subjects from EHR is often hampered by high missing rates due to conservative selection criteria. Our framework leverages machine learning and feature engineering to loosen such selection criteria and achieve a high identification rate of cases and controls. Conclusions Our proposed framework demonstrates a more accurate and efficient approach for identifying subjects with and without T2DM from EHR. PMID:27919371
A machine learning-based framework to identify type 2 diabetes through electronic health records.
Zheng, Tao; Xie, Wei; Xu, Liling; He, Xiaoying; Zhang, Ya; You, Mingrong; Yang, Gong; Chen, You
2017-01-01
To discover diverse genotype-phenotype associations affiliated with Type 2 Diabetes Mellitus (T2DM) via genome-wide association study (GWAS) and phenome-wide association study (PheWAS), more cases (T2DM subjects) and controls (subjects without T2DM) are required to be identified (e.g., via Electronic Health Records (EHR)). However, existing expert-based identification algorithms often suffer from a low recall rate and could miss a large number of valuable samples under conservative filtering standards. The goal of this work is to develop a semi-automated framework based on machine learning as a pilot study to liberalize the filtering criteria, improving the recall rate while keeping a low false positive rate. We propose a data-informed framework for identifying subjects with and without T2DM from EHR via feature engineering and machine learning. We evaluate and contrast the identification performance of widely-used machine learning models within our framework, including k-Nearest-Neighbors, Naïve Bayes, Decision Tree, Random Forest, Support Vector Machine and Logistic Regression. Our framework was evaluated on 300 patient samples (161 cases, 60 controls and 79 unconfirmed subjects), randomly selected from a 23,281-subject diabetes-related cohort retrieved from a regional distributed EHR repository spanning 2012 to 2014. We apply the top-performing machine learning algorithms to the engineered features. We benchmark and contrast the accuracy, precision, AUC, sensitivity and specificity of the classification models against the state-of-the-art expert algorithm for identification of T2DM subjects. Our results indicate that the framework achieved high identification performance (∼0.98 in average AUC), much higher than the state-of-the-art algorithm (0.71 in AUC). Expert algorithm-based identification of T2DM subjects from EHR is often hampered by high missing rates due to conservative selection criteria. Our framework leverages machine learning and feature engineering to loosen such selection criteria and achieve a high identification rate of cases and controls. Our proposed framework demonstrates a more accurate and efficient approach for identifying subjects with and without T2DM from EHR. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
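The benchmarking step that both versions of this record describe, six classifiers compared on engineered EHR features and scored by AUC, can be sketched as below, assuming scikit-learn. The feature matrix and case/control labels are synthetic placeholders for the 161-case, 60-control sample.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(221, 40))                               # engineered EHR features (synthetic)
y = np.r_[np.ones(161, dtype=int), np.zeros(60, dtype=int)]  # 161 cases, 60 controls

models = {"kNN": KNeighborsClassifier(), "NB": GaussianNB(),
          "DT": DecisionTreeClassifier(), "RF": RandomForestClassifier(),
          "SVM": SVC(), "LR": LogisticRegression(max_iter=1000)}
for name, clf in models.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.2f} +/- {auc.std():.2f}")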
Ji, Renjie; Liu, Yonghong; Diao, Ruiqiang; Xu, Chenchen; Li, Xiaopeng; Cai, Baoping; Zhang, Yanzhen
2014-01-01
Engineering ceramics have been widely used in modern industry for their excellent physical and mechanical properties, but they are difficult to machine owing to their high hardness and brittleness. Electrical discharge machining (EDM) is an appropriate process for machining engineering ceramics provided they are electrically conducting. However, the electrical resistivity of popular engineering ceramics is high, and there has been no research on the relationship between the EDM parameters and the electrical resistivity of engineering ceramics. This paper investigates the effects of the electrical resistivity and EDM parameters, such as tool polarity, pulse interval, and electrode material, on the ZnO/Al2O3 ceramic's EDM performance, in terms of the material removal rate (MRR), electrode wear ratio (EWR), and surface roughness (SR). The results show that the electrical resistivity and the EDM parameters have a great influence on the EDM performance. The ZnO/Al2O3 ceramic with electrical resistivity up to 3410 Ω·cm can be effectively machined by EDM with a copper electrode, negative tool polarity, and a shorter pulse interval. Under most machining conditions, the MRR increases, and the SR decreases, with decreasing electrical resistivity. Moreover, the tool polarity and pulse interval each affect the EWR, and the electrical resistivity and electrode material have a combined effect on the EWR. Furthermore, the EDM performance of the ZnO/Al2O3 ceramic with electrical resistivity higher than 687 Ω·cm differs markedly from that with electrical resistivity lower than 687 Ω·cm when the electrode material changes. Microstructural analysis of the machined ZnO/Al2O3 ceramic surface shows that the material is removed by melting, evaporation and thermal spalling, and that material from the working fluid and the graphite electrode can transfer to the workpiece surface during electrical discharge machining of the ZnO/Al2O3 ceramic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollstein, T.; Pfeiffer, W.; Rombach, M.
1996-12-31
The cost of final machining covers a significant percentage of the whole cost of a ceramic component. This is due to the difficult machining of high performance ceramics. The high values of hardness and wear resistance, which are desired in many applications, hinder the process of machining. Only a few machining procedures are applicable to engineering ceramics, e.g. grinding, polishing or ultrasonic lapping, and the rate of material removal is considerably lower than for metals. In addition, crack generation in the surface regions during machining is easily possible due to the brittleness of the ceramics. The material removal during grinding, which is the most important machining procedure for engineering ceramics, takes place mainly by brittle fracture processes but also by ductile material removal. The complex stress conditions in the work piece below or in the vicinity of the grinding grits lead to a variety of cracks and crack systems like median cracks, lateral cracks or radial cracks, which extend in general ≤ 50 µm and which lead to the strength anisotropy of ground ceramics if certain grinding parameters are used.
Micro Fluidic Channel Machining on Fused Silica Glass Using Powder Blasting
Jang, Ho-Su; Cho, Myeong-Woo; Park, Dong-Sam
2008-01-01
In this study, micro fluid channels are machined on fused silica glass via powder blasting, a mechanical etching process, and the machining characteristics of the channels are experimentally evaluated. In the process, material removal is performed by the collision of micro abrasives injected by highly compressed air onto the target surface. This approach can be characterized as brittle-mode machining based on micro-crack propagation. Fused silica glass, a high purity synthetic amorphous silicon dioxide, is selected as the workpiece material. It has a very low thermal expansion coefficient, excellent optical qualities and exceptional transmittance over a wide spectral range, especially in the ultraviolet range. The powder blasting process parameters affecting the machined results are injection pressure, abrasive particle size and density, stand-off distance, number of nozzle scannings, and shape/size of the required patterns. In this study, the influence of the number of nozzle scannings, abrasive particle size, and pattern size on the formation of micro channels is investigated. Machined shapes and surface roughness are measured using a 3-dimensional vision profiler and the results are discussed. PMID:27879730
Chip morphology as a performance predictor during high speed end milling of soda lime glass
NASA Astrophysics Data System (ADS)
Bagum, M. N.; Konneh, M.; Abdullah, K. A.; Ali, M. Y.
2018-01-01
Soda lime glass has application in DNA arrays and lab-on-chip manufacturing. Although investigation revealed that machining of such brittle material is possible in ductile mode under controlled cutting parameters and tool geometry, it remains a challenging task. Furthermore, the attainment of ductile machining is usually assessed through examination of the machined surface texture. Soda lime glass is a strain rate and temperature sensitive material. Hence, the influence of the adiabatic heat generated during high speed end milling with an uncoated tungsten carbide tool on the attainment of a ductile surface is investigated in this research. Experimental runs were designed using central composite design (CCD), taking spindle speed, feed rate and depth of cut as input variables and tool-chip contact point temperature (Ttc) and surface roughness (Rt) as responses. Along with the machined surface texture, Rt and chip morphology were examined to assess the machinability of soda lime glass. The relation between Ttc and chip morphology was examined. The investigation showed that around the glass transition temperature (Tg), ductile chips were produced, and consequently a clean, ductile final machined surface was obtained.
NASA Astrophysics Data System (ADS)
Zhang, Guan-Jun; Zhao, Wen-Bin; Ma, Xin-Pei; Li, Guang-Xin; Ma, Kui; Zheng, Nan; Yan, Zhang
Ceramic materials have been widely used as insulators in vacuum. Their high hardness and brittleness bring some difficulty to their application. A new kind of machinable ceramic was invented recently. This ceramic can be machined easily and accurately after being sintered, which provides the possibility of making insulators with fine and complicated configurations. This paper studies its surface insulation performance and flashover phenomena under pulsed excitation in vacuum. Ceramic samples with different crystallization parameters are tested at a vacuum level of 10^-4 Pa. The machinable ceramic shows better surface insulation performance than comparative Al2O3 and glass samples. The effect of crystallization level on the trap density and flashover current is also presented. After many flashover shots, the microscopic surface patterns of the different samples are observed to investigate the damage status, which can be explained by a thermal damage mechanism.
Clock Agreement Among Parallel Supercomputer Nodes
Jones, Terry R.; Koenig, Gregory A.
2014-04-30
This dataset presents measurements that quantify the clock synchronization time-agreement characteristics among several high performance computers including the current world's most powerful machine for open science, the U.S. Department of Energy's Titan machine sited at Oak Ridge National Laboratory. These ultra-fast machines derive much of their computational capability from extreme node counts (over 18000 nodes in the case of the Titan machine). Time-agreement is commonly utilized by parallel programming applications and tools, distributed programming application and tools, and system software. Our time-agreement measurements detail the degree of time variance between nodes and how that variance changes over time. The dataset includes empirical measurements and the accompanying spreadsheets.
Research on bearing fault diagnosis of large machinery based on mathematical morphology
NASA Astrophysics Data System (ADS)
Wang, Yu
2018-04-01
To study the automatic diagnosis of large machinery faults based on the support vector machine, the four common faults of large machinery are classified and identified using a support vector machine. The extracted feature vectors are used as inputs, and the classifier is trained and tested with a multi-classification method. The optimal parameters of the support vector machine are found by trial and error and by cross-validation. Then, the support vector machine is compared with a BP neural network. The results show that the support vector machine trains in a short time and achieves high classification accuracy, so it is more suitable for research on fault diagnosis in large machinery. It can therefore be concluded that the training speed of the support vector machine (SVM) is fast and its performance is good.
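The parameter search described above, trial and error plus cross-validation, corresponds to a cross-validated grid search over the SVM's hyper-parameters. A minimal sketch follows, assuming scikit-learn; the vibration-feature vectors, the four fault classes, and the C/gamma grid are illustrative assumptions.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 16))     # extracted bearing-fault feature vectors (synthetic)
y = rng.integers(0, 4, size=400)   # four common fault classes

search = GridSearchCV(SVC(kernel="rbf"),
                      {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]},
                      cv=5)
search.fit(X, y)
print("best parameters:", search.best_params_)
print(f"cross-validated accuracy: {search.best_score_:.2f}")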
Checkpoint repair for high-performance out-of-order execution machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwu, W.M.W.; Patt, Y.N.
Out-of-order execution and branch prediction are two mechanisms that can be used profitably in the design of supercomputers to increase performance. Proper exception handling and branch prediction miss handling in an out-of-order execution machine require some kind of repair mechanism which can restore the machine to a known previous state. In this paper the authors present a class of repair mechanisms using the concept of checkpointing. The authors derive several properties of checkpoint repair mechanisms. In addition, they provide algorithms for performing checkpoint repair that incur little overhead in time and modest cost in hardware, and which require no additional complexity or time for use with write-back cache memory systems compared with write-through cache memory systems, contrary to statements made by previous researchers.
NASA Technical Reports Server (NTRS)
Kramer, Williams T. C.; Simon, Horst D.
1994-01-01
This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will utilize the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.
Development of a low energy micro sheet forming machine
NASA Astrophysics Data System (ADS)
Razali, A. R.; Ann, C. T.; Shariff, H. M.; Kasim, N. I.; Musa, M. A.; Ahmad, A. F.
2017-10-01
It is expected that with the miniaturization of the materials being processed, energy consumption is also `miniaturized' proportionally. The focus of this study was to design a low energy micro-sheet-forming machine for thin sheet metal applications and to fabricate a micro-sheet-forming machine powered by low direct current. A prototype of a low energy system for a micro-sheet-forming machine, including mechanical and electronic elements, was developed. The machine was tested for its performance in terms of natural frequency, punching forces, punching speed and capability, and energy consumption (single punch and frequency-time based). Based on the experiments, the machine can perform 600 strokes per minute, and the process is unaffected by the machine's natural frequency. It was also found that sub-joule energy was required for a single stroke of the punching/blanking process. Carbon steel shim up to 100 µm thick was successfully tested and punched. It is concluded that a low-power forming machine is feasible to develop and can be used to replace high-powered machinery to form micro-products/parts.
NASA Astrophysics Data System (ADS)
Nur, Rusdi; Suyuti, Muhammad Arsyad; Susanto, Tri Agus
2017-06-01
Aluminum is widely utilized in the industrial sector. There are several advantages of aluminum, i.e. good flexibility and formability, high corrosion resistance, and high electrical and heat conductivity. Despite these characteristics, however, pure aluminum is rarely used because of its lack of strength. Thus, most of the aluminum used in the industrial sectors is in alloy form. Sustainable machining can be considered to link the transformation of input materials and energy/power demand into finished goods. Machining processes are responsible for environmental effects owing to their power consumption. The cutting conditions have been optimized to minimize the cutting power, which is the power consumed for cutting. This paper presents an experimental study of sustainable machining of an Al-11%Si base alloy that was machined without any cooling system to assess the capacity for reducing power consumption. The cutting force was measured and the cutting power was calculated. Both the cutting force and the cutting power were analyzed and modeled using a central composite design (CCD). The results of this study indicate that the cutting speed affects machining performance and that optimum cutting conditions have to be determined, while sustainable machining can be pursued in terms of minimizing power consumption and cutting force. The model developed in this study can be used for process evaluation and optimization to determine optimal cutting conditions for the performance of the whole process.
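The modeling step named above, fitting cutting power over a central composite design and using the model for optimization, is a second-order response surface. The sketch below is a hedged illustration assuming scikit-learn; the factor levels and the synthetic power response are placeholders for the experimental data.

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
# Coded CCD factors: cutting speed, feed rate, depth of cut (axial points near ±1.682)
X = rng.uniform(-1.682, 1.682, size=(20, 3))
# Synthetic cutting power response [W]; not the experimental data.
power = (150 + 40 * X[:, 0] + 25 * X[:, 1] + 15 * X[:, 2]
         + 10 * X[:, 0] * X[:, 1] + rng.normal(0, 5, 20))

# Second-order (quadratic) response surface fitted by least squares.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, power)
print(f"R^2 = {model.score(X, power):.3f}")
print("predicted power at the centre point:", model.predict([[0.0, 0.0, 0.0]]))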
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tolbert, Leon M; Lee, Seong T
2010-01-01
This paper shows how to maximize the effect of the slanted air-gap structure of an interior permanent magnet synchronous motor with brushless field excitation (BFE) for application in a hybrid electric vehicle. The BFE structure offers high torque density at low speed and weakened flux at high speed. The unique slanted air-gap is intended to increase the output torque of the machine as well as to maximize the ratio of the back-emf of a machine that is controllable by BFE. This irregularly shaped air-gap makes a flux barrier along the d-axis flux path and decreases the d-axis inductance; as a result, the reluctance torque of the machine is much higher than a uniform air-gap machine, and so is the output torque. Also, the machine achieves a higher ratio of the magnitude of controllable back-emf. The determination of the slanted shape was performed by using magnetic equivalent circuit analysis and finite element analysis (FEA).
A self-learning camera for the validation of highly variable and pseudorandom patterns
NASA Astrophysics Data System (ADS)
Kelley, Michael
2004-05-01
Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to address applications of irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing, without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer, enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.
NASA Astrophysics Data System (ADS)
Miyazato, Itsuki; Tanaka, Yuzuru; Takahashi, Keisuke
2018-02-01
Two-dimensional (2D) magnets are explored in terms of data science and first principle calculations. Machine learning determines four descriptors for predicting the magnetic moments of 2D materials within reported 216 2D materials data. With the trained machine, 254 2D materials are predicted to have high magnetic moments. First principle calculations are performed to evaluate the predicted 254 2D materials where eight undiscovered stable 2D materials with high magnetic moments are revealed. The approach taken in this work indicates that undiscovered materials can be surfaced by utilizing data science and materials data, leading to an innovative way of discovering hidden materials.
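The workflow in this record, learning from the 216 reported 2D materials, predicting the rest, and shortlisting high-moment candidates for first-principles validation, can be sketched as below. This is a hedged illustration: the four descriptors are unnamed stand-ins (the paper identifies four but they are not listed here), the regressor choice is an assumption, and all data are synthetic.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
X_known = rng.normal(size=(216, 4))          # four descriptors per reported 2D material
moments = np.abs(rng.normal(1.0, 1.5, 216))  # magnetic moments (synthetic)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_known, moments)

X_candidates = rng.normal(size=(254, 4))     # unexplored 2D materials
predicted = model.predict(X_candidates)
shortlist = np.argsort(predicted)[::-1][:8]  # candidates for first-principles validation
print("indices with the highest predicted moments:", shortlist)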
Ultra-Compact Transputer-Based Controller for High-Level, Multi-Axis Coordination
NASA Technical Reports Server (NTRS)
Zenowich, Brian; Crowell, Adam; Townsend, William T.
2013-01-01
The design of machines that rely on arrays of servomotors such as robotic arms, orbital platforms, and combinations of both, imposes a heavy computational burden to coordinate their actions to perform coherent tasks. For example, the robotic equivalent of a person tracing a straight line in space requires enormously complex kinematics calculations, and complexity increases with the number of servo nodes. A new high-level architecture for coordinated servo-machine control enables a practical, distributed transputer alternative to conventional central processor electronics. The solution is inherently scalable, dramatically reduces bulkiness and number of conductor runs throughout the machine, requires only a fraction of the power, and is designed for cooling in a vacuum.
Study on on-machine defects measuring system on high power laser optical elements
NASA Astrophysics Data System (ADS)
Luo, Chi; Shi, Feng; Lin, Zhifan; Zhang, Tong; Wang, Guilin
2017-10-01
Surface defects on high power laser optical elements harm the performance of the imaging system, including increased energy consumption and damage to the film layer. To improve the detection of surface defects on high power laser optical elements, an on-machine defect measuring system was investigated. Firstly, the selection and design of components were completed through a working-condition analysis of the on-machine defect detection system. Processing algorithms were designed to realize the classification, recognition and evaluation of surface defects. A calibration experiment for scratches was performed using a self-made standard alignment plate. Finally, the detection and evaluation of surface defects on a large-diameter semi-cylindrical silicon mirror were realized. The calibration results show that the size deviation is less than 4%, which meets the precision requirement for defect detection. Through image detection, the on-machine system can accurately identify surface defects.
Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter
2017-06-28
High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.
Study on electroplating technology of diamond tools for machining hard and brittle materials
NASA Astrophysics Data System (ADS)
Cui, Ying; Chen, Jian Hua; Sun, Li Peng; Wang, Yue
2016-10-01
With the development of high speed cutting, ultra-precision machining and ultrasonic vibration techniques for processing hard and brittle materials, the requirements on cutting tools are becoming higher and higher. As electroplated diamond tools have distinct advantages, such as high adaptability, high durability, long service life and good dimensional stability, these cutting tools are effectively and extensively used in grinding hard and brittle materials. In this paper, the coating structure of the electroplated diamond tool is described. The electroplating process flow is presented, and the influence of pretreatment on the machining quality is analyzed. Through experimental research, a reasonable electrolyte formula, the electroplating process parameters and a suitable sanding method were determined. Meanwhile, a drilling experiment on glass-ceramic shows that the electroplating process can effectively improve the cutting performance of diamond tools. This work lays a good foundation for further improving the quality and efficiency of the machining of hard and brittle materials.
Yun, Seok Hyeon; Park, Sang Jin; Sim, Chang Sun; Sung, Joo Hyun; Kim, Ahra; Lee, Jang Myeong; Lee, Sang Hyun; Lee, Jiho
2017-01-01
Recently, noise transmitted from neighbors through the floor has become a serious social problem. Noise between floors can cause physical and psychological problems, and different types of floor impact sound (FIS) may have different effects on the human body and mind. The purpose of this study is to assess the responses of subjective feeling, task performance ability, cortisol and HRV for various types of floor impact sound. Ten men and 5 women were enrolled in our study, and an English listening test was performed under twelve different types of FIS, which were made by combinations of a bang machine (B), tapping machine (T), impact ball (I) and sound-proof mattress (M). The 15 subjects were exposed to each FIS for about 3 min, and the subjective annoyance, performance ability (English listening test), cortisol level of urine/saliva and heart rate variability (HRV) were examined. The sound pressure level (SPL) and frequency of the FIS were analyzed. Repeated-measures ANOVA, the paired t-test and the Wilcoxon signed rank test were performed for data analysis. The SPL of the tapping machine (T) was reduced by the soundproof mattress (M) by 3.9-7.3 dBA. The impact ball (I) was higher than other FIS at low frequency (31.5-125 Hz) by 10 dBA, and the tapping machine (T) was higher than other FIS at high frequency (2-4 kHz) by 10 dBA. The subjective annoyance was highest for the combination of bang machine and tapping machine (BT), and next for the tapping machine (T). The English listening score was also lowest under BT, and next under T. The difference in salivary cortisol levels between the various types of FIS was significant (p = 0.003). The change of HRV parameters with FIS type was significant for some parameters: total power (TP) (p = 0.004), low frequency (LF) (p = 0.002) and high frequency (HF) (p = 0.011). These results suggest that humans' subjective and objective responses differ according to FIS types and their combinations.
Peng, Jiangjun; Leung, Yee; Leung, Kwong-Sak; Wong, Man-Hon; Lu, Gang; Ballester, Pedro J.
2018-01-01
It has recently been claimed that the outstanding performance of machine-learning scoring functions (SFs) is exclusively due to the presence of training complexes with highly similar proteins to those in the test set. Here, we revisit this question using 24 similarity-based training sets, a widely used test set, and four SFs. Three of these SFs employ machine learning instead of the classical linear regression approach of the fourth SF (X-Score which has the best test set performance out of 16 classical SFs). We have found that random forest (RF)-based RF-Score-v3 outperforms X-Score even when 68% of the most similar proteins are removed from the training set. In addition, unlike X-Score, RF-Score-v3 is able to keep learning with an increasing training set size, becoming substantially more predictive than X-Score when the full 1105 complexes are used for training. These results show that machine-learning SFs owe a substantial part of their performance to training on complexes with dissimilar proteins to those in the test set, against what has been previously concluded using the same data. Given that a growing amount of structural and interaction data will be available from academic and industrial sources, this performance gap between machine-learning SFs and classical SFs is expected to enlarge in the future. PMID:29538331
Li, Hongjian; Peng, Jiangjun; Leung, Yee; Leung, Kwong-Sak; Wong, Man-Hon; Lu, Gang; Ballester, Pedro J
2018-03-14
It has recently been claimed that the outstanding performance of machine-learning scoring functions (SFs) is exclusively due to the presence of training complexes with highly similar proteins to those in the test set. Here, we revisit this question using 24 similarity-based training sets, a widely used test set, and four SFs. Three of these SFs employ machine learning instead of the classical linear regression approach of the fourth SF (X-Score which has the best test set performance out of 16 classical SFs). We have found that random forest (RF)-based RF-Score-v3 outperforms X-Score even when 68% of the most similar proteins are removed from the training set. In addition, unlike X-Score, RF-Score-v3 is able to keep learning with an increasing training set size, becoming substantially more predictive than X-Score when the full 1105 complexes are used for training. These results show that machine-learning SFs owe a substantial part of their performance to training on complexes with dissimilar proteins to those in the test set, against what has been previously concluded using the same data. Given that a growing amount of structural and interaction data will be available from academic and industrial sources, this performance gap between machine-learning SFs and classical SFs is expected to enlarge in the future.
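The core of the experiment — training a random forest scoring function on growing training sets and comparing it against a linear baseline on a fixed test set — can be sketched as follows, assuming scikit-learn; the synthetic descriptors and affinities below stand in for the PDBbind-derived complexes used in the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Placeholder stand-ins for protein-ligand descriptors and binding
# affinities; the study itself uses curated structural data.
X = rng.normal(size=(1105, 36))
y = X[:, :6].sum(axis=1) + 0.5 * np.sin(X[:, 6]) + rng.normal(scale=0.5, size=1105)
X_test, y_test = X[:200], y[:200]
X_train, y_train = X[200:], y[200:]

for n in (100, 300, 600, len(X_train)):   # growing training sets
    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    rf.fit(X_train[:n], y_train[:n])
    lin = LinearRegression().fit(X_train[:n], y_train[:n])  # classical-SF analogue
    rp_rf = np.corrcoef(y_test, rf.predict(X_test))[0, 1]
    rp_lin = np.corrcoef(y_test, lin.predict(X_test))[0, 1]
    print(f"n={n:4d}  Rp(RF)={rp_rf:.2f}  Rp(linear)={rp_lin:.2f}")
```

Under this kind of setup, the linear model's test correlation plateaus while the random forest keeps improving with more training data, which is the qualitative behavior the paper reports.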
Smart Screening System (S3) In Taconite Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daryoush Allaei; Angus Morison; David Tarnowski
2005-09-01
The conventional screening machines used in processing plants have had undesirably high noise and vibration levels. They have also had unsatisfactorily low screening efficiency, high energy consumption, high maintenance cost, low productivity, and poor worker safety. These conventional vibrating machines have been used in almost every processing plant. Most current material separation technology uses heavy and inefficient electric motors with an unbalanced rotating mass to generate the shaking. In addition to being excessively noisy, inefficient, and high-maintenance, these vibrating machines are often the bottleneck in the entire process. Furthermore, these motors, along with the vibrating machines and supporting structure, shake other machines and structures in the vicinity, which increases maintenance costs while reducing worker health and safety. The conventional vibrating fine screens at taconite processing plants have had the same problems as those listed above, resulting in lower screening efficiency, higher energy and maintenance costs, lower productivity, and worker safety concerns. The focus of this work is on the design of a high performance screening machine suitable for taconite processing plants. SmartScreens™ technology uses miniaturized motors, based on smart materials, to generate the shaking. The underlying technologies are Energy Flow Control™ and Vibration Control by Confinement™. These concepts are used to direct and confine energy flow efficiently and effectively to the screening function. SmartScreens™ technology addresses problems related to noise and vibration, screening efficiency, productivity, maintenance cost and worker safety. Successful development of SmartScreens™ technology will bring drastic changes to the screening and physical separation industry. The final designs for key components of the SmartScreens™ have been developed. The key components include the smart motor and associated electronics, resonators, and supporting structural elements. It is shown that the smart motors have acceptable life and performance. Resonator (or motion amplifier) designs are selected based on the final system requirements and vibration characteristics. All the components for a fully functional prototype have been fabricated. The development program is on schedule. The last semi-annual report described the process of FE model validation and correlation with experimental data in terms of dynamic performance and predicted stresses. It also detailed efforts to make the supporting structure less important to system performance. Finally, an introduction to the dry application concept was presented. Since then, the design refinement phase has been completed. This has resulted in a Smart Screen design that meets performance targets both in the dry condition and with taconite slurry flow using PZT motors. Furthermore, this system was successfully demonstrated for the DOE and partner companies at the Coleraine Mineral Research Laboratory in Coleraine, Minnesota.
NASA Technical Reports Server (NTRS)
Phillips, Jennifer K.
1995-01-01
Two of the most popular current implementations of the Message-Passing Interface (MPI) standard were contrasted: MPICH by Argonne National Laboratory, and LAM by the Ohio Supercomputer Center at Ohio State University. A parallel skyline matrix solver was adapted to run in a heterogeneous environment using MPI. The Message-Passing Interface Forum, held in May 1994, led to a specification of library functions that implement the message-passing model of parallel communication. LAM, which creates its own environment, is more robust in a highly heterogeneous network. MPICH uses the environment native to the machine architecture. While neither of these free-ware implementations provides the performance of native message-passing or vendors' implementations, MPICH begins to approach that performance on the SP-2. The machines used in this study were: an IBM RS6000, 3 Sun4s, an SGI, and the IBM SP-2. Each machine is unique and a few machines required specific modifications during installation. When installed correctly, both implementations worked well with only minor problems.
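A minimal point-to-point benchmark of the kind used to compare MPI implementations can be sketched with mpi4py (an assumption on our part; the original study predates it). The round-trip loop below measures latency and effective bandwidth for growing payloads:

```python
# Run with e.g.: mpiexec -n 2 python pingpong.py
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

for size in (1 << 10, 1 << 16, 1 << 20):      # 1 KiB .. 1 MiB payloads
    buf = np.zeros(size, dtype="u1")
    comm.Barrier()
    t0 = time.perf_counter()
    for _ in range(100):
        if rank == 0:
            comm.Send(buf, dest=1)
            comm.Recv(buf, source=1)
        elif rank == 1:
            comm.Recv(buf, source=0)
            comm.Send(buf, dest=0)
    if rank == 0:
        rtt = (time.perf_counter() - t0) / 100   # mean round-trip time
        print(f"{size:>8} bytes  round trip {rtt * 1e6:.1f} us")
```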
Solti, Imre; Cooke, Colin R; Xia, Fei; Wurfel, Mark M
2009-11-01
This paper compares the performance of keyword and machine learning-based chest x-ray report classification for Acute Lung Injury (ALI). ALI mortality is approximately 30 percent. High mortality is, in part, a consequence of delayed manual chest x-ray classification. An automated system could reduce the time to recognize ALI and lead to reductions in mortality. For our study, 96 and 857 chest x-ray reports in two corpora were labeled by domain experts for ALI. We developed a keyword and a Maximum Entropy-based classification system. Word unigram and character n-grams provided the features for the machine learning system. The Maximum Entropy algorithm with character 6-gram achieved the highest performance (Recall=0.91, Precision=0.90 and F-measure=0.91) on the 857-report corpus. This study has shown that for the classification of ALI chest x-ray reports, the machine learning approach is superior to the keyword based system and achieves comparable results to highest performing physician annotators.
Solti, Imre; Cooke, Colin R.; Xia, Fei; Wurfel, Mark M.
2010-01-01
This paper compares the performance of keyword and machine learning-based chest x-ray report classification for Acute Lung Injury (ALI). ALI mortality is approximately 30 percent. High mortality is, in part, a consequence of delayed manual chest x-ray classification. An automated system could reduce the time to recognize ALI and lead to reductions in mortality. For our study, 96 and 857 chest x-ray reports in two corpora were labeled by domain experts for ALI. We developed a keyword and a Maximum Entropy-based classification system. Word unigram and character n-grams provided the features for the machine learning system. The Maximum Entropy algorithm with character 6-gram achieved the highest performance (Recall=0.91, Precision=0.90 and F-measure=0.91) on the 857-report corpus. This study has shown that for the classification of ALI chest x-ray reports, the machine learning approach is superior to the keyword based system and achieves comparable results to highest performing physician annotators. PMID:21152268
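Since maximum entropy classification is equivalent to (multinomial) logistic regression, the paper's best-performing configuration — character 6-gram features feeding a MaxEnt classifier — can be sketched with scikit-learn; the toy reports and labels below are illustrative, not drawn from the study's corpora:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for radiology report text; the study used 96- and
# 857-report corpora labeled by domain experts.
reports = ["bilateral airspace opacities consistent with edema",
           "clear lungs, no acute cardiopulmonary process",
           "diffuse bilateral infiltrates, possible ALI",
           "no focal consolidation or effusion"]
labels = [1, 0, 1, 0]   # 1 = ALI suspected

# Maximum entropy == multinomial logistic regression; character
# 6-grams were the best-performing feature set in the paper.
clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(6, 6)),
    LogisticRegression(max_iter=1000),
)
clf.fit(reports, labels)
print(clf.predict(["new bilateral opacities on chest film"]))
```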
NASA Astrophysics Data System (ADS)
Jabbari, Ali
2018-01-01
Surface inset permanent magnet DC machines can be used as an alternative in automation systems due to their high efficiency and robustness. Magnet segmentation is a common technique to mitigate pulsating torque components in permanent magnet machines. An accurate computation of the air-gap magnetic field distribution is necessary to calculate machine performance. An exact analytical method for magnetic vector potential calculation in surface inset permanent magnet machines considering magnet segmentation is proposed in this paper. The analytical method is based on the resolution of the Laplace and Poisson equations, as well as Maxwell's equations, in polar coordinates using the sub-domain method. One of the main contributions of the paper is the derivation of an expression for the magnetic vector potential in the segmented PM region using hyperbolic functions. The developed method is applied to the performance computation of two prototype surface inset segmented magnet motors under open circuit and on-load conditions. The results of these models are validated against the FEM.
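For reference, the governing equations such sub-domain analyses start from can be written as follows (generic notation, assuming r-independent magnetization; this is the standard starting point, not the paper's exact formulation):

```latex
% Vector potential A_z(r,\theta) in polar coordinates, per sub-domain:
\frac{\partial^2 A_z}{\partial r^2}
  + \frac{1}{r}\frac{\partial A_z}{\partial r}
  + \frac{1}{r^2}\frac{\partial^2 A_z}{\partial \theta^2} = 0
  \quad \text{(air gap and slot regions: Laplace)}

\frac{\partial^2 A_z}{\partial r^2}
  + \frac{1}{r}\frac{\partial A_z}{\partial r}
  + \frac{1}{r^2}\frac{\partial^2 A_z}{\partial \theta^2}
  = -\frac{\mu_0}{r}\left(M_\theta - \frac{\partial M_r}{\partial \theta}\right)
  \quad \text{(magnet region: Poisson)}
```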
Simulation-driven machine learning: Bearing fault classification
NASA Astrophysics Data System (ADS)
Sobie, Cameron; Freitas, Carina; Nicolai, Mike
2018-01-01
Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data is often only valid for the specific circumstances and components for which it was collected. This work directly addresses these challenges for roller bearings with race faults by generating training data using information gained from high-resolution simulations of roller bearing dynamics, which is used to train machine learning algorithms that are then validated against four experimental datasets. Several different machine learning methodologies are compared, ranging from well-established statistical feature-based methods to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter-free method for race fault detection.
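A bare-bones version of such DTW-based classification — a 1-nearest-neighbour rule under the classic dynamic-programming DTW distance — might look like this; the reference signatures below are toy signals standing in for the simulation-generated training data:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic-time-warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(signal, templates):
    """1-NN under DTW: the label of the closest reference signature wins."""
    return min(templates, key=lambda lbl: dtw_distance(signal, templates[lbl]))

# Toy vibration signatures; in the paper the references come from
# high-resolution bearing-dynamics simulations, not measurements.
t = np.linspace(0, 1, 200)
templates = {"healthy": np.sin(2 * np.pi * 5 * t),
             "race_fault": np.sin(2 * np.pi * 5 * t)
                           + (np.sin(2 * np.pi * 37 * t) > 0.95)}
probe = np.sin(2 * np.pi * 5 * t + 0.2) + (np.sin(2 * np.pi * 37 * t + 0.2) > 0.95)
print(classify(probe, templates))
```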
Aggregation of Electric Current Consumption Features to Extract Maintenance KPIs
NASA Astrophysics Data System (ADS)
Simon, Victor; Johansson, Carl-Anders; Galar, Diego
2017-09-01
All electric powered machines offer the possibility of extracting information and calculating Key Performance Indicators (KPIs) from the electric current signal. Depending on the time window, sampling frequency and type of analysis, different indicators from the micro to macro level can be calculated for such aspects as maintenance, production, energy consumption etc. On the micro-level, the indicators are generally used for condition monitoring and diagnostics and are normally based on a short time window and a high sampling frequency. The macro indicators are normally based on a longer time window with a slower sampling frequency and are used as indicators for overall performance, cost or consumption. The indicators can be calculated directly from the current signal but can also be based on a combination of information from the current signal and operational data like rpm, position etc. One or several of those indicators can be used for prediction and prognostics of a machine's future behavior. This paper uses this technique to calculate indicators for maintenance and energy optimization in electric powered machines and fleets of machines, especially machine tools.
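A sketch of what macro-level KPI extraction from a sampled current signal could look like; the supply voltage, thresholds, and KPI definitions are illustrative assumptions, not the authors':

```python
import numpy as np

def current_kpis(i_samples, fs, voltage=400.0):
    """Macro-level KPIs from a sampled current signal (A) at fs (Hz)."""
    i = np.asarray(i_samples, dtype=float)
    rms = np.sqrt(np.mean(i ** 2))                 # loading indicator
    duration_h = len(i) / fs / 3600.0
    energy_kwh = voltage * rms * duration_h / 1e3  # coarse single-phase estimate
    duty = np.mean(np.abs(i) > 0.05 * np.max(np.abs(i)))  # utilization share
    return {"rms_A": rms, "energy_kWh": energy_kwh, "utilization": duty}

fs = 1000.0
t = np.arange(0, 60, 1 / fs)
i = 8 * np.sin(2 * np.pi * 50 * t) * (t % 20 < 12)   # machine idle 40% of cycle
print(current_kpis(i, fs))
```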
Performance and Surface Integrity of Ti6Al4V After Sinking EDM with Special Graphite Electrodes
NASA Astrophysics Data System (ADS)
Amorim, Fred L.; Stedile, Leandro J.; Torres, Ricardo D.; Soares, Paulo C.; Henning Laurindo, Carlos A.
2014-04-01
Titanium and its alloys have high chemical reactivity with most cutting tools, which makes them difficult to work using conventional machining processes. Electrical discharge machining (EDM) emerges as an alternative technique for machining these materials. In this work, the performance of three special grades of graphite as electrodes is investigated when ED-machining Ti6Al4V samples under three different regimes. The main influences of the electrical parameters on material removal rate, volumetric relative wear and surface roughness are discussed. The sample surfaces were evaluated using SEM images, microhardness measurements, and x-ray diffraction (XRD). It was found that the best results for material removal rate, surface roughness, and volumetric relative wear were obtained with the graphite electrode of 10-μm particle size at negative polarity. For all EDM-machined samples characterized by XRD, the presence of titanium carbides was identified. For the finishing EDM regimes, the recast layer presents an increased amount of titanium carbides compared to the semi-finishing and roughing regimes.
Extracting laboratory test information from biomedical text
Kang, Yanna Shen; Kayaalp, Mehmet
2013-01-01
Background: No previous study reported the efficacy of current natural language processing (NLP) methods for extracting laboratory test information from narrative documents. This study investigates the pathology informatics question of how accurately such information can be extracted from text with current tools and techniques, especially machine learning and symbolic NLP methods. The study data came from a text corpus maintained by the U.S. Food and Drug Administration, containing a rich set of information on laboratory tests and test devices. Methods: The authors developed a symbolic information extraction (SIE) system to extract device- and test-specific information about four types of laboratory test entities: specimens, analytes, units of measure and detection limits. They compared the performance of SIE and three prominent machine learning based NLP systems, LingPipe, GATE and BANNER, each implementing a distinct supervised machine learning method: hidden Markov models, support vector machines and conditional random fields, respectively. Results: The machine learning systems recognized laboratory test entities with moderately high recall, but low precision rates. Their recall rates were relatively higher when the number of distinct entity values (e.g., the spectrum of specimens) was very limited or when the lexical morphology of the entity was distinctive (as in units of measure), yet SIE outperformed them with statistically significant margins on extracting specimen, analyte and detection limit information in both precision and F-measure. Its high recall performance was statistically significant on analyte information extraction. Conclusions: Despite its shortcomings against machine learning methods, a well-tailored symbolic system may better discern relevancy among a pile of information of the same type and may outperform a machine learning system by tapping into lexically non-local contextual information such as the document structure. PMID:24083058
Research on computer systems benchmarking
NASA Technical Reports Server (NTRS)
Smith, Alan Jay (Principal Investigator)
1996-01-01
This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance; the performance impact of optimization was studied in the context of our methodology for CPU performance characterization, based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments, as well as smaller ones supported by this grant, are summarized in more detail in this report.
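The merged machine/program characterization reduces to a weighted sum: estimated run time is the dot product of per-operation counts (from the program characterization) and per-operation times (from the machine characterization). A toy illustration, with made-up operation names and timings:

```python
# Operation names, timings, and counts below are illustrative, not measured.
machine_ns_per_op = {"fadd": 2.0, "fmul": 3.0, "fdiv": 20.0, "mem": 5.0}
program_op_counts = {"fadd": 4.2e9, "fmul": 3.9e9, "fdiv": 0.1e9, "mem": 6.5e9}

# Estimated time = sum over operations of (count * per-op time).
t_est = sum(program_op_counts[op] * machine_ns_per_op[op]
            for op in program_op_counts) * 1e-9   # seconds
print(f"estimated run time: {t_est:.1f} s")
```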
Classification of large-sized hyperspectral imagery using fast machine learning algorithms
NASA Astrophysics Data System (ADS)
Xia, Junshi; Yokoya, Naoto; Iwasaki, Akira
2017-07-01
We present a framework of fast machine learning algorithms for large-sized hyperspectral image classification, from a theoretical to a practical viewpoint. In particular, we assess the performance of random forest (RF), rotation forest (RoF), and extreme learning machine (ELM), as well as ensembles of RF and ELM. These classifiers are applied to two large-sized hyperspectral images and compared to support vector machines. To give a quantitative analysis, we pay particular attention to comparing these methods when working with high input dimensions and a limited or sufficient training set. Moreover, other important issues such as computational cost and robustness against noise are also discussed.
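Of the compared classifiers, the extreme learning machine is simple enough to sketch directly: a random hidden layer followed by a single least-squares solve for the output weights. A minimal NumPy version on placeholder spectra (not the study's images):

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer, then one
    least-squares solve for the output weights (no backpropagation)."""
    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = y.max() + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)           # random feature map
        T = np.eye(n_classes)[y]                   # one-hot targets
        self.beta = np.linalg.lstsq(H, T, rcond=None)[0]
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

# Placeholder for pixel spectra: 200 bands, 3 classes.
rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 200))
y = rng.integers(0, 3, 3000)
X[y == 1, :50] += 1.0
X[y == 2, 50:100] += 1.0
print((ELM().fit(X[:2000], y[:2000]).predict(X[2000:]) == y[2000:]).mean())
```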
Mathematical defense method of networked servers with controlled remote backups
NASA Astrophysics Data System (ADS)
Kim, Song-Kyoo
2006-05-01
The networked server defense model focuses on reliability and availability in security respects. The (remote) backup servers are connected by a VPN (Virtual Private Network) over a high-speed optical network and replace broken main servers immediately. The networked servers can be represented as "machines", and the system then deals with a main unreliable machine, a spare machine, and auxiliary spare machines. During vacation periods, when the system performs mandatory routine maintenance, auxiliary machines are used for backups; information on the system is naturally delayed. An analog of the N-policy restricts the usage of auxiliary machines to a reasonable quantity. The results are demonstrated on the network architecture using stochastic optimization techniques.
An assessment of support vector machines for land cover classification
Huang, C.; Davis, L.S.; Townshend, J.R.G.
2002-01-01
The support vector machine (SVM) is a machine learning algorithm with strong theoretical foundations. It has been found competitive with the best available machine learning algorithms in classifying high-dimensional data sets. This paper gives an introduction to the theoretical development of the SVM and an experimental evaluation of its accuracy, stability and training speed in deriving land cover classifications from satellite images. The SVM was compared to three other popular classifiers: the maximum likelihood classifier (MLC), neural network classifiers (NNC) and decision tree classifiers (DTC). The impacts of kernel configuration on the performance of the SVM, and of the selection of training data and input variables on all four classifiers, were also evaluated in this experiment.
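The kernel-configuration evaluation amounts to a grid search over SVM hyper-parameters. A sketch with scikit-learn on synthetic stand-in data (the study used satellite imagery):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for multispectral pixels: high input dimension,
# limited training data, several land-cover classes.
X, y = make_classification(n_samples=600, n_features=30, n_informative=12,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=200, random_state=0)

# Kernel configuration (C, gamma) strongly affects SVM accuracy, which
# is one of the factors the paper evaluates.
search = GridSearchCV(SVC(kernel="rbf"),
                      {"C": [1, 10, 100], "gamma": ["scale", 0.01, 0.001]},
                      cv=5)
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))
```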
Venkatesan, K
2017-07-01
Inconel 718, a high-temperature alloy, is a promising material for high-performance aerospace gas turbine engine components. However, machining the alloy is difficult owing to its immense shear strength, rapid work hardening rate during turning, and low thermal conductivity. Hence, like ceramics and composites, this alloy is considered a difficult-to-turn material. Laser assisted turning has become a promising solution in recent years to lessen cutting stress when turning such materials. This study investigated the influence of the input variables of laser assisted machining on the machinability of Inconel 718. Machining characteristics were compared to analyze the process benefits as the laser machining variables were varied: cutting speeds of 60-150 m/min, feed rates of 0.05-0.125 mm/rev, and laser power between 1200 W and 1300 W. Output characteristics such as force, roughness, tool life and the geometrical characteristics of the chip were investigated and compared with conventional machining without laser power. From the experimental results, at a laser power of 1200 W, laser assisted turning outperforms conventional machining with a 2.10-fold reduction in cutting force, a 46% reduction in surface roughness and a 66% improvement in tool life. Compared to conventional machining, the application of the laser allowed the cutting speed of the carbide tool to be increased to a cutting condition of 150 m/min and 0.125 mm/rev. Microstructural analysis shows no damage to the subsurface of the workpiece.
NASA Astrophysics Data System (ADS)
Iannitti, Gianluca; Bonora, Nicola; Gentile, Domenico; Ruggiero, Andrew; Testa, Gabriel; Gubbioni, Simone
2017-06-01
In this work, the mechanical behavior of Ti-6Al-4V obtained by an additive manufacturing technique was investigated, also considering the build direction. Dog-bone shaped specimens and Taylor cylinders were machined from rods manufactured on an EOSINT M 280 machine, based on the Direct Metal Laser Sintering technique. Tensile tests were performed at strain rates ranging from 5×10⁻⁴ s⁻¹ to 1000 s⁻¹, using an Instron electromechanical machine for quasistatic tests and a direct-tension split Hopkinson bar for dynamic tests. The mechanical strength of the material was described by a Johnson-Cook model modified to account for stress saturation occurring at high strain. Taylor cylinder tests and their corresponding numerical simulations were carried out in order to validate the constitutive model under complex deformation paths, high strain rates, and high temperatures.
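For reference, the standard Johnson-Cook flow stress that the modified model builds on is given below; the saturation modification is indicated only schematically, since the paper's exact form is not reproduced here:

```latex
% Standard Johnson--Cook flow stress:
\sigma = \left(A + B\,\varepsilon_p^{\,n}\right)
         \left(1 + C \ln \frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)
         \left(1 - {T^*}^{\,m}\right),
\qquad
T^* = \frac{T - T_r}{T_m - T_r}

% One possible saturating replacement for the hardening bracket
% (illustrative only, not necessarily the paper's exact modification):
% A + B\left(1 - e^{-\varepsilon_p / \varepsilon_s}\right)
```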
Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1997-01-01
Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan was to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. The work plan has been fully achieved. Two highly accurate, efficient Poisson solvers have been developed and tested, based on two different approaches: (1) adopting a mathematical geometry with a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm and Reduced Parallel Diagonal Dominant (RPDD) algorithm have been carefully studied on different parallel platforms for different applications, and a NASA simulation code developed by Man M. Rai and his colleagues has been parallelized and implemented based on data dependency analysis. These achievements are addressed in detail in the paper.
Thermal Management and Reliability of Automotive Power Electronics and Electric Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narumanchi, Sreekant V; Bennion, Kevin S; Cousineau, Justine E
Low-cost, high-performance thermal management technologies are helping meet aggressive power density, specific power, cost, and reliability targets for power electronics and electric machines. The National Renewable Energy Laboratory is working closely with numerous industry and research partners to help influence the development of components that meet aggressive performance and cost targets, through the development and characterization of cooling technologies and through thermal characterization and improvement of passive stack materials and interfaces. Thermomechanical reliability and lifetime estimation models are important enablers for industry in cost- and time-effective design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, Yang; Song, Shuaiwen; Fu, Haohuan
2014-08-16
Support Vector Machines (SVM) have been widely used in data-mining and Big Data applications as modern commercial databases start to attach increasing importance to analytic capabilities. In recent years, SVM was adapted to the field of High Performance Computing for power/performance prediction, auto-tuning, and runtime scheduling. However, even at the risk of losing prediction accuracy due to insufficient runtime information, researchers can only afford to apply offline model training to avoid significant runtime training overhead. To address the challenges above, we designed and implemented MIC-SVM, a highly efficient parallel SVM for x86-based multi-core and many-core architectures, such as Intel Ivy Bridge CPUs and the Intel Xeon Phi coprocessor (MIC).
A Framework to Guide the Assessment of Human-Machine Systems.
Stowers, Kimberly; Oglesby, James; Sonesh, Shirley; Leyva, Kevin; Iwig, Chelsea; Salas, Eduardo
2017-03-01
We have developed a framework for guiding measurement in human-machine systems. The assessment of safety and performance in human-machine systems often relies on direct measurement, such as tracking reaction time and accidents. However, safety and performance emerge from the combination of several variables. The assessment of precursors to safety and performance are thus an important part of predicting and improving outcomes in human-machine systems. As part of an in-depth literature analysis involving peer-reviewed, empirical articles, we located and classified variables important to human-machine systems, giving a snapshot of the state of science on human-machine system safety and performance. Using this information, we created a framework of safety and performance in human-machine systems. This framework details several inputs and processes that collectively influence safety and performance. Inputs are divided according to human, machine, and environmental inputs. Processes are divided into attitudes, behaviors, and cognitive variables. Each class of inputs influences the processes and, subsequently, outcomes that emerge in human-machine systems. This framework offers a useful starting point for understanding the current state of the science and measuring many of the complex variables relating to safety and performance in human-machine systems. This framework can be applied to the design, development, and implementation of automated machines in spaceflight, military, and health care settings. We present a hypothetical example in our write-up of how it can be used to aid in project success.
Different Techniques For Producing Precision Holes (>20 mm) In Hardened Steel—Comparative Results
NASA Astrophysics Data System (ADS)
Coelho, R. T.; Tanikawa, S. T.
2009-11-01
High speed machining (HSM), or high performance machining, has been one of the most significant recent technological advances. When applied to milling operations, using adequate machines, CAM programs and tooling, it allows cutting hardened steels, which was not feasible just a couple of years ago. The use of very stiff, precise machines has created the possibility of machining holes in hardened steels, such as AISI H13 at 48-50 HRC, using helical interpolation, for example. This process is particularly useful for holes with diameters bigger than commercially available solid carbide drills, around 20 mm or larger. Such holes may require narrow tolerances and a fine surface finish, which can be obtained by end milling operations. The present work compares some of the strategies used to produce such holes by end milling, as well as some techniques employed to finish them, by milling, boring and also by fine grinding on the same machine. Results indicate that it is possible to obtain holes with less than 0.36 μm circularity error, 7.41 μm cylindricity error and 0.12 μm surface roughness Ra. Additionally, there is less possibility of producing heat-affected layers when using this technique.
NASA Astrophysics Data System (ADS)
Xu, Peifeng; Shi, Kai; Sun, Yuxin; Zhua, Huangqiu
2017-05-01
Dual rotor permanent magnet (DRPM) wind power generators using ferrite magnets have the advantages of low cost, high efficiency, and high torque density. How to further improve the performance and reduce the cost of the machine by a proper choice of pole number and slot number is an important problem to be solved in the preliminary design of a DRPM wind generator. This paper presents a comprehensive performance comparison of a DRPM wind generator using ferrite magnets with different slot and pole number combinations. The main winding factors are calculated by means of the star of slots. For the same machine volume and ferrite consumption, the flux linkage, back-electromotive force (EMF), cogging torque, output torque, torque pulsation, and losses are investigated and compared using finite element analysis (FEA). The results show that the slot and pole number combination has an important impact on generator properties.
Computational Performance of a Parallelized Three-Dimensional High-Order Spectral Element Toolbox
NASA Astrophysics Data System (ADS)
Bosshard, Christoph; Bouffanais, Roland; Clémençon, Christian; Deville, Michel O.; Fiétier, Nicolas; Gruber, Ralf; Kehtari, Sohrab; Keller, Vincent; Latt, Jonas
In this paper, a comprehensive performance review of an MPI-based high-order three-dimensional spectral element method C++ toolbox is presented. The focus is on the performance evaluation of several aspects, with a particular emphasis on parallel efficiency. The performance evaluation is analyzed with the help of a time prediction model based on a parameterization of the application and the hardware resources. A tailor-made CFD computation benchmark case is introduced and used to carry out this review, stressing the particular interest of clusters with up to 8192 cores. Some problems in the parallel implementation have been detected and corrected. The theoretical complexities with respect to the number of elements, the polynomial degree, and communication needs are correctly reproduced. It is concluded that this type of code has a nearly perfect speed-up on machines with thousands of cores, and is ready to make the step to next-generation petaflop machines.
Man/Machine Interaction Dynamics And Performance (MMIDAP) capability
NASA Technical Reports Server (NTRS)
Frisch, Harold P.
1991-01-01
The creation of an ability to study interaction dynamics between a machine and its human operator can be approached from a myriad of directions. The Man/Machine Interaction Dynamics and Performance (MMIDAP) project seeks to create an ability to study the consequences of machine design alternatives relative to the performance of both machine and operator. The class of machines to which this study is directed includes those that require the intelligent physical exertions of a human operator. While Goddard's Flight Telerobotics program was expected to be a major user, basic engineering design and biomedical applications reach far beyond telerobotics. The ongoing efforts of GSFC and its university and small business collaborators to integrate both human performance and musculoskeletal databases with the analysis capabilities necessary to enable the study of dynamic actions, reactions, and performance of coupled machine/operator systems are outlined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bunshah, R.F.; Shabaik, A.H.
The process of Activated Reactive Evaporation is used to synthesize superhard materials such as carbides, oxides, nitrides and ultrafine grain cermets. The deposits are characterized by hardness, microstructure, microprobe analysis for chemistry, and lattice parameter measurements. The synthesis and characterization of TiC-Ni cermets and Al2O3 are given. High speed steel tools coated with TiC, TiC-Ni and TaC were tested for machining performance at different speeds and feeds. The machining evaluation and the selection of coatings are based on the rate of deterioration of the coating, tool temperature, and cutting forces. Tool life tests show coated high speed steel tools having 150 to 300% improvement in tool life compared to uncoated tools. Variability in the quality of the ground edge on high speed steel inserts produces great scatter in the machining evaluation data.
Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Wang, Peng; Plimpton, Steven J
The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines: 1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, 2) minimizing the amount of code that must be ported for efficient acceleration, 3) utilizing the available processing power from both many-core CPUs and accelerators, and 4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short-range force calculation on hybrid high performance machines, a new approach for dynamic load balancing of work between CPU and accelerator cores, and the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
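The dynamic CPU/accelerator load balancing can be illustrated by a simple feedback rule that shifts work toward whichever side finished its share faster. This is a schematic of the idea only, not LAMMPS's actual code, and all names and timings are hypothetical:

```python
def rebalance(gpu_fraction, t_gpu, t_cpu, damping=0.5):
    """Shift work toward the faster device.

    t_gpu, t_cpu: measured seconds to process the current split."""
    # Per-unit cost of work on each device under the current split.
    cost_gpu = t_gpu / max(gpu_fraction, 1e-9)
    cost_cpu = t_cpu / max(1.0 - gpu_fraction, 1e-9)
    # The split that would equalize finish times on both sides.
    ideal = cost_cpu / (cost_gpu + cost_cpu)
    new = gpu_fraction + damping * (ideal - gpu_fraction)
    return min(max(new, 0.05), 0.95)   # keep both sides busy

frac = 0.5
for t_gpu, t_cpu in [(0.8, 2.0), (1.0, 1.6), (1.2, 1.3)]:  # fake timings
    frac = rebalance(frac, t_gpu, t_cpu)
    print(f"gpu fraction -> {frac:.2f}")
```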
An application of eddy current damping effect on single point diamond turning of titanium alloys
NASA Astrophysics Data System (ADS)
Yip, W. S.; To, S.
2017-11-01
Titanium alloys such as Ti6Al4V (TC4) are widely applied in many industries. They have superior material properties, including an excellent strength-to-weight ratio and corrosion resistance. However, they are regarded as difficult-to-cut materials; serious tool wear, high cutting vibration and low surface integrity are always involved in machining processes, especially in ultra-precision machining (UPM). In this paper, a novel hybrid machining technology using the eddy current damping effect is introduced to UPM for the first time to suppress machining vibration and improve the machining performance of titanium alloys. A magnetic field was superimposed on the samples during single point diamond turning (SPDT) by placing the samples between two permanent magnets. When the titanium alloys were rotated within the stationary magnetic field in SPDT, an eddy current was generated inside the titanium alloys. The eddy current generates its own magnetic field opposing the external magnetic field, producing a repulsive force that compensates for the machining vibration induced by the turning process. The experimental results showed a remarkable improvement in cutting force variation, a significant reduction in adhesive tool wear and extremely long chip formation in comparison with normal SPDT of titanium alloys, suggesting an enhancement of the machinability of titanium alloys through the eddy current damping effect.
Ji, Renjie; Liu, Yonghong; Diao, Ruiqiang; Xu, Chenchen; Li, Xiaopeng; Cai, Baoping; Zhang, Yanzhen
2014-01-01
Engineering ceramics have been widely used in modern industry for their excellent physical and mechanical properties, but they are difficult to machine owing to their high hardness and brittleness. Electrical discharge machining (EDM) is an appropriate process for machining engineering ceramics provided they are electrically conducting. However, the electrical resistivity of popular engineering ceramics is relatively high, and there has been no research on the relationship between EDM parameters and the electrical resistivity of engineering ceramics. This paper investigates the effects of the electrical resistivity and EDM parameters, such as tool polarity, pulse interval, and electrode material, on the EDM performance of ZnO/Al2O3 ceramic, in terms of material removal rate (MRR), electrode wear ratio (EWR), and surface roughness (SR). The results show that the electrical resistivity and the EDM parameters have a great influence on EDM performance. ZnO/Al2O3 ceramic with an electrical resistivity up to 3410 Ω·cm can be effectively machined by EDM with a copper electrode, negative tool polarity, and a shorter pulse interval. Under most machining conditions, the MRR increases and the SR decreases with decreasing electrical resistivity. Moreover, tool polarity and pulse interval each affect the EWR, and the electrical resistivity and electrode material have a combined effect on the EWR. Furthermore, the EDM performance of ZnO/Al2O3 ceramic with electrical resistivity higher than 687 Ω·cm differs markedly from that with electrical resistivity lower than 687 Ω·cm when the electrode material changes. Microstructural analysis of the machined ZnO/Al2O3 ceramic surface shows that the material is removed by melting, evaporation and thermal spalling, and that material from the working fluid and the graphite electrode can transfer to the workpiece surface during electrical discharge machining of ZnO/Al2O3 ceramic. PMID:25364912
Liu, Rong; Li, Xi; Zhang, Wei; Zhou, Hong-Hao
2015-01-01
Objective Multiple linear regression (MLR) and machine learning techniques in pharmacogenetic algorithm-based warfarin dosing have been reported. However, the performances of these algorithms in racially diverse groups have never been objectively evaluated and compared. In this literature-based study, we compared the performances of eight machine learning techniques with those of MLR in a large, racially diverse cohort. Methods MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied in warfarin dose algorithms in a cohort from the International Warfarin Pharmacogenetics Consortium database. Covariates obtained by stepwise regression from 80% of randomly selected patients were used to develop the algorithms. To compare the performances of these algorithms, the mean percentage of patients whose predicted dose fell within 20% of the actual dose (mean percentage within 20%) and the mean absolute error (MAE) were calculated in the remaining 20% of patients. The performances of these techniques in different races, as well as across dose ranges of therapeutic warfarin, were compared. Robust results were obtained after 100 rounds of resampling. Results BART, MARS and SVR were statistically indistinguishable and significantly outperformed all the other approaches in the whole cohort (MAE: 8.84–8.96 mg/week, mean percentage within 20%: 45.88%–46.35%). In the White population, MARS and BART showed a higher mean percentage within 20% and lower MAE than MLR (all p values < 0.05). In the Asian population, SVR, BART, MARS and LAR performed the same as MLR. MLR and LAR performed best in the Black population. When patients were grouped by warfarin dose range, all machine learning techniques except ANN and LAR showed a significantly higher mean percentage within 20% and lower MAE (all p values < 0.05) than MLR in the low- and high-dose ranges. Conclusion Overall, the machine learning-based techniques BART, MARS and SVR performed better than MLR in warfarin pharmacogenetic dosing. The algorithms' performances differ among races. Moreover, machine learning-based algorithms tended to perform better in the low- and high-dose ranges than MLR. PMID:26305568
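The two comparison metrics are straightforward to compute. A small sketch, with hypothetical doses and predictions:

```python
import numpy as np

def dosing_metrics(y_true, y_pred):
    """MAE and 'percentage within 20%' as used to compare the algorithms."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mae = np.mean(np.abs(y_true - y_pred))
    within20 = np.mean(np.abs(y_pred - y_true) <= 0.2 * y_true) * 100
    return mae, within20

# Hypothetical weekly warfarin doses (mg/week) and model predictions.
actual = np.array([21.0, 35.0, 42.0, 28.0, 56.0])
predicted = np.array([24.5, 33.0, 50.0, 27.0, 44.0])
mae, pct = dosing_metrics(actual, predicted)
print(f"MAE={mae:.2f} mg/week, within 20%: {pct:.0f}%")
```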
Training Needs for High Performance in the Automotive Industry.
ERIC Educational Resources Information Center
Clyne, Barry; And Others
A project was conducted in Australia to identify the training needs of the emerging industry required to support the development of the high performance areas of the automotive machining and reconditioning field especially as it pertained to auto racing. Data were gathered through a literature search, interviews with experts in the field, and…
Research and development of energy-efficient high back-pressure compressor
NASA Astrophysics Data System (ADS)
1983-09-01
Improved-efficiency compressors were developed in four capacity sizes. Changes to the baseline compressor were made to the motors, valve plates, and mufflers. The adoption of a slower running speed compressor required larger displacements to maintain the desired capacity. This involved both bore and stroke modifications. All changes that were made to the compressor are readily adaptable to manufacture. Prototype compressors were built and tested. The largest capacity size (4000 Btu/h) was selected for testing in a vending machine. Additional testing was performed on the prototype compressors in order to rate them on an alternate refrigerant. A market analysis was performed to determine the potential acceptance of the improved-efficiency machines by a vending machine manufacturer, who supplies a retail sales system of a major soft drink company.
Windows .NET Network Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST)
Dowd, Scot E; Zaragoza, Joaquin; Rodriguez, Javier R; Oliver, Melvin J; Payton, Paxton R
2005-01-01
Background BLAST is one of the most common and useful tools for genetic research. This paper describes a software application we have termed Windows .NET Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST), which enhances the BLAST utility by improving usability, fault recovery, and scalability in a Windows desktop environment. Our goal was to develop an easy to use, fault tolerant, high-throughput BLAST solution that incorporates a comprehensive BLAST result viewer with curation and annotation functionality. Results W.ND-BLAST is a comprehensive Windows-based software toolkit that targets researchers, including those with minimal computer skills, and provides the ability to increase the performance of BLAST by distributing BLAST queries to any number of Windows-based machines across local area networks (LAN). W.ND-BLAST provides intuitive Graphic User Interfaces (GUI) for BLAST database creation, BLAST execution, BLAST output evaluation and BLAST result exportation. This software also provides several layers of fault tolerance and fault recovery to prevent loss of data if nodes or master machines fail. This paper lays out the functionality of W.ND-BLAST. W.ND-BLAST displays close to 100% performance efficiency when distributing tasks to 12 remote computers of the same performance class. A high-throughput BLAST job which took 662.68 minutes (11 hours) on one average machine was completed in 44.97 minutes when distributed to 17 nodes, which included lower performance class machines. Finally, there are comprehensive high-throughput BLAST Output Viewer (BOV) and Annotation Engine components, which provide comprehensive exportation of BLAST hits to text files, annotated fasta files, tables, or association files. Conclusion W.ND-BLAST provides an interactive tool that allows scientists to easily utilize their available computing resources for high-throughput and comprehensive sequence analyses. The install package for W.ND-BLAST is freely downloadable from . With registration the software is free; installation, networking, and usage instructions are provided, as well as a support forum. PMID:15819992
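The distribution idea — split the query set into chunks and run them concurrently — can be sketched with a local process pool standing in for W.ND-BLAST's LAN nodes; this assumes the BLAST+ blastn binary is installed, and the database name "mydb" and input file "queries.fasta" are placeholders:

```python
import subprocess
from multiprocessing import Pool
from pathlib import Path

def run_chunk(chunk_path):
    """Run one chunk of queries against the database (BLAST+ assumed)."""
    out = chunk_path.with_suffix(".out")
    subprocess.run(["blastn", "-query", str(chunk_path),
                    "-db", "mydb", "-out", str(out)], check=True)
    return out

def split_fasta(path, n_chunks):
    """Round-robin the FASTA records into n_chunks chunk files."""
    records = Path(path).read_text().split(">")[1:]
    chunks = []
    for k in range(n_chunks):
        p = Path(f"chunk_{k}.fasta")
        p.write_text("".join(">" + r for r in records[k::n_chunks]))
        chunks.append(p)
    return chunks

if __name__ == "__main__":
    with Pool(4) as pool:   # 4 local workers instead of LAN nodes
        results = pool.map(run_chunk, split_fasta("queries.fasta", 4))
    print([str(r) for r in results])
```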
Selecting a Benchmark Suite to Profile High-Performance Computing (HPC) Machines
2014-11-01
architectures. Machines now contain central processing units (CPUs), graphics processing units (GPUs), and many integrated core (MIC) architectures all...evaluate the feasibility and applicability of a new architecture just released to the market. Researchers are often unsure how available resources will...architectures. Having a suite of programs running on different architectures, such as GPUs, MICs, and CPUs, adds complexity and technical challenges
Running VisIt Software on the Peregrine System | High-Performance Computing
kilobyte range. VisIt features a robust remote visualization capability. VisIt can be started on a local machine and used to visualize data on a remote compute cluster. The remote machine must be able to send VisIt module must be loaded as part of this process. To enable remote visualization the 'module load
Sustainable manufacturing by calculating the energy demand during turning of AISI 1045 steel
NASA Astrophysics Data System (ADS)
Nur, R.; Nasrullah, B.; Suyuti, M. A.; Apollo
2018-01-01
Sustainable development is becoming an important issue for many fields, including production, industry, and manufacturing. In order to achieve sustainable development, industry should be able to perform production processes that are sustainable and environmentally friendly. Therefore, there is a need to minimize the energy demand of the machining process. This paper presents a method for calculating the energy consumption of the machining process, in particular the turning process, obtained by summing the energy consumed: the electrical energy consumed during machining preparation, the electrical energy consumed during the cutting process, and the electrical energy needed to produce the cutting tool. A case study was performed on dry turning of mild carbon steel using a coated carbide tool. This approach can be used to determine the total electrical energy consumed in a specific machining process. It is concluded that energy consumption increases when using higher cutting speeds as well as when the feed rate is increased.
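The summation itself is simple. A sketch with hypothetical figures (the paper's measured powers and times are not reproduced here):

```python
def turning_energy_kj(p_standby_w, t_prep_s, p_cut_w, t_cut_s,
                      e_tool_kj, parts_per_tool):
    """Total energy per part = preparation + cutting + the part's share
    of the energy embodied in producing the cutting tool."""
    e_prep = p_standby_w * t_prep_s / 1e3   # kJ during machine preparation
    e_cut = p_cut_w * t_cut_s / 1e3         # kJ during the cutting process
    e_tool = e_tool_kj / parts_per_tool     # kJ amortized over tool life
    return e_prep + e_cut + e_tool

# Hypothetical figures for one AISI 1045 part turned dry with coated carbide.
print(turning_energy_kj(p_standby_w=1200, t_prep_s=90,
                        p_cut_w=4500, t_cut_s=150,
                        e_tool_kj=800, parts_per_tool=40))
```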
Osteoporosis risk prediction using machine learning and conventional methods.
Kim, Sung Kean; Yoo, Tae Keun; Oh, Ein; Kim, Deok Won
2013-01-01
A number of clinical decision tools for osteoporosis risk assessment have been developed to select postmenopausal women for the measurement of bone mineral density. We developed and validated machine learning models with the aim of more accurately identifying the risk of osteoporosis in postmenopausal women, and compared them with a conventional clinical decision tool, the osteoporosis self-assessment tool (OST). We collected medical records from Korean postmenopausal women based on the Korea National Health and Nutrition Surveys (KNHANES V-1). The training data set was used to construct models based on popular machine learning algorithms, namely support vector machines (SVM), random forests (RF), artificial neural networks (ANN), and logistic regression (LR), using various predictors associated with low bone density. The learning models were compared with OST. SVM had a significantly better area under the receiver operating characteristic curve (AUC) than ANN, LR, and OST. Validation on the test set showed that SVM predicted osteoporosis risk with an AUC of 0.827, accuracy of 76.7%, sensitivity of 77.8%, and specificity of 76.0%. We were the first to compare the performance of machine learning and conventional methods for osteoporosis prediction using population-based epidemiological data. The machine learning methods may be effective tools for identifying postmenopausal women at high risk for osteoporosis.
Relative Performance of Hardwood Sawing Machines
Philip H. Steele; Michael W. Wade; Steven H. Bullard; Philip A. Araman
1991-01-01
Only limited information has been available to hardwood sawmillers on the performance of their sawing machines. This study analyzes a large database of individual machine studies to provide detailed information on 6 machine types. These machine types were band headrig, circular headrig, band linebar resaw, vertical band splitter resaw, single arbor gang resaw and...
NASA Astrophysics Data System (ADS)
Boilard, Patrick
Even though powder metallurgy (P/M) is a near net shape process, a large number of parts still require one or more machining operations during their production and/or finishing. The main objectives of the work presented in this thesis are centered on the development of blends with enhanced machinability, as well as helping to define and characterize the machinability of P/M parts. Machinability can be enhanced in various ways, through the use of machinability additives and by decreasing the porosity of the parts. These different ways of enhancing machinability have been investigated thoroughly, by systematically planning and preparing series of samples in order to obtain valid and repeatable results leading to meaningful conclusions relevant to the P/M domain. Results obtained during the course of the work are divided into three main chapters: (1) the effect of machining parameters on machinability, (2) the effect of additives on machinability, and (3) the development and characterization of high density parts obtained by liquid phase sintering. Regarding the effect of machining parameters on machinability, studies were performed on parameters such as rotating speed, feed, tool position and tool diameter. Optimal cutting parameters were found for drilling operations performed on a standard FC-0208 blend, for different machinability criteria. Moreover, the study of material removal rates shows the sensitivity of the machinability criteria to different machining parameters and indicates that thrust force is more consistent than tool wear and the slope of the drillability curve in characterizing machinability. The chapter discussing the effect of various additives on machinability reveals many interesting results. First, work carried out on MoS2 additions reveals the dissociation of this additive and the creation of metallic sulphides (namely CuxS sulphides) when copper is present. Results also show that it is possible to reduce the amount of MoS2 in the blend so as to lower the dimensional change and the cost (blend Mo8A), while enhancing machinability and keeping hardness values within the same range (70 HRB). Second, adding enstatite (MgO·SiO2) permits the observation of the mechanisms occurring with this additive. It is found that the stability of enstatite limits the diffusion of graphite during sintering, leading to the presence of free graphite in the pores, thus enhancing machinability. Furthermore, a lower amount of graphite in the matrix leads to a lower hardness, which is also beneficial to machinability. It is also found that the presence of copper enhances the diffusion of graphite through the formation of a liquid phase during sintering. With the objective of improving machinability by reaching higher densities, blends were developed for densification through liquid phase sintering. High density samples (>7.5 g/cm3) are obtained for blends prepared with Fe-C-P constituents, namely with 0.5%P and 2.4%C. By systematically studying the effect of different parameters, the importance of the chemical composition (mainly the carbon content) and of the sintering cycle (particularly the cooling rate) is demonstrated. Moreover, the various heat treatments studied illustrate the different microstructures achievable for this system, showing various amounts of cementite, pearlite and free graphite.
Although machinability is limited for samples containing large amounts of cementite, it can be greatly improved with very slow cooling, leading to graphitization of the carbon in the presence of phosphorus. Adequate control of the sintering cycle for samples made from FGS1625 powder leads to high density (≥7.0 g/cm3) microstructures containing various amounts of pearlite, ferrite and free graphite. Obtaining ferritic microstructures with free graphite designed for very high machinability (tool wear <1.0%) or fine pearlitic microstructures with excellent mechanical properties (transverse rupture strength >1600 MPa) is therefore possible. These results show that improvement of machinability through higher densities is limited by microstructure. Indeed, for the studied samples, microstructure is dominant in determining machinability, far more important than density, judging for example by the influence of cementite or of the volume fraction of free graphite on machinability. (Abstract shortened by UMI.)
Computer-aided design studies of the homopolar linear synchronous motor
NASA Astrophysics Data System (ADS)
Dawson, G. E.; Eastham, A. R.; Ong, R.
1984-09-01
The linear induction motor (LIM), as an urban transit drive, can provide good grade-climbing capabilities and propulsion/braking performance that is independent of steel wheel-rail adhesion. In view of its 10-12 mm airgap, the LIM is characterized by a low power factor-efficiency product of order 0.4. A synchronous machine offers high efficiency and controllable power factor. An assessment of the linear homopolar configuration of this machine is presented as an alternative to the LIM. Computer-aided design studies using the finite element technique have been conducted to identify a suitable machine design for urban transit propulsion.
Installé, Arnaud Jf; Van den Bosch, Thierry; De Moor, Bart; Timmerman, Dirk
2014-10-20
Using machine-learning techniques, clinical diagnostic model research extracts diagnostic models from patient data. Traditionally, patient data are often collected using electronic Case Report Form (eCRF) systems, while mathematical software is used for analyzing these data using machine-learning techniques. Due to the lack of integration between eCRF systems and mathematical software, extracting diagnostic models is a complex, error-prone process. Moreover, due to the complexity of this process, it is usually only performed once, after a predetermined number of data points have been collected, without insight into the predictive performance of the resulting models. The objective of the Clinical Data Miner (CDM) software framework is to offer an eCRF system with integrated data preprocessing and machine-learning libraries, improving the efficiency of the clinical diagnostic model research workflow, and to enable optimization of patient inclusion numbers through study performance monitoring. The CDM software framework was developed using a test-driven development (TDD) approach, to ensure high software quality. Architecturally, CDM's design is split over a number of modules, to ensure future extensibility. The TDD approach has enabled us to deliver high software quality. CDM's eCRF Web interface is in active use by the studies of the International Endometrial Tumor Analysis consortium, with over 4000 enrolled patients, and more studies planned. Additionally, a derived user interface has been used in six separate interrater agreement studies. CDM's integrated data preprocessing and machine-learning libraries simplify some otherwise manual and error-prone steps in the clinical diagnostic model research workflow. Furthermore, CDM's libraries provide study coordinators with a method to monitor a study's predictive performance as patient inclusions increase. To our knowledge, CDM is the only eCRF system integrating data preprocessing and machine-learning libraries. This integration improves the efficiency of the clinical diagnostic model research workflow. Moreover, by simplifying the generation of learning curves, CDM enables study coordinators to assess more accurately when data collection can be terminated, resulting in better models or lower patient recruitment costs.
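The learning-curve monitoring idea above can be illustrated with generic tools. A minimal sketch, assuming scikit-learn and simulated data standing in for eCRF records (this is not CDM's actual API):

# Hypothetical sketch of learning-curve-based inclusion monitoring;
# a plateauing curve suggests further inclusions add little predictive value.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Simulated "patient" data standing in for eCRF records.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

# Cross-validated performance at increasing inclusion numbers.
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 8), cv=5, scoring="roc_auc")

for n, s in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:4d} patients included -> mean validation AUC {s:.3f}")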
Khondoker, Mizanur; Dobson, Richard; Skirrow, Caroline; Simmons, Andrew; Stahl, Daniel
2016-10-01
Recent literature on the comparison of machine learning methods has raised questions about the neutrality, unbiasedness and utility of many comparative studies. Reporting of results on favourable datasets and sampling error in the estimated performance measures based on single samples are thought to be the major sources of bias in such comparisons. Better performance in one or a few instances does not necessarily imply better performance on average or at the population level, and simulation studies may be a better alternative for objectively comparing the performances of machine learning algorithms. We compare the classification performance of a number of important and widely used machine learning algorithms, namely the Random Forests (RF), Support Vector Machines (SVM), Linear Discriminant Analysis (LDA) and k-Nearest Neighbour (kNN). Using massively parallel processing on high-performance supercomputers, we compare the generalisation errors at various combinations of levels of several factors: number of features, training sample size, biological variation, experimental variation, effect size, replication and correlation between features. For smaller numbers of correlated features (not exceeding approximately half the sample size), LDA was found to be the method of choice in terms of average generalisation errors as well as stability (precision) of error estimates. SVM (with RBF kernel) outperforms LDA as well as RF and kNN by a clear margin as the feature set gets larger provided the sample size is not too small (at least 20). The performance of kNN also improves as the number of features grows and surpasses that of LDA and RF unless the data variability is too high and/or effect sizes are too small. RF was found to outperform only kNN in some instances where the data are more variable and have smaller effect sizes, in which cases it also provides more stable error estimates than kNN and LDA. Applications to a number of real datasets supported the findings from the simulation study. © The Author(s) 2013.
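A minimal simulation in the spirit of the comparison above can be set up as follows (assumed, simplified factor design, not the authors' exact protocol), using scikit-learn implementations of the four classifiers:

# Replicated simulations reduce the sampling error the abstract warns about.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM-RBF": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}
errors = {name: [] for name in models}
for rep in range(20):
    X, y = make_classification(n_samples=200, n_features=50,
                               n_informative=10, random_state=rep)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5,
                                          random_state=rep)
    for name, clf in models.items():
        errors[name].append(1.0 - clf.fit(Xtr, ytr).score(Xte, yte))

for name, errs in errors.items():
    print(f"{name:8s} mean generalisation error {np.mean(errs):.3f} "
          f"(sd {np.std(errs):.3f})")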
New Technique of High-Performance Torque Control Developed for Induction Machines
NASA Technical Reports Server (NTRS)
Kenny, Barbara H.
2003-01-01
Two forms of high-performance torque control for motor drives have been described in the literature: field orientation control and direct torque control. Field orientation control has been the method of choice for previous NASA electromechanical actuator research efforts with induction motors. Direct torque control has the potential to offer some advantages over field orientation, including ease of implementation and faster response. However, the most common form of direct torque control is not suitable for the high-speed, low-stator-flux linkage induction machines designed for electromechanical actuators with the presently available sample rates of digital control systems (higher sample rates are required). In addition, this form of direct torque control is not suitable for the addition of a high-frequency carrier signal necessary for the "self-sensing" (sensorless) position estimation technique. This technique enables low- and zero-speed position sensorless operation of the machine. Sensorless operation is desirable to reduce the number of necessary feedback signals and transducers, thus improving the reliability and reducing the mass and volume of the system. This research was directed at developing an alternative form of direct torque control known as a "deadbeat," or inverse model, solution. This form uses pulse-width modulation of the voltage applied to the machine, thus reducing the necessary sample and switching frequency for the high-speed NASA motor. In addition, the structure of the deadbeat form allows the addition of the high-frequency carrier signal so that low- and zero-speed sensorless operation is possible. The new deadbeat solution is based on using the stator and rotor flux as state variables. This choice of state variables leads to a simple graphical representation of the solution as the intersection of a constant torque line with a constant stator flux circle. Previous solutions have been expressed only in complex mathematical terms without a method to clearly visualize the solution. The graphical technique allows a more insightful understanding of the operation of the machine under various conditions.
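For intuition, the geometry of that graphical solution can be reproduced numerically. A toy sketch with assumed constants (the torque constant and flux values are illustrative, not the NASA machine's parameters): the commanded torque fixes a line in the stator-flux plane and the commanded flux magnitude fixes a circle; their intersection is the deadbeat target.

import numpy as np

k = 1.5          # torque constant lumping pole pairs and inductances (assumed)
lam_r = np.array([0.8, 0.0])   # rotor flux vector, Wb (assumed)
T_cmd = 0.6      # commanded torque, N*m
lam_s_mag = 0.9  # commanded stator flux magnitude, Wb

# Torque ~ k * (lam_r x lam_s) = k * (lam_r[0]*y - lam_r[1]*x).
# With lam_r on the x-axis, the constant-torque locus is the horizontal line:
y = T_cmd / (k * lam_r[0])
# Intersect with the circle x^2 + y^2 = lam_s_mag^2 (forward solution):
x = np.sqrt(lam_s_mag**2 - y**2)
lam_s = np.array([x, y])
print("stator flux target:", lam_s)
print("check torque:", k * (lam_r[0]*lam_s[1] - lam_r[1]*lam_s[0]))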
NASA Astrophysics Data System (ADS)
Tillmann, W.; Schaak, C.; Biermann, D.; Aßmuth, R.; Goeke, S.
2017-03-01
Cemented carbide (hard metal) cutting tools are the first choice to machine hard materials or to conduct high performance cutting processes. The main advantages of cemented carbide cutting tools are their high wear resistance (hardness) and good high-temperature strength. In contrast, cemented carbide cutting tools are characterized by low toughness and entail higher production costs, especially due to limited raw material resources. Usually, cemented carbide cutting tools are produced by means of powder metallurgical processes. Compared to conventional manufacturing routes, these processes are more expensive and only a limited number of geometries can be realized. Furthermore, post-processing and preparing the cutting edges in order to achieve high performance tools is often required. In the present paper, an alternative method to substitute solid cemented carbide cutting tools is presented. Cutting tools made of conventional high speed steels (HSS) were coated with thick WC-Co (88/12) layers by means of thermal spraying (HVOF). The challenge is to obtain a dense, homogeneous, and near-net-shape coating on the flanks and the cutting edge. For this purpose, different coating strategies were realized using an industrial robot. The coating properties were subsequently investigated. After this initial step, the surfaces of the cutting tools were ground and selected cutting edges were prepared by means of wet abrasive jet machining to achieve a smooth and round micro shape. Machining tests were conducted with these coated, ground and prepared cutting tools. The occurring wear phenomena were analyzed and compared to conventional HSS cutting tools. Overall, the results of the experiments proved that the coating withstands mechanical stresses during machining. In the conducted experiments, the coated cutting tools showed less wear than conventional HSS cutting tools. With respect to the initial wear resistance, additional benefits can be obtained by preparing the cutting edge by means of wet abrasive jet machining.
Cursor control by Kalman filter with a non-invasive body–machine interface
Seáñez-González, Ismael; Mussa-Ivaldi, Ferdinando A
2015-01-01
Objective We describe a novel human–machine interface for the control of a two-dimensional (2D) computer cursor using four inertial measurement units (IMUs) placed on the user’s upper-body. Approach A calibration paradigm where human subjects follow a cursor with their body as if they were controlling it with their shoulders generates a map between shoulder motions and cursor kinematics. This map is used in a Kalman filter to estimate the desired cursor coordinates from upper-body motions. We compared cursor control performance in a centre-out reaching task performed by subjects using different amounts of information from the IMUs to control the 2D cursor. Main results Our results indicate that taking advantage of the redundancy of the signals from the IMUs improved overall performance. Our work also demonstrates the potential of non-invasive IMU-based body–machine interface systems as an alternative or complement to brain–machine interfaces for accomplishing cursor control in 2D space. Significance The present study may serve as a platform for people with high tetraplegia to control assistive devices such as powered wheelchairs using a joystick. PMID:25242561
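The decoding step described above can be sketched in a few lines. A minimal Kalman-filter sketch with assumed dimensions and noise levels (the calibration map H and a constant-velocity state model are stand-ins, not the authors' fitted values):

import numpy as np

dt = 0.02
# State: [x, y, vx, vy]; constant-velocity kinematic model (assumed).
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.random.RandomState(0).randn(8, 4) * 0.5  # stand-in calibration map
Q = np.eye(4) * 1e-4   # process noise (assumed)
R = np.eye(8) * 1e-2   # observation noise (assumed)

x, P = np.zeros(4), np.eye(4)

def kalman_step(x, P, z):
    # Predict cursor state forward one sample.
    x = A @ x
    P = A @ P @ A.T + Q
    # Update with the IMU observation z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

z = H @ np.array([0.1, 0.2, 0.0, 0.0])  # synthetic IMU observation
x, P = kalman_step(x, P, z)
print("estimated cursor state:", x)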
Learning algorithms for human-machine interfaces.
Danziger, Zachary; Fishbach, Alon; Mussa-Ivaldi, Ferdinando A
2009-05-01
The goal of this study is to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user and the controlled device. To evaluate these algorithms, we have developed a simple experimental framework. Subjects wear an instrumented data glove that records finger motions. The high-dimensional glove signals remotely control the joint angles of a simulated planar two-link arm on a computer screen, which is used to acquire targets. A machine learning algorithm was applied to adaptively change the transformation between finger motion and the simulated robot arm. This algorithm was either LMS gradient descent or the Moore-Penrose (MP) pseudoinverse transformation. Both algorithms modified the glove-to-joint angle map so as to reduce the endpoint errors measured in past performance. The MP group performed worse than the control group (subjects not exposed to any machine learning), while the LMS group outperformed the control subjects. However, the LMS subjects failed to achieve better generalization than the control subjects, and after extensive training converged to the same level of performance as the control subjects. These results highlight the limitations of coadaptive learning using only endpoint error reduction.
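The two map-update rules contrasted above can be sketched on toy data (dimensions and noise assumed; this is not the authors' experimental code):

import numpy as np

rng = np.random.RandomState(1)
G = rng.randn(200, 19)                 # glove signals (e.g., 19 sensors)
W_true = rng.randn(19, 2)
Theta = G @ W_true + 0.05 * rng.randn(200, 2)  # observed joint angles

# Moore-Penrose refit: one-shot least-squares solution on past performance.
W_mp = np.linalg.pinv(G) @ Theta

# LMS: incremental gradient steps on the same error, sample by sample.
W_lms = np.zeros((19, 2))
eta = 1e-3
for g, th in zip(G, Theta):
    err = th - g @ W_lms
    W_lms += eta * np.outer(g, err)

for name, W in [("pseudoinverse", W_mp), ("LMS", W_lms)]:
    mse = np.mean((G @ W - Theta) ** 2)
    print(f"{name:13s} endpoint MSE {mse:.4f}")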
NASA Astrophysics Data System (ADS)
Bilalic, Rusmir
A novel application of support vector machines (SVMs), artificial neural networks (ANNs), and Gaussian processes (GPs) for machine learning (GPML) to model microcontroller unit (MCU) upset due to intentional electromagnetic interference (IEMI) is presented. In this approach, an MCU performs a counting operation (0-7) while electromagnetic interference in the form of a radio frequency (RF) pulse is direct-injected into the MCU clock line. Injection times with respect to the clock signal are the clock low, clock rising edge, clock high, and the clock falling edge periods in the clock window during which the MCU is performing initialization and executing the counting procedure. The intent is to cause disruption in the counting operation and model the probability of effect (PoE) using machine learning tools. Five experiments were executed as part of this research, each of which contained a set of 38,300 training points and 38,300 test points, for a total of 383,000 points, with the following experiment variables: injection times with respect to the clock signal, injected RF power, injected RF pulse width, and injected RF frequency. For the 191,500 training points, the average training error was 12.47%, while for the 191,500 test points the average test error was 14.85%, meaning that on average, the machine was able to predict MCU upset with an 85.15% accuracy. Leaving out the results for the worst-performing model (SVM with a linear kernel), the test prediction accuracy for the remaining machines is almost 89%. All three machine learning methods (ANNs, SVMs, and GPML) showed excellent and consistent results in their ability to model and predict the PoE on an MCU due to IEMI. The GP approach performed best during training with a 7.43% average training error, while the ANN technique was most accurate during the test with a 10.80% error.
NASA Astrophysics Data System (ADS)
Kameda, Takao; Sugino, Naoto; Takei, Satoshi
2016-10-01
A shear viscosity measurement device was built to evaluate the injection molding workability of high-performance resins. By measuring inside the plasticization cylinder of an injection molding machine, shear rates from 10 to 10000 1/s could be observed, higher than those reachable with a rotary rheometer. The measurements extrapolated the results obtained with the rotary rheometer.
175Hp contrarotating homopolar motor design report
NASA Astrophysics Data System (ADS)
Cannell, Michael J.; Drake, John L.; McConnell, Richard A.; Martino, William R.
1994-06-01
A normally conducting contrarotating homopolar motor has been designed and constructed. The reaction torque, in the outer rotor, from the inner rotor is utilized to produce true contrarotation. The machine utilizes liquid cooled conductors, high performance liquid metal current collectors, and ferrous conductors in the active region. The basic machine output is 175 hp at ±1,200 rpm with an input of 4 volts and 35,000 amps.
Nanocomposites for Machining Tools
Loginov, Pavel; Mishnaevsky, Leon; Levashov, Evgeny
2017-01-01
Machining tools are used in many areas of production. To a considerable extent, the performance characteristics of the tools determine the quality and cost of obtained products. The main materials used for producing machining tools are steel, cemented carbides, ceramics and superhard materials. A promising way to improve the performance characteristics of these materials is to design new nanocomposites based on them. The application of micromechanical modeling during the elaboration of composite materials for machining tools can reduce the financial and time costs for development of new tools, with enhanced performance. This article reviews the main groups of nanocomposites for machining tools and their performance. PMID:29027926
Research on precision grinding technology of large scale and ultra thin optics
NASA Astrophysics Data System (ADS)
Zhou, Lian; Wei, Qiancai; Li, Jie; Chen, Xianhua; Zhang, Qinghua
2018-03-01
The flatness and parallelism errors of large-scale, ultra-thin optics have an important influence on subsequent polishing efficiency and accuracy. In order to realize high-precision grinding of these elements in the ductile regime, a low-deformation vacuum chuck was designed first, clamping the optics with high supporting rigidity over the full aperture. The optics were then plane-ground under vacuum adsorption. After machining, the vacuum system was turned off and the form error of the optics was measured on-machine with a displacement sensor after elastic restitution. The flatness was converged with high accuracy by compensation machining, whose trajectories were generated from the measurement result. To obtain high parallelism, the optics were turned over and compensation-ground using the form error of the vacuum chuck. Finally, a grinding experiment on large-scale, ultra-thin fused silica optics with an aperture of 430 mm × 430 mm × 10 mm was performed. The best P-V flatness of the optics was below 3 μm, and the parallelism was below 3″. This machining technique has been applied in batch grinding of large-scale, ultra-thin optics.
Yamaguchi, Akemi; Matsuda, Kazuyuki; Sueki, Akane; Taira, Chiaki; Uehara, Masayuki; Saito, Yasunori; Honda, Takayuki
2015-08-25
Reverse transcription (RT)-nested polymerase chain reaction (PCR) is a time-consuming procedure because it has several handling steps and is associated with the risk of cross-contamination during each step. Therefore, a rapid and sensitive one-step RT-nested PCR was developed that could be performed in a single tube using a droplet-PCR machine. The K562 BCR-ABL mRNA-positive cell line as well as bone marrow aspirates from 5 patients with chronic myelogenous leukemia (CML) and 5 controls without CML were used. We evaluated one-step RT-nested PCR using the droplet-PCR machine. One-step RT-nested PCR performed in a single tube using the droplet-PCR machine enabled the detection of BCR-ABL mRNA within 40 min, which was 1000-fold superior to conventional RT-nested PCR performed in three steps in separate tubes. The sensitivity of the one-step RT-nested PCR was 0.001%, with sample reactivity comparable to that of the conventional assay. One-step RT-nested PCR was developed using the droplet-PCR machine, which enabled all reactions to be performed in a single tube accurately and rapidly and with high sensitivity. This one-step RT-nested PCR may be applicable to a wide spectrum of genetic tests in clinical laboratories. Copyright © 2015 Elsevier B.V. All rights reserved.
Three-Dimensional High-Lift Analysis Using a Parallel Unstructured Multigrid Solver
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1998-01-01
A directional implicit unstructured agglomeration multigrid solver is ported to shared and distributed memory massively parallel machines using the explicit domain-decomposition and message-passing approach. Because the algorithm operates on local implicit lines in the unstructured mesh, special care is required in partitioning the problem for parallel computing. A weighted partitioning strategy is described which avoids breaking the implicit lines across processor boundaries, while incurring minimal additional communication overhead. Good scalability is demonstrated on a 128 processor SGI Origin 2000 machine and on a 512 processor CRAY T3E machine for reasonably fine grids. The feasibility of performing large-scale unstructured grid calculations with the parallel multigrid algorithm is demonstrated by computing the flow over a partial-span flap wing high-lift geometry on a highly resolved grid of 13.5 million points in approximately 4 hours of wall clock time on the CRAY T3E.
WORMHOLE: Novel Least Diverged Ortholog Prediction through Machine Learning
Sutphin, George L.; Mahoney, J. Matthew; Sheppard, Keith; Walton, David O.; Korstanje, Ron
2016-01-01
The rapid advancement of technology in genomics and targeted genetic manipulation has made comparative biology an increasingly prominent strategy to model human disease processes. Predicting orthology relationships between species is a vital component of comparative biology. Dozens of strategies for predicting orthologs have been developed using combinations of gene and protein sequence, phylogenetic history, and functional interaction with progressively increasing accuracy. A relatively new class of orthology prediction strategies combines aspects of multiple methods into meta-tools, resulting in improved prediction performance. Here we present WORMHOLE, a novel ortholog prediction meta-tool that applies machine learning to integrate 17 distinct ortholog prediction algorithms to identify novel least diverged orthologs (LDOs) between 6 eukaryotic species—humans, mice, zebrafish, fruit flies, nematodes, and budding yeast. Machine learning allows WORMHOLE to intelligently incorporate predictions from a wide-spectrum of strategies in order to form aggregate predictions of LDOs with high confidence. In this study we demonstrate the performance of WORMHOLE across each combination of query and target species. We show that WORMHOLE is particularly adept at improving LDO prediction performance between distantly related species, expanding the pool of LDOs while maintaining low evolutionary distance and a high level of functional relatedness between genes in LDO pairs. We present extensive validation, including cross-validated prediction of PANTHER LDOs and evaluation of evolutionary divergence and functional similarity, and discuss future applications of machine learning in ortholog prediction. A WORMHOLE web tool has been developed and is available at http://wormhole.jax.org/. PMID:27812085
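The meta-tool idea above can be sketched conceptually (assumed data layout and simulated labels, not the WORMHOLE pipeline itself): binary calls from several ortholog predictors become features for a classifier that outputs an aggregate LDO confidence.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
n_pairs, n_predictors = 1000, 17
# Each row: one candidate gene pair; each column: one algorithm's 0/1 call.
calls = rng.binomial(1, 0.3, size=(n_pairs, n_predictors))
# Training labels, e.g. curated LDO annotations (simulated here).
labels = (calls.sum(axis=1) + rng.randn(n_pairs) > 6).astype(int)

meta = RandomForestClassifier(n_estimators=300, random_state=0)
meta.fit(calls, labels)
confidence = meta.predict_proba(calls[:5])[:, 1]
print("aggregate LDO confidence for first pairs:", np.round(confidence, 2))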
Multiple Cylinder Free-Piston Stirling Machinery
NASA Astrophysics Data System (ADS)
Berchowitz, David M.; Kwon, Yong-Rak
For piston-cylinder type machinery, there is a point in capacity or power beyond which increasing the number of piston-cylinder assemblies improves the specific power. In the case of Stirling machinery, where primary energy is transferred across the casing wall of the machine, this consideration is even more important. This is due primarily to the difference in scaling of basic power and the required heat transfer. Heat transfer is found to be progressively limited as the size of the machine increases. Multiple cylinder machines tend to preserve the surface area to volume ratio at more favorable levels. In addition, the spring effect of the working gas in the so-called alpha configuration is often sufficient to provide a high frequency resonance point that improves the specific power. There are a number of possible multiple cylinder configurations. The simplest is an opposed pair of piston-displacer machines (beta configuration). A three-cylinder machine requires stepped pistons to obtain proper volume phase relationships. Four to six cylinder configurations are also possible. A small demonstrator inline four cylinder alpha machine has been built to demonstrate both cooling operation and power generation. Data from this machine verify theoretical expectations and are used to extrapolate the performance of future machines. Vibration levels are discussed and it is argued that some multiple cylinder machines have no linear component to the casing vibration but may have a nutating couple. Example applications are discussed, ranging from general purpose coolers, computer cooling, and exhaust heat power extraction to some high power engines.
Energy harvesting using AC machines with high effective pole count
NASA Astrophysics Data System (ADS)
Geiger, Richard Theodore
In this thesis, ways to improve the power conversion of rotating generators at low rotor speeds in energy harvesting applications were investigated. One method is to increase the pole count, which increases the generator back-emf without also increasing the I²R losses, thereby increasing both torque density and conversion efficiency. One machine topology that has a high effective pole count is a hybrid "stepper" machine. However, the large self inductance of these machines decreases their power factor and hence the maximum power that can be delivered to a load. This effect can be cancelled by the addition of capacitors in series with the stepper windings. A circuit was designed and implemented to automatically vary the series capacitance over the entire speed range investigated. The addition of the series capacitors improved the power output of the stepper machine by up to 700%. At low rotor speeds, with the addition of series capacitance, the power output of the hybrid "stepper" was more than 200% that of a similarly sized PMDC brushed motor. Finally, in this thesis a hybrid lumped parameter / finite element model was used to investigate the impact of number, shape and size of the rotor and stator teeth on machine performance. A typical off-the-shelf hybrid stepper machine has significant cogging torque by design. This cogging torque is a major problem in most small energy harvesting applications. In this thesis it was shown that the cogging and ripple torque can be dramatically reduced. These findings confirm that high-pole-count topologies, and specifically the hybrid stepper configuration, are an attractive choice for energy harvesting applications.
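A back-of-the-envelope sketch of the series-compensation idea, with assumed machine constants: the capacitance that cancels the winding inductance at a given electrical frequency is C = 1/(omega^2 L), which is why the thesis' circuit must vary the capacitance with speed.

import math

L = 0.12          # winding self-inductance, H (assumed)
pole_pairs = 50   # high effective pole count (assumed)

for rpm in (30, 60, 120, 300):
    f_elec = pole_pairs * rpm / 60.0        # electrical frequency, Hz
    omega = 2 * math.pi * f_elec
    C = 1.0 / (omega ** 2 * L)              # resonance condition
    print(f"{rpm:4d} rpm -> {f_elec:6.1f} Hz -> series C = {C*1e6:8.2f} uF")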
Tug-Of-War Model for Two-Bandit Problem
NASA Astrophysics Data System (ADS)
Kim, Song-Ju; Aono, Masashi; Hara, Masahiko
The amoeba of the true slime mold Physarum polycephalum shows high computational capabilities. In the so-called amoeba-based computing, some computing tasks including combinatorial optimization are performed by the amoeba instead of a digital computer. We expect that there must be problems living organisms are good at solving. The “multi-armed bandit problem” is likely one such problem. Consider a number of slot machines. Each of the machines has an arm which gives a player a reward with a certain probability when pulled. The problem is to determine the optimal strategy for maximizing the total reward sum after a certain number of trials. To maximize the total reward sum, it is necessary to judge correctly and quickly which machine has the highest reward probability. Therefore, the player should explore many machines to gather much knowledge on which machine is the best, but should not fail to exploit the reward from the known best machine. We consider that living organisms follow some efficient method to solve the problem.
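The exploration-exploitation trade-off can be made concrete with a minimal two-armed bandit simulation. The sketch below uses epsilon-greedy as a familiar baseline; the tug-of-war model itself is a different, amoeba-inspired decision rule and is not implemented here.

import random

p = [0.3, 0.7]              # unknown reward probabilities of the two machines
counts = [0, 0]
rewards = [0.0, 0.0]
epsilon, total = 0.1, 0.0
random.seed(0)

for t in range(10000):
    if random.random() < epsilon:        # explore
        arm = random.randrange(2)
    else:                                # exploit current best estimate
        est = [rewards[i] / counts[i] if counts[i] else 0.0 for i in range(2)]
        arm = max(range(2), key=lambda i: est[i])
    r = 1.0 if random.random() < p[arm] else 0.0
    counts[arm] += 1
    rewards[arm] += r
    total += r

print("pulls per machine:", counts, "total reward:", total)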
Study of the AC machines winding having fractional q
NASA Astrophysics Data System (ADS)
Bespalov, V. Y.; Sidorov, A. O.
2018-02-01
Winding schemes with a fractional number of slots per pole and phase q have long been known and used. However, the literature on low-noise machine design recommends against their use. Nevertheless, fractional-q windings have been realized in many applications of special AC electrical machines, improving their performance, including vibroacoustic behaviour. This paper presents a harmonic analysis of windings with integer and fractional q in permanent magnet synchronous motors, compares their characteristics, and identifies the frequencies of subharmonics. An optimal winding pitch is found that reduces the amplitudes of subharmonics. Distribution factors for subharmonics, fractional and high-order harmonics are calculated and the results are analysed, leading to recommendations on how to calculate distribution factors for different harmonics when q is fractional.
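As a worked example of the distribution-factor calculation (standard textbook relation, with assumed machine numbers rather than the paper's motor): a fractional q = z/n winding in lowest terms behaves like an integer winding with z coil groups, giving k_d(nu) = sin(nu*pi/(2m)) / (z*sin(nu*pi/(2*m*z))) for harmonic order nu and phase count m.

import math
from fractions import Fraction

m = 3                      # phases
q = Fraction(3, 2)         # slots per pole and phase (assumed)
z = q.numerator            # equivalent coil-group count

for nu in (1, 5, 7, 11, 13):   # fundamental and typical higher harmonics
    kd = math.sin(nu * math.pi / (2 * m)) / (z * math.sin(nu * math.pi / (2 * m * z)))
    print(f"harmonic nu={nu:2d}: k_d = {kd:+.3f}")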
Smart Screening System (S3) In Taconite Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daryoush Allaei; Ryan Wartman; David Tarnowski
2006-03-01
The conventional screening machines used in processing plants have had undesirable high noise and vibration levels. They also have had unsatisfactorily low screening efficiency, high energy consumption, high maintenance cost, low productivity, and poor worker safety. These conventional vibrating machines have been used in almost every processing plant. Most of the current material separation technology uses heavy and inefficient electric motors with an unbalanced rotating mass to generate the shaking. In addition to being excessively noisy, inefficient, and high-maintenance, these vibrating machines are often the bottleneck in the entire process. Furthermore, these motors, along with the vibrating machines and supporting structure, shake other machines and structures in the vicinity. The latter increases maintenance costs while reducing worker health and safety. The conventional vibrating fine screens at taconite processing plants have had the same problems as those listed above. This has resulted in lower screening efficiency, higher energy and maintenance cost, and lower productivity and workers safety concerns. The focus of this work is on the design of a high performance screening machine suitable for taconite processing plants. SmartScreens™ technology uses miniaturized motors, based on smart materials, to generate the shaking. The underlying technologies are Energy Flow Control™ and Vibration Control by Confinement™. These concepts are used to direct energy flow and confine energy efficiently and effectively to the screen function. The SmartScreens™ technology addresses problems related to noise and vibration, screening efficiency, productivity, and maintenance cost and worker safety. Successful development of SmartScreens™ technology will bring drastic changes to the screening and physical separation industry. The final designs for key components of the SmartScreens™ have been developed. The key components include smart motor and associated electronics, resonators, and supporting structural elements. It is shown that the smart motors have an acceptable life and performance. Resonator (or motion amplifier) designs are selected based on the final system requirement and vibration characteristics. All the components for a fully functional prototype are fabricated. The development program is on schedule. The last semi-annual report described the completion of the design refinement phase. This phase resulted in a Smart Screen design that meets performance targets both in the dry condition and with taconite slurry flow using PZT motors. This system was successfully demonstrated for the DOE and partner companies at the Coleraine Mineral Research Laboratory in Coleraine, Minnesota. Since then, the fabrication of the dry application prototype (incorporating an electromagnetic drive mechanism and a new deblinding concept) has been completed and successfully tested at QRDC's lab.
Kim, Jongin; Lee, Boreom
2018-05-07
Different modalities such as structural MRI, FDG-PET, and CSF have complementary information, which is likely to be very useful for diagnosis of AD and MCI. Therefore, it is possible to develop a more effective and accurate AD/MCI automatic diagnosis method by integrating complementary information of different modalities. In this paper, we propose multi-modal sparse hierarchical extreme learning machine (MSH-ELM). We used volume and mean intensity extracted from 93 regions of interest (ROIs) as features of MRI and FDG-PET, respectively, and used p-tau, t-tau, and Aβ42 as CSF features. In detail, high-level representation was individually extracted from each of MRI, FDG-PET, and CSF using a stacked sparse extreme learning machine auto-encoder (sELM-AE). Then, another stacked sELM-AE was devised to acquire a joint hierarchical feature representation by fusing the high-level representations obtained from each modality. Finally, we classified the joint hierarchical feature representation using a kernel-based extreme learning machine (KELM). The results of MSH-ELM were compared with those of conventional ELM, single kernel support vector machine (SK-SVM), multiple kernel support vector machine (MK-SVM) and stacked auto-encoder (SAE). Performance was evaluated through 10-fold cross-validation. In the classification of AD vs. HC and MCI vs. HC problems, the proposed MSH-ELM method showed mean balanced accuracies of 96.10% and 86.46%, respectively, which is much better than those of competing methods. In summary, the proposed algorithm exhibits consistently better performance than SK-SVM, ELM, MK-SVM and SAE in the two binary classification problems (AD vs. HC and MCI vs. HC). © 2018 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tikotekar, Anand A; Vallee, Geoffroy R; Naughton III, Thomas J
2008-01-01
The topic of system-level virtualization has recently begun to receive interest for high performance computing (HPC). This is in part due to the isolation and encapsulation offered by the virtual machine. These traits enable applications to customize their environments and maintain consistent software configurations in their virtual domains. Additionally, there are mechanisms that can be used for fault tolerance, like live virtual machine migration. Given these attractive benefits to virtualization, a fundamental question arises: how does this affect my scientific application? We use this as the premise for our paper and observe a real-world scientific code running on a Xen virtual machine. We studied the effects of running a radiative transfer simulation, Hydrolight, on a virtual machine. We discuss our methodology and report observations regarding the usage of virtualization with this application.
A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.
Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua
2016-05-01
Big dimensional data is a growing trend that is emerging in many real world contexts, extending from web mining, gene expression analysis, protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that the increasing dimensionality poses impeding effects on the performances of classifiers, which is termed as the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on the Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of the Big dimensional data well, exhibiting excellent generalization performances. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide and conquer approximation scheme is introduced to maintain computational tractability on high volume data. The resultant algorithm proposed is labeled here as Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess the FSVD-H-ELM against other state-of-the-art algorithms. The results obtained demonstrated the superior generalization performance and efficiency of the FSVD-H-ELM. Copyright © 2016 Elsevier Ltd. All rights reserved.
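A minimal sketch of the core idea (assumed sizes, simplified to a single random subset rather than the paper's multiple subsets): derive ELM hidden-node weights from the SVD of a data subset, then solve the output weights by least squares.

import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(2000, 500)                     # "big dimensional" inputs
y = (X[:, :5].sum(axis=1) > 0).astype(float) # toy target

# SVD hidden nodes from a random subset (divide-and-conquer idea).
subset = X[rng.choice(len(X), 300, replace=False)]
U, s, Vt = np.linalg.svd(subset, full_matrices=False)
W = Vt[:50].T                                # top-50 right singular vectors
b = rng.randn(50) * 0.1

H = np.tanh(X @ W + b)                       # hidden-layer activations
beta = np.linalg.pinv(H) @ y                 # analytic ELM output weights
acc = np.mean(((H @ beta) > 0.5) == y)
print(f"training accuracy of the sketch: {acc:.3f}")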
Yamin, Samuel C; Bejan, Anca; Parker, David L; Xi, Min; Brosseau, Lisa M
2016-08-01
Metal fabrication workers are at high risk for machine-related injury. Apart from amputations, data on factors contributing to this problem are generally absent. Narrative text analysis was performed on workers' compensation claims in order to identify machine-related injuries and determine work tasks involved. Data were further evaluated on the basis of cost per claim, nature of injury, and part of body. From an initial set of 4,268 claims, 1,053 were classified as machine-related. Frequently identified tasks included machine operation (31%), workpiece handling (20%), setup/adjustment (15%), and removing chips (12%). Lacerations to finger(s), hand, or thumb comprised 38% of machine-related injuries; foreign body in the eye accounted for 20%. Amputations were relatively rare but had highest costs per claim (mean $21,059; median $11,998). Despite limitations, workers' compensation data were useful in characterizing machine-related injuries. Improving the quality of data collected by insurers would enhance occupational injury surveillance and prevention efforts. Am. J. Ind. Med. 59:656-664, 2016. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
On the Conditioning of Machine-Learning-Assisted Turbulence Modeling
NASA Astrophysics Data System (ADS)
Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng
2017-11-01
Recently, several researchers have demonstrated that machine learning techniques can be used to improve the RANS modeled Reynolds stress by training on available databases of high-fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework to model Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated to demonstrate the predictive capability of the proposed framework. Satisfactory prediction of the mean velocity field for both flows demonstrates the capability of the proposed framework for machine-learning-assisted turbulence modeling. By demonstrating improved prediction of the mean flow field, the proposed stability-oriented machine learning framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the demand for predictive turbulence models in real applications.
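A toy illustration of why conditioning matters (our own construction, not the authors' exact metric): when the operator mapping modeled Reynolds stresses to mean velocities has a large matrix condition number, tiny stress errors can produce large velocity errors.

import numpy as np

rng = np.random.RandomState(0)
A = rng.randn(50, 50)
A_ill = A @ np.diag(np.logspace(0, -8, 50))   # nearly singular directions

for name, M in [("well-conditioned", A), ("ill-conditioned", A_ill)]:
    eps = 1e-6 * rng.randn(50)                # small Reynolds-stress error
    du = np.linalg.solve(M, eps)              # induced mean-velocity error
    print(f"{name:17s} cond={np.linalg.cond(M):.2e} "
          f"|velocity error|={np.linalg.norm(du):.2e}")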
Impact of Advance Rate on Entrapment Risk of a Double-Shielded TBM in Squeezing Ground
NASA Astrophysics Data System (ADS)
Hasanpour, Rohola; Rostami, Jamal; Barla, Giovanni
2015-05-01
Shielded tunnel boring machines (TBMs) can get stuck in squeezing ground due to excessive tunnel convergence under high in situ stress. This typically coincides with extended machine stoppages, when the ground has sufficient time to undergo substantial displacements. Excessive convergence of the ground beyond the designated overboring means ground pressure against the shield and high shield frictional resistance that, in some cases, cannot be overcome by the TBM thrust system. This leads to machine entrapment in the ground, which causes significant delays and requires labor-intensive and risky operations of manual excavation to release the machine. To evaluate the impact of the time factor on the possibility of machine entrapment, a comprehensive 3D finite difference simulation of a double-shielded TBM in squeezing ground was performed. The modeling allowed for observation of the impact of the tunnel advance rate on the possibility of machine entrapment in squeezing ground. For this purpose, the model included rock mass properties related to creep in severe squeezing conditions. This paper offers an overview of the modeling results for a given set of rock mass and TBM parameters, as well as lining characteristics, including the magnitude of displacement and contact forces on shields and ground pressure on segmental lining versus time for different advance rates.
Zhang, Xin; Yan, Lin-Feng; Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin
2017-07-18
Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools, by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of varied machine learning methods in differentiating low-grade gliomas (LGGs) and high-grade gliomas (HGGs) as well as WHO grade II, III and IV gliomas based on multi-parametric MRI images was proposed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using leave-one-out cross validation (LOOCV) strategy. Besides, the influences of parameter selection on the classifying performances were investigated. We found that support vector machine (SVM) exhibited superior performance to other classifiers. By combining all tumor attributes with synthetic minority over-sampling technique (SMOTE), the highest classifying accuracy of 0.945 or 0.961 for LGG and HGG or grade II, III and IV gliomas was achieved. Application of Recursive Feature Elimination (RFE) attribute selection strategy further improved the classifying accuracies. Besides, the performances of LibSVM, SMO, IBk classifiers were influenced by some key parameters such as kernel type, c, gamma, K, etc. SVM is a promising tool in developing automated preoperative glioma grading system, especially when being combined with RFE strategy. Model parameters should be considered in glioma grading model optimization.
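The winning combination reported above (SVM with SMOTE oversampling and RFE attribute selection) can be sketched with scikit-learn and imbalanced-learn on simulated attributes. Note one simplification: resampling here precedes cross-validation for brevity, which is optimistic; the paper's exact protocol may differ.

import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

# Simulated stand-in for 120 patients' histogram/texture attributes.
X, y = make_classification(n_samples=120, n_features=40, n_informative=8,
                           weights=[0.3, 0.7], random_state=0)

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)   # balance classes
selector = RFE(SVC(kernel="linear"), n_features_to_select=15)
X_sel = selector.fit_transform(X_res, y_res)

scores = cross_val_score(SVC(kernel="rbf", C=1.0, gamma="scale"),
                         X_sel, y_res, cv=LeaveOneOut())
print(f"LOOCV accuracy: {scores.mean():.3f}")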
Use of IT platform in determination of efficiency of mining machines
NASA Astrophysics Data System (ADS)
Brodny, Jarosław; Tutak, Magdalena
2018-01-01
Determining the effective use of mining machines is of great importance to mining enterprises. The high costs of purchasing and leasing these machines push enterprises to make the best use of their technical potential. However, the specifics of mining production mean that this process does not always proceed without interference. Practical experience shows that defining an objective measure of machine utilization in a mining enterprise is not simple. This paper presents a proposed solution to this problem, based on an IT platform and the overall equipment effectiveness (OEE) model. The model evaluates a machine in terms of its availability, performance and product quality, and constitutes a quantitative tool of the TPM strategy. Adapted to the specifics of the mining industry, the OEE model, together with data acquired from an industrial automation system, enabled determination of the partial indicators and the overall effectiveness of the tested machines. Studies were performed on a set of machines directly used in the coal extraction process: a longwall shearer, an armoured face conveyor and a beam stage loader. The results clearly indicate that the degree of machine utilization in mining enterprises is unsatisfactory. The use of IT platforms significantly facilitates the registration, archiving and analytical processing of the acquired data. The paper presents the methodology for determining the partial indices and the total OEE, together with a practical example of its application to the investigated set of machines.
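A worked example of the OEE calculation that underlies the study (illustrative shift numbers, not the paper's data): OEE is the product of availability, performance and quality.

planned_time = 480.0      # minutes in the shift
downtime = 75.0           # breakdowns, waiting (minutes)
ideal_rate = 20.0         # tonnes of coal per minute at rated capacity
actual_output = 6800.0    # tonnes actually conveyed
defective = 120.0         # tonnes rejected (e.g., oversize or contaminated)

availability = (planned_time - downtime) / planned_time
performance = actual_output / (ideal_rate * (planned_time - downtime))
quality = (actual_output - defective) / actual_output
oee = availability * performance * quality
print(f"A={availability:.2f} P={performance:.2f} Q={quality:.2f} OEE={oee:.2f}")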
Ethoscopes: An open platform for high-throughput ethomics.
Geissmann, Quentin; Garcia Rodriguez, Luis; Beckwith, Esteban J; French, Alice S; Jamasb, Arian R; Gilestro, Giorgio F
2017-10-01
Here, we present the use of ethoscopes, which are machines for high-throughput analysis of behavior in Drosophila and other animals. Ethoscopes provide a software and hardware solution that is reproducible and easily scalable. They perform, in real-time, tracking and profiling of behavior by using a supervised machine learning algorithm, are able to deliver behaviorally triggered stimuli to flies in a feedback-loop mode, and are highly customizable and open source. Ethoscopes can be built easily by using 3D printing technology and rely on Raspberry Pi microcomputers and Arduino boards to provide affordable and flexible hardware. All software and construction specifications are available at http://lab.gilest.ro/ethoscope.
Flexible architecture of data acquisition firmware based on multi-behaviors finite state machine
NASA Astrophysics Data System (ADS)
Arpaia, Pasquale; Cimmino, Pasquale
2016-11-01
A flexible firmware architecture for different kinds of data acquisition systems, ranging from high-precision bench instruments to low-cost wireless transducers networks, is presented. The key component is a multi-behaviors finite state machine, easily configurable to both low- and high-performance requirements, to diverse operating systems, as well as to on-line and batch measurement algorithms. The proposed solution was validated experimentally on three case studies with data acquisition architectures: (i) concentrated, in a high-precision instrument for magnetic measurements at CERN, (ii) decentralized, for telemedicine remote monitoring of patients at home, and (iii) distributed, for remote monitoring of a building's energy loss.
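The configurable-behavior idea can be sketched as follows (illustrative state and behavior names, not the authors' firmware): one state graph runs different handler sets depending on the configured behavior.

class DAQStateMachine:
    def __init__(self, behavior):
        self.state = "IDLE"
        self.behavior = behavior  # e.g. "high_precision" or "low_power"

    def step(self, event):
        # Shared transition table across all behaviors.
        transitions = {
            ("IDLE", "start"): "ACQUIRE",
            ("ACQUIRE", "buffer_full"): "PROCESS",
            ("PROCESS", "done"): "IDLE",
        }
        self.state = transitions.get((self.state, event), self.state)
        # Behavior-specific handler for the new state, if one exists.
        handler = getattr(self, f"{self.behavior}_{self.state.lower()}", None)
        if handler:
            handler()

    def high_precision_acquire(self):
        print("acquire: oversample and average")

    def low_power_acquire(self):
        print("acquire: single conversion, then sleep")

fsm = DAQStateMachine("low_power")
fsm.step("start")     # enters ACQUIRE with the low-power behavior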
2017-06-01
UNMANNED TACTICAL AUTONOMOUS CONTROL AND COLLABORATION (UTACC) HUMAN-MACHINE INTEGRATION MEASURES OF PERFORMANCE AND MEASURES OF EFFECTIVENESS
Thomas A...
The Unmanned Tactical Autonomous Control and Collaboration (UTACC) program seeks to integrate Marines and autonomous machines to address the challenges encountered in
Imbalance aware lithography hotspot detection: a deep learning approach
NASA Astrophysics Data System (ADS)
Yang, Haoyu; Luo, Luyang; Su, Jing; Lin, Chenxi; Yu, Bei
2017-03-01
With the advancement of VLSI technology nodes, lithographic hotspots caused by light diffraction have become a serious problem affecting manufacture yield. Lithography hotspot detection at the post-OPC stage is imperative to check potential circuit failures when transferring designed patterns onto silicon wafers. Although conventional lithography hotspot detection methods, such as machine learning, have gained satisfactory performance, with extreme scaling of transistor feature size and more and more complicated layout patterns, conventional methodologies may suffer from performance degradation. For example, manual or ad hoc feature extraction in a machine learning framework may lose important information when predicting potential errors in ultra-large-scale integrated circuit masks. In this paper, we present a deep convolutional neural network (CNN) targeting representative feature learning in lithography hotspot detection. We carefully analyze the impact and effectiveness of different CNN hyper-parameters, through which a hotspot-detection-oriented neural network model is established. Because hotspot patterns are always minorities in VLSI mask design, the training data set is highly imbalanced. In this situation, a neural network is no longer reliable, because a trained model with high classification accuracy may still suffer from high false negative results (missing hotspots), which is fatal in hotspot detection problems. To address the imbalance problem, we further apply minority upsampling and random-mirror flipping before training the network. Experimental results show that our proposed neural network model achieves highly comparable or better performance on the ICCAD 2012 contest benchmark compared to state-of-the-art hotspot detectors based on deep or representative machine learning.
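The imbalance handling described above can be sketched on synthetic arrays (shapes assumed): replicate the minority hotspot clips and apply label-preserving random mirror flips before training the CNN.

import numpy as np

rng = np.random.RandomState(0)
hotspots = rng.rand(40, 64, 64)       # minority-class layout clips
non_hotspots = rng.rand(400, 64, 64)  # majority class

# Minority upsampling: replicate hotspot clips to balance the classes.
factor = len(non_hotspots) // len(hotspots)
upsampled = np.repeat(hotspots, factor, axis=0)

# Random mirror flips (horizontal/vertical) as augmentation.
flip_h = rng.rand(len(upsampled)) < 0.5
flip_v = rng.rand(len(upsampled)) < 0.5
upsampled[flip_h] = upsampled[flip_h, :, ::-1]
upsampled[flip_v] = upsampled[flip_v, ::-1, :]

X = np.concatenate([upsampled, non_hotspots])
y = np.concatenate([np.ones(len(upsampled)), np.zeros(len(non_hotspots))])
print("balanced training set:", X.shape, "positives:", int(y.sum()))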
Li, Zexiao; Liu, Xianlei; Fang, Fengzhou; Zhang, Xiaodong; Zeng, Zhen; Zhu, Linlin; Yan, Ning
2018-03-19
Multi-reflective imaging systems find wide applications in optical imaging and space detection. However, adjusting their freeform mirrors with the high accuracy needed to guarantee the optical function is difficult. Motivated by this, an alignment-free manufacturing approach is proposed to machine the optical system. A direct, optical-performance-guided manufacturing route is established without measuring the form error of the freeform optics. An analytical model is established to investigate the effects of machine errors, serving error identification and compensation in machining. Based on the integrally manufactured system, an ingenious self-designed testing configuration is constructed to evaluate the optical performance by directly measuring the wavefront aberration. Experiments were carried out to manufacture a three-mirror anastigmat; the surface topographical details and optical performance show agreement with the design expectation. The final system works as an off-axis infrared imaging system. The results validate the feasibility of the proposed method for excellent optical applications.
Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base
NASA Technical Reports Server (NTRS)
Katz, Randy H.; Ousterhout, John K.; Patterson, David A.
1993-01-01
Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPU's. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Teraop program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid 1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.
Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time-to-Event Analysis.
Gong, Xiajing; Hu, Meng; Zhao, Liang
2018-05-01
Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featured by a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
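Since the concordance index is the headline performance measure here, a minimal self-contained implementation may help. Pair handling for right-censored data follows the standard Harrell definition (toy numbers, not the paper's data):

import numpy as np

def concordance_index(time, event, risk):
    """Fraction of comparable pairs ordered correctly by predicted risk."""
    n_conc, n_comp = 0.0, 0
    for i in range(len(time)):
        if not event[i]:
            continue                       # pair anchor must be an observed event
        for j in range(len(time)):
            if time[j] > time[i]:          # j outlived i, so the pair is comparable
                n_comp += 1
                if risk[i] > risk[j]:
                    n_conc += 1
                elif risk[i] == risk[j]:
                    n_conc += 0.5
    return n_conc / n_comp

t = np.array([5.0, 8.0, 12.0, 3.0, 9.0])
e = np.array([1, 0, 1, 1, 1])              # 1 = event observed, 0 = censored
r = np.array([0.9, 0.4, 0.2, 1.1, 0.3])    # higher risk -> earlier event
print(f"c-index = {concordance_index(t, e, r):.3f}")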
Multiphase complete exchange on Paragon, SP2 and CS-2
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1995-01-01
The overhead of interprocessor communication is a major factor in limiting the performance of parallel computer systems. The complete exchange is the severest communication pattern in that it requires each processor to send a distinct message to every other processor. This pattern is at the heart of many important parallel applications. On hypercubes, multiphase complete exchange has been developed and shown to provide optimal performance over varying message sizes. Most commercial multicomputer systems do not have a hypercube interconnect. However, they use special purpose hardware and dedicated communication processors to achieve very high performance communication and can be made to emulate the hypercube quite well. Multiphase complete exchange has been implemented on three contemporary parallel architectures: the Intel Paragon, IBM SP2 and Meiko CS-2. The essential features of these machines are described and their basic interprocessor communication overheads are discussed. The performance of multiphase complete exchange is evaluated on each machine. It is shown that the theoretical ideas developed for hypercubes are also applicable in practice to these machines and that multiphase complete exchange can lead to major savings in execution time over traditional solutions.
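The record above turns on the hypercube complete exchange that the multiphase algorithm generalizes. The sketch below is a minimal simulation of the d-phase store-and-forward exchange on 2^d simulated processors; it illustrates the communication pattern only, and is not Bokhari's implementation.

```python
# Hedged sketch: simulate the d-phase hypercube complete exchange that
# multiphase algorithms build on. Each processor i starts with one
# message per destination j; in phase k it exchanges, with its neighbor
# across dimension k, all buffered messages whose destination's k-th bit
# lies on the partner's side of the link.
d = 3
P = 2 ** d                                   # number of simulated processors
buffers = {i: [(j, f"msg {i}->{j}") for j in range(P)] for i in range(P)}

for k in range(d):
    bit = 1 << k
    new_buffers = {i: [] for i in range(P)}
    for i in range(P):
        partner = i ^ bit
        for dest, payload in buffers[i]:
            # Forward across dimension k iff the destination's k-th bit
            # matches the partner's k-th bit; otherwise keep the message.
            owner = partner if (dest & bit) == (partner & bit) else i
            new_buffers[owner].append((dest, payload))
    buffers = new_buffers

# After d phases every message sits at its destination.
assert all(dest == i for i in range(P) for dest, _ in buffers[i])
print(sorted(buffers[5]))
```

The multiphase variant trades off the number of such phases against message size to minimize total time for a given startup cost, which is what the paper tunes per machine.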
You, Zhu-Hong; Lei, Ying-Ke; Zhu, Lin; Xia, Junfeng; Wang, Bing
2013-01-01
Protein-protein interactions (PPIs) play crucial roles in the execution of various cellular processes and form the basis of biological mechanisms. Although large amounts of PPI data for different species have been generated by high-throughput experimental techniques, the PPI pairs obtained with experimental methods cover only a fraction of the complete PPI networks, and the experimental methods for identifying PPIs are both time-consuming and expensive. Hence, it is urgent and challenging to develop automated computational methods to efficiently and accurately predict PPIs. We present here a novel hierarchical PCA-EELM (principal component analysis-ensemble extreme learning machine) model to predict protein-protein interactions using only the information of protein sequences. In the proposed method, 11188 protein pairs retrieved from the DIP database were encoded into feature vectors by using four kinds of protein sequence information. Focusing on dimension reduction, an effective feature extraction method, PCA, was then employed to construct the most discriminative new feature set. Finally, multiple extreme learning machines were trained and then aggregated into a consensus classifier by majority voting. The ensembling of extreme learning machines removes the dependence of results on initial random weights and improves the prediction performance. When performed on the PPI data of Saccharomyces cerevisiae, the proposed method achieved 87.00% prediction accuracy with 86.15% sensitivity at a precision of 87.59%. Extensive experiments were performed to compare our method with the state-of-the-art Support Vector Machine (SVM) technique. Experimental results demonstrate that the proposed PCA-EELM outperforms the SVM method under 5-fold cross-validation; besides, PCA-EELM performs faster than the PCA-SVM based method. Consequently, the proposed approach can be considered a promising and powerful tool for predicting PPIs with excellent performance and less time.
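A minimal sketch of the PCA-EELM idea follows: PCA-reduced features feed an ensemble of extreme learning machines (random hidden layer, ridge-solved readout) combined by majority vote. The synthetic data and all sizes are placeholders, not the paper's DIP protein-pair encoding.

```python
# Hedged sketch of PCA + ensemble extreme learning machine (EELM):
# ELM = fixed random hidden layer, readout solved by ridge regression;
# the ensemble vote suppresses dependence on the random weights.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

def elm_fit(X, y01, n_hidden, lam, rng):
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y01)
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return (np.tanh(X @ W + b) @ beta > 0.5).astype(int)

X, y = make_classification(n_samples=2000, n_features=100, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
pca = PCA(n_components=30).fit(Xtr)          # dimension reduction step
Ztr, Zte = pca.transform(Xtr), pca.transform(Xte)

rng = np.random.default_rng(0)
models = [elm_fit(Ztr, ytr, n_hidden=200, lam=1e-2, rng=rng) for _ in range(11)]
votes = np.sum([elm_predict(Zte, m) for m in models], axis=0)
y_hat = (votes > len(models) / 2).astype(int)   # majority vote
print("ensemble accuracy:", (y_hat == yte).mean())
```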
Analytical model for force prediction when machining metal matrix composites
NASA Astrophysics Data System (ADS)
Sikder, Snahungshu
Metal Matrix Composites (MMCs) offer several thermo-mechanical advantages over standard materials and alloys which make them better candidates in different applications. Their light weight, high stiffness, and strength have attracted several industries such as automotive, aerospace, and defence for their wide range of products. However, the widespread application of Metal Matrix Composites is still a challenge for industry. The hard and abrasive nature of the reinforcement particles is responsible for rapid tool wear and high machining costs. Fracture and debonding of the abrasive reinforcement particles are the considerable damage modes that directly influence tool performance. It is therefore important to find highly effective ways to machine MMCs, and in particular to predict the forces generated when machining them, because this helps in selecting suitable tools and ultimately saves both money and time. This research presents an analytical force model for predicting the forces generated during machining of Metal Matrix Composites. In estimating the generated forces, several aspects of cutting mechanics were considered, including the shearing force, ploughing force, and particle fracture force. The chip formation force was obtained from classical orthogonal metal cutting mechanics and the Johnson-Cook equation. The ploughing force was formulated, while the fracture force was calculated from slip line field theory and the Griffith theory of failure. The predicted results were compared with previously measured data and showed very good agreement between the theoretically predicted and experimentally measured cutting forces.
2015-09-28
The performance of log-and-replay can degrade significantly for VMs configured with multiple virtual CPUs, since the shared memory communication... Whether based on checkpoint replication or log-and-replay, existing HA approaches use in-memory backups; the backup VM sits in the memory of a... Subject terms: high-availability virtual machines, live migration, memory and traffic overheads, application suspension, Java.
Productive High Performance Parallel Programming with Auto-tuned Domain-Specific Embedded Languages
2013-01-02
[Only front-matter fragments survive for this record: an acronym glossary (JVM, KB, KDT, LAPACK, LLVM, LOC, ...), acknowledgments, and a table fragment; no abstract text was recovered.]
Parallel Computational Fluid Dynamics: Current Status and Future Requirements
NASA Technical Reports Server (NTRS)
Simon, Horst D.; VanDalsem, William R.; Dagum, Leonardo; Kutler, Paul (Technical Monitor)
1994-01-01
One of the key objectives of the Applied Research Branch in the Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center is the accelerated introduction of highly parallel machines into a full operational environment. In this report we discuss the performance results obtained from the implementation of some computational fluid dynamics (CFD) applications on the Connection Machine CM-2 and the Intel iPSC/860. We summarize some of the experience gained so far with the parallel testbed machines at the NAS Applied Research Branch. Then we discuss the long term computational requirements for accomplishing some of the grand challenge problems in computational aerosciences. We argue that only massively parallel machines will be able to meet these grand challenge requirements, and we outline the computer science and algorithm research challenges ahead.
Automatic Earthquake Detection by Active Learning
NASA Astrophysics Data System (ADS)
Bergen, K.; Beroza, G. C.
2017-12-01
In recent years, advances in machine learning have transformed fields such as image recognition, natural language processing and recommender systems. Many of these performance gains have relied on the availability of large, labeled data sets to train high-accuracy models; labeled data sets are those for which each sample includes a target class label, such as waveforms tagged as either earthquakes or noise. Earthquake seismologists are increasingly leveraging machine learning and data mining techniques to detect and analyze weak earthquake signals in large seismic data sets. One of the challenges in applying machine learning to seismic data sets is the limited labeled data problem; learning algorithms need to be given examples of earthquake waveforms, but the number of known events, taken from earthquake catalogs, may be insufficient to build an accurate detector. Furthermore, earthquake catalogs are known to be incomplete, resulting in training data that may be biased towards larger events and contain inaccurate labels. This challenge is compounded by the class imbalance problem; the events of interest, earthquakes, are infrequent relative to noise in continuous data sets, and many learning algorithms perform poorly on rare classes. In this work, we investigate the use of active learning for automatic earthquake detection. Active learning is a type of semi-supervised machine learning that uses a human-in-the-loop approach to strategically supplement a small initial training set. The learning algorithm incorporates domain expertise through interaction between a human expert and the algorithm, with the algorithm actively posing queries to the user to improve detection performance. We demonstrate the potential of active machine learning to improve earthquake detection performance with limited available training data.
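The sketch below illustrates the pool-based uncertainty-sampling loop that active learning typically uses, on an imbalanced synthetic binary task; the classifier, query budget and data are stand-ins, not the authors' seismic detector.

```python
# Hedged sketch of pool-based active learning with uncertainty sampling:
# iteratively query the label of the pool sample the model is least sure
# about, mimicking the human-in-the-loop interaction described above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Rare positive class, echoing the earthquake/noise imbalance.
X, y = make_classification(n_samples=3000, weights=[0.95], random_state=1)
rng = np.random.default_rng(1)
idx0, idx1 = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
labeled = (list(rng.choice(idx0, 15, replace=False)) +
           list(rng.choice(idx1, 5, replace=False)))   # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

clf = LogisticRegression(max_iter=1000, class_weight="balanced")
for _ in range(50):                       # 50 queries to the "expert"
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]  # most uncertain sample
    labeled.append(query)                 # oracle supplies y[query]
    pool.remove(query)
print("labels used:", len(labeled))
```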
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aimone, James Bradley; Betty, Rita
Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation/Completion of Episodic Information - Sandia researchers developed novel methods and metrics for studying the computational function of neurogenesis, thus generating substantial impact to the neuroscience and neural computing communities. This work could benefit applications in machine learning and other analysis activities.
Weakly supervised classification in high energy physics
Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; ...
2017-05-01
As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. Here, this paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics - quark versus gluon tagging - we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.
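A hedged sketch of the learning-from-label-proportions setup follows: a logistic model is fit purely by matching each bag's predicted class fraction to its known proportion, the only supervision the weakly supervised scheme allows. The toy bags below are invented; this is not the authors' jet-tagging model.

```python
# Hedged sketch of weakly supervised classification from class
# proportions: per-bag loss (mean predicted probability - bag fraction)^2,
# minimized by plain gradient descent on a logistic model.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
bags, props = [], []
for _ in range(40):                                  # 40 bags of 256 samples
    center = rng.normal(size=2)                      # bags differ in makeup
    Xb = rng.normal(loc=center, size=(256, 2))
    yb = (Xb @ w_true + 0.3 * rng.normal(size=256) > 0)
    bags.append(Xb)
    props.append(yb.mean())                          # only proportions kept

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w = np.zeros(2)
for _ in range(500):
    grad = np.zeros(2)
    for Xb, p in zip(bags, props):
        s = sigmoid(Xb @ w)
        # d/dw of (mean(s) - p)^2, with ds/dw = s(1-s) x
        grad += 2 * (s.mean() - p) * (s * (1 - s)) @ Xb / len(Xb)
    w -= 0.5 * grad
print("recovered direction:", w / np.linalg.norm(w),
      "true:", w_true / np.linalg.norm(w_true))
```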
Characterizing parallel file-access patterns on a large-scale multiprocessor
NASA Technical Reports Server (NTRS)
Purakayastha, A.; Ellis, Carla; Kotz, David; Nieuwejaar, Nils; Best, Michael L.
1995-01-01
High-performance parallel file systems are needed to satisfy tremendous I/O requirements of parallel scientific applications. The design of such high-performance parallel file systems depends on a comprehensive understanding of the expected workload, but so far there have been very few usage studies of multiprocessor file systems. This paper is part of the CHARISMA project, which intends to fill this void by measuring real file-system workloads on various production parallel machines. In particular, we present results from the CM-5 at the National Center for Supercomputing Applications. Our results are unique because we collect information about nearly every individual I/O request from the mix of jobs running on the machine. Analysis of the traces leads to various recommendations for parallel file-system design.
Weakly supervised classification in high energy physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco
As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. Here, this paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics - quark versus gluon tagging - we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.
Performance analysis of a new radial-axial flux machine with SMC cores and ferrite magnets
NASA Astrophysics Data System (ADS)
Liu, Chengcheng; Wang, Youhua; Lei, Gang; Guo, Youguang; Zhu, Jianguo
2017-05-01
Soft magnetic composite (SMC) is a popular material for designing new 3D-flux electrical machines, as it offers an isotropic magnetic characteristic, low eddy current loss and high design flexibility compared with electrical steel. The axial flux machine (AFM) with the stator tooth tip extended in both the radial and circumferential directions is a good example, and has been investigated in recent years. Based on the 3D-flux AFM and the radial flux machine, this paper proposes a new radial-axial flux machine (RAFM) with SMC cores and ferrite magnets, which achieves very high torque density even though low-cost, low-magnetic-energy ferrite magnets are utilized. Moreover, the cost of the RAFM is quite low, since the manufacturing cost can be reduced by using SMC cores and the material cost is decreased due to the adoption of ferrite magnets. The 3D finite element method (FEM) is used to calculate the magnetic flux density distribution and electromagnetic parameters. For the core loss calculation, a rotational core loss computation method is used, based on experimental results from a previous 3D magnetic tester.
Wu, Zhenqin; Ramsundar, Bharath; Feinberg, Evan N.; Gomes, Joseph; Geniesse, Caleb; Pappu, Aneesh S.; Leswing, Karl
2017-01-01
Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than choice of particular learning algorithm. PMID:29629118
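A hedged sketch of pulling one MoleculeNet dataset through DeepChem and scoring a fixed-featurization baseline appears below. The load_tox21 signature and the dataset attributes (.X, .y) reflect recent DeepChem releases and may differ by version; the single-task ROC-AUC shortcut also ignores Tox21's missing-label weight matrix (.w).

```python
# Hedged sketch: one MoleculeNet benchmark dataset via DeepChem, scored
# with a fixed-featurization (ECFP) baseline; API assumed, see lead-in.
import deepchem as dc
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

tasks, (train, valid, test), _ = dc.molnet.load_tox21(
    featurizer="ECFP", splitter="scaffold")

clf = RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0)
clf.fit(train.X, train.y[:, 0])          # first Tox21 task only, for brevity
auc = roc_auc_score(valid.y[:, 0], clf.predict_proba(valid.X)[:, 1])
print(f"{tasks[0]} validation ROC-AUC: {auc:.3f}")
```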
Plasma Wall interaction in the IGNITOR machine
NASA Astrophysics Data System (ADS)
Ferro, C.
1998-11-01
One of the critical issues in ignited machines is the management of the heat and particle exhaust without degradation of the plasma quality (pollution and confinement time) and without damage to the material facing the plasma. The IGNITOR machine has been conceived as a "limiter" device, i.e., with the plasma leaning on nearly the entire surface of the first wall. Peak heat loads can easily be maintained at values lower than 1.35 MW/m^2 even considering displacements of the plasma column^1. This "limiter" choice is based on the operational performance of high density, high field machines, which suggests that intrinsic physics processes in the edge of the plasma are effective in spreading heat loads and maintaining plasma pollution at a low level. The possibility of these operating scenarios has been demonstrated recently by different machines in both limiter and divertor configurations. The basis for the different physical processes that are expected to influence the IGNITOR edge parameters^2 is discussed and a comparison with the latest experimental results is given. ^1 C. Ferro, G. Franzoni, R. Zanino, ENEA Internal Report RT/ERG/FUS/94/14. ^2 C. Ferro, R. Zanino, J. Nucl. Mater. 543, 176 (1990).
Laser assisted machining: a state of art review
NASA Astrophysics Data System (ADS)
Punugupati, Gurabvaiah; Kandi, Kishore Kumar; Bose, P. S. C.; Rao, C. S. P.
2016-09-01
Difficult-to-cut materials are in increasing demand in the aerospace and automobile industries due to their high yield stress, high strength-to-weight ratio, high toughness, high wear resistance, high creep and corrosion resistance, and ability to retain high strength at high temperature. The machinability of these advanced materials using conventional methods is poor: high temperatures and pressures arise at the cutting zone and tool, and properties such as low thermal conductivity lead to high cutting forces and cutting temperatures, making the materials difficult to machine. Laser assisted machining (LAM) is a new and innovative technique for machining difficult-to-cut materials. This paper presents a review of the advances in lasers and tools, the mechanism of machining using LAM, and their effects.
A visual programming environment for the Navier-Stokes computer
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl; Crockett, Thomas W.; Middleton, David
1988-01-01
The Navier-Stokes computer is a high-performance, reconfigurable, pipelined machine designed to solve large computational fluid dynamics problems. Due to the complexity of the architecture, development of effective, high-level language compilers for the system appears to be a very difficult task. Consequently, a visual programming methodology has been developed which allows users to program the system at an architectural level by constructing diagrams of the pipeline configuration. These schematic program representations can then be checked for validity and automatically translated into machine code. The visual environment is illustrated by using a prototype graphical editor to program an example problem.
Process of making cryogenically cooled high thermal performance crystal optics
Kuzay, Tuncer M.
1992-01-01
A method for constructing a cooled optic wherein one or more cavities are milled, drilled or formed using casting or ultrasound laser machining techniques in a single crystal base and filled with porous material having high thermal conductivity at cryogenic temperatures. A non-machined strain-free single crystal can be bonded to the base to produce superior optics. During operation of the cooled optic, N₂ is pumped through the porous material at a sub-cooled cryogenic inlet temperature and with sufficient system pressure to prevent the fluid bulk temperature from reaching saturation.
Process of making cryogenically cooled high thermal performance crystal optics
Kuzay, T.M.
1992-06-23
A method is disclosed for constructing a cooled optic wherein one or more cavities are milled, drilled or formed using casting or ultrasound laser machining techniques in a single crystal base and filled with porous material having high thermal conductivity at cryogenic temperatures. A non-machined strain-free single crystal can be bonded to the base to produce superior optics. During operation of the cooled optic, N₂ is pumped through the porous material at a sub-cooled cryogenic inlet temperature and with sufficient system pressure to prevent the fluid bulk temperature from reaching saturation. 7 figs.
Dominguez Veiga, Jose Juan; O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E
2017-08-04
Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the isolation of a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problem do not possess the necessary technical background for this feature-set development. The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning, using a pretrained convolutional neural network (CNN) developed for machine vision purposes for the exercise classification effort. The new method simply requires researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. We applied a CNN, an established machine vision technique, to the task of ED. Tensorflow, a high-level framework for machine learning, was used to facilitate infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals were used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), were collected. The ability of the proposed method to automatically classify the exercise being completed was assessed using this dataset. For comparative purposes, classification using the same dataset was also performed using the more conventional approach of feature extraction and classification using random forest classifiers. With the collected dataset and the proposed method, the different exercises could be recognized with 95.89% (3827/3991) accuracy, which is competitive with current state-of-the-art techniques in ED. The high level of accuracy attained with the proposed approach indicates that the waveform morphologies in the time-series plots for each of the exercises are sufficiently distinct among the participants to allow the use of machine vision approaches. The use of high-level machine learning frameworks, coupled with the novel use of machine vision techniques instead of complex manually crafted features, may facilitate access to research in the HAR field for individuals without extensive digital signal processing or machine learning backgrounds. ©Jose Juan Dominguez Veiga, Martin O'Reilly, Darragh Whelan, Brian Caulfield, Tomas E Ward. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 04.08.2017.
O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E
2017-01-01
Background Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the isolation of a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problem do not possess the necessary technical background for this feature-set development. Objective The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning, using a pretrained convolutional neural network (CNN) developed for machine vision purposes for the exercise classification effort. The new method simply requires researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. Methods We applied a CNN, an established machine vision technique, to the task of ED. Tensorflow, a high-level framework for machine learning, was used to facilitate infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals were used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), were collected. The ability of the proposed method to automatically classify the exercise being completed was assessed using this dataset. For comparative purposes, classification using the same dataset was also performed using the more conventional approach of feature extraction and classification using random forest classifiers. Results With the collected dataset and the proposed method, the different exercises could be recognized with 95.89% (3827/3991) accuracy, which is competitive with current state-of-the-art techniques in ED. Conclusions The high level of accuracy attained with the proposed approach indicates that the waveform morphologies in the time-series plots for each of the exercises are sufficiently distinct among the participants to allow the use of machine vision approaches. The use of high-level machine learning frameworks, coupled with the novel use of machine vision techniques instead of complex manually crafted features, may facilitate access to research in the HAR field for individuals without extensive digital signal processing or machine learning backgrounds. PMID:28778851
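A minimal sketch of the transfer-learning recipe these two records describe: signals are rendered as images, and a pretrained Inception backbone is reused with a new classification head. Keras stands in for the paper's TensorFlow/Inception retraining script, and the images/ directory layout (one folder per exercise label) is an assumption.

```python
# Hedged sketch: render IMU windows as plots, then retrain only a new
# head on top of a frozen ImageNet-pretrained Inception backbone.
import matplotlib.pyplot as plt
import tensorflow as tf

def signal_to_image(signal, path):
    """Render one IMU window as the kind of plot the network is fed."""
    fig = plt.figure(figsize=(3, 3))
    plt.plot(signal)
    plt.axis("off")
    fig.savefig(path)      # save under images/<exercise_label>/xxx.png
    plt.close(fig)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "images/", image_size=(299, 299), batch_size=32)   # folder name = label

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False                  # retrain only the new head
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # Inception's [-1, 1]
    base,
    tf.keras.layers.Dense(5, activation="softmax"),     # 5 exercises
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```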
Wang, Tongtong; Xiao, Zhiqiang; Liu, Zhigang
2017-01-01
Leaf area index (LAI) is an important biophysical parameter, and the retrieval of LAI from remote sensing data is the only feasible method for generating LAI products at regional and global scales. However, most LAI retrieval methods use satellite observations at a specific time to retrieve LAI. Because of the impacts of clouds and aerosols, the LAI products generated by these methods are spatially incomplete and temporally discontinuous, and thus they cannot meet the needs of practical applications. To generate high-quality LAI products, four machine learning algorithms, including the back-propagation neural network (BPNN), radial basis function networks (RBFNs), general regression neural networks (GRNNs), and multi-output support vector regression (MSVR), are proposed in this study to retrieve LAI from time-series Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance data, and the performance of these machine learning algorithms is evaluated. The results demonstrated that GRNNs, RBFNs, and MSVR exhibited low sensitivity to training sample size, whereas BPNN had high sensitivity. The four algorithms performed slightly better with red, near infrared (NIR), and short wave infrared (SWIR) bands than with red and NIR bands, and the results were significantly better than those obtained using single band reflectance data (red or NIR). Regardless of band composition, GRNNs performed better than the other three methods. Among the four algorithms, BPNN required the least training time, whereas MSVR needed the most for any sample size. PMID:28045443
Wang, Tongtong; Xiao, Zhiqiang; Liu, Zhigang
2017-01-01
Leaf area index (LAI) is an important biophysical parameter, and the retrieval of LAI from remote sensing data is the only feasible method for generating LAI products at regional and global scales. However, most LAI retrieval methods use satellite observations at a specific time to retrieve LAI. Because of the impacts of clouds and aerosols, the LAI products generated by these methods are spatially incomplete and temporally discontinuous, and thus they cannot meet the needs of practical applications. To generate high-quality LAI products, four machine learning algorithms, including the back-propagation neural network (BPNN), radial basis function networks (RBFNs), general regression neural networks (GRNNs), and multi-output support vector regression (MSVR), are proposed in this study to retrieve LAI from time-series Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance data, and the performance of these machine learning algorithms is evaluated. The results demonstrated that GRNNs, RBFNs, and MSVR exhibited low sensitivity to training sample size, whereas BPNN had high sensitivity. The four algorithms performed slightly better with red, near infrared (NIR), and short wave infrared (SWIR) bands than with red and NIR bands, and the results were significantly better than those obtained using single band reflectance data (red or NIR). Regardless of band composition, GRNNs performed better than the other three methods. Among the four algorithms, BPNN required the least training time, whereas MSVR needed the most for any sample size.
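As a hedged illustration of the multi-output comparison, the sketch below maps synthetic time-series reflectance to time-series LAI with a neural network (BPNN-like) and a per-output SVR wrapper. Note that sklearn's MultiOutputRegressor fits independent SVRs, an approximation of true MSVR, and the data are invented, not MODIS.

```python
# Hedged sketch: multi-output regression from time-series reflectance to
# time-series LAI, comparing two of the model families named above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n, n_steps, n_bands = 1500, 23, 3          # e.g. red, NIR, SWIR composites
X = rng.uniform(0, 1, size=(n, n_steps * n_bands))
# Synthetic LAI trajectory driven nonlinearly by the first band.
lai = 6 * np.abs(np.sin(X[:, :n_steps] * np.pi)) \
      + 0.1 * rng.normal(size=(n, n_steps))

Xtr, Xte, ytr, yte = train_test_split(X, lai, random_state=0)
models = {
    "MLP (BPNN-like)": MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000),
    "per-output SVR (MSVR stand-in)": MultiOutputRegressor(SVR(kernel="rbf")),
}
for name, m in models.items():
    m.fit(Xtr, ytr)
    rmse = np.sqrt(np.mean((m.predict(Xte) - yte) ** 2))
    print(name, "RMSE:", round(rmse, 3))
```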
Voice based gender classification using machine learning
NASA Astrophysics Data System (ADS)
Raahul, A.; Sapthagiri, R.; Pankaj, K.; Vijayarajan, V.
2017-11-01
Gender identification is one of the major problems in speech analysis today: tracing gender from acoustic data such as pitch, median and frequency. Machine learning gives promising results for classification problems across research domains, and there are several performance metrics with which to evaluate algorithms in an area. We present a comparative model for evaluating five different machine learning algorithms, based on eight different metrics, for gender classification from acoustic data. The aim is to identify gender using five different algorithms: Linear Discriminant Analysis (LDA), K-Nearest Neighbour (KNN), Classification and Regression Trees (CART), Random Forest (RF), and Support Vector Machine (SVM), on the basis of eight different metrics. The main criterion in evaluating any algorithm is its performance; the misclassification rate must be low in classification problems, which means that the accuracy rate must be high. Location and gender of the person have become very crucial in economic markets in the form of AdSense. With this comparative model, we assess the different ML algorithms and find the best fit for gender classification of acoustic data.
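A minimal sketch of the five-algorithm comparison follows, using a public stand-in dataset instead of the acoustic features and two of the eight metrics for brevity.

```python
# Hedged sketch: the paper's five classifiers compared by cross-validated
# metrics; load_breast_cancer is a placeholder for the voice dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)   # stand-in for acoustic data
models = {
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(),
    "CART": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
}
for name, clf in models.items():
    cv = cross_validate(clf, X, y, cv=10, scoring=("accuracy", "f1"))
    print(f"{name}: acc={cv['test_accuracy'].mean():.3f} "
          f"f1={cv['test_f1'].mean():.3f}")
```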
A Support Vector Machine-Based Gender Identification Using Speech Signal
NASA Astrophysics Data System (ADS)
Lee, Kye-Hwan; Kang, Sang-Ick; Kim, Deok-Hwan; Chang, Joon-Hyuk
We propose an effective voice-based gender identification method using a support vector machine (SVM). The SVM is a binary classification algorithm that separates two groups by finding a nonlinear boundary in a feature space, and is known to yield high classification performance. In the present work, we compare the identification performance of the SVM with that of a Gaussian mixture model (GMM)-based method using mel frequency cepstral coefficients (MFCCs). A novel approach incorporating a feature fusion scheme, based on a combination of the MFCCs and the fundamental frequency, is proposed with the aim of improving the performance of gender identification. Experimental results demonstrate that the gender identification performance using the SVM is significantly better than that of the GMM-based scheme. Moreover, the performance is substantially improved when the proposed feature fusion technique is applied.
Hussain, Lal; Ahmed, Adeel; Saeed, Sharjil; Rathore, Saima; Awan, Imtiaz Ahmed; Shah, Saeed Arif; Majid, Abdul; Idris, Adnan; Awan, Anees Ahmed
2018-02-06
Prostate cancer is the second leading cause of cancer deaths among men. Early detection can effectively reduce the rate of mortality caused by prostate cancer. The high resolution and multiresolution nature of prostate MRIs require proper diagnostic systems and tools. In the past, researchers developed computer-aided diagnosis (CAD) systems that help the radiologist detect abnormalities. In this research paper, we employ machine learning techniques, namely a Bayesian approach, support vector machine (SVM) kernels (polynomial, radial basis function (RBF) and Gaussian) and a decision tree, for detecting prostate cancer. Moreover, different feature extraction strategies are proposed to improve the detection performance, based on texture, morphological, scale invariant feature transform (SIFT), and elliptic Fourier descriptor (EFD) features. The performance was evaluated on single features as well as combinations of features using machine learning classification techniques. Cross-validation (jack-knife k-fold) was performed, and performance was evaluated in terms of the receiver operating curve (ROC), specificity, sensitivity, positive predictive value (PPV), negative predictive value (NPV) and false positive rate (FPR). Based on single-feature extraction strategies, the SVM Gaussian kernel gives the highest accuracy of 98.34% with an AUC of 0.999, while using combinations of feature extraction strategies, the SVM Gaussian kernel with texture + morphological and EFDs + morphological features gives the highest accuracy of 99.71% and an AUC of 1.00.
Experimental Investigation – Magnetic Assisted Electro Discharge Machining
NASA Astrophysics Data System (ADS)
Kesava Reddy, Chirra; Manzoor Hussain, M.; Satyanarayana, S.; Krishna, M. V. S. Murali
2018-04-01
Emerging technology needs advanced machined parts with high strength and temperature resistance and high fatigue life, at low production cost and with good surface quality, to fit various industrial applications. The electro discharge machine is one of the machines used extensively to manufacture, with high precision and accuracy, advanced parts that cannot be machined by other traditional machines. Machining of DIN 17350-1.2080 (high carbon, high chromium steel) using electro discharge machining is discussed in this paper. In the present investigation an effort is made to place a permanent magnet at various positions near the spark zone to improve the quality of the machined surface. Taguchi methodology is used to obtain the optimal choice for each machining parameter, such as peak current, pulse duration, gap voltage and servo reference voltage. Process parameters have a significant influence on machining characteristics and surface finish, and an improvement in surface finish is observed when process parameters are set at the optimum condition under the influence of the magnetic field at various positions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... performance test of one representative magnet wire coating machine for each group of identical or very similar... you complete the performance test of a representative magnet wire coating machine. The requirements in... operations, you may, with approval, conduct a performance test of a single magnet wire coating machine that...
2017-02-01
DARPA Robotics Challenge (DRC): using human-machine teamwork to perform disaster response with a humanoid robot. Work performed by the Florida Institute for Human and Machine Cognition (IHMC) from 2012-2016 through three phases of the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge.
Method and apparatus for characterizing and enhancing the dynamic performance of machine tools
Barkman, William E; Babelay, Jr., Edwin F
2013-12-17
Disclosed are various systems and methods for assessing and improving the capability of a machine tool. The disclosure applies to machine tools having at least one slide configured to move along a motion axis. Various patterns of dynamic excitation commands are employed to drive the one or more slides, typically involving repetitive short distance displacements. A quantification of a measurable merit of machine tool response to the one or more patterns of dynamic excitation commands is typically derived for the machine tool. Examples of measurable merits of machine tool performance include dynamic one axis positional accuracy of the machine tool, dynamic cross-axis stability of the machine tool, and dynamic multi-axis positional accuracy of the machine tool.
DOE Office of Scientific and Technical Information (OSTI.GOV)
V.T. Krivoshein; A.V. Makarov
The sequence of pushing coke ovens is one of the most important aspects of battery operation. The sequence must satisfy a number of technical and process conditions: (1) achieve maximum heating-wall life by avoiding destructive expansion pressure in freshly charged ovens and during pushing of the finished coke; (2) ensure uniform brickwork temperature and prevent overheating by compensating for the high thermal flux in freshly charged ovens due to accumulated heat in adjacent ovens that are in the second half of the coking cycle; (3) ensure the most favorable working conditions and safety for operating personnel; (4) provide additional opportunities for repair personnel to perform various types of work, such as replacing coke-machine rails, without interrupting coal production; (5) perform the maximum number of coke-machine operations simultaneously: pushing, charging, and cleaning doors, frames, and standpipe elbows; and (6) reduce electricity consumption by minimizing idle travel of coke machines.
Machine learning strategies for systems with invariance properties
NASA Astrophysics Data System (ADS)
Ling, Julia; Jones, Reese; Templeton, Jeremy
2016-08-01
In many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds Averaged Navier Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high performance computing has led to a growing availability of high fidelity simulation data. These data open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these empirical models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance at significantly reduced computational training costs.
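The sketch below reproduces the paper's two strategies on a toy rotation-invariant target: training on an invariant input basis (the radius) versus training on raw coordinates augmented with rotated copies. The task and model are illustrative stand-ins for the turbulence and elasticity case studies.

```python
# Hedged sketch: embed invariance in the inputs vs. learn it from
# augmented data, for a label that depends only on the radius (and is
# therefore invariant to rotations of the inputs).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
y = (np.linalg.norm(X, axis=1) > 1.2).astype(int)   # rotation-invariant label

def rotate(X, theta):
    c, s = np.cos(theta), np.sin(theta)
    return X @ np.array([[c, -s], [s, c]])

Xte = rotate(X, 0.7)                # test under a rotation of the inputs

# Method 1: invariant input basis (radius), invariance built in.
m1 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
m1.fit(np.linalg.norm(X, axis=1, keepdims=True), y)
acc1 = m1.score(np.linalg.norm(Xte, axis=1, keepdims=True), y)

# Method 2: raw inputs plus rotated copies, invariance learned from data.
Xaug = np.vstack([rotate(X, t) for t in rng.uniform(0, 2 * np.pi, 8)])
m2 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
m2.fit(Xaug, np.tile(y, 8))
acc2 = m2.score(Xte, y)
print("invariant basis:", acc1, " augmentation:", acc2)
```

The invariant-basis model trains on an eighth of the data volume of the augmented one, echoing the paper's observation about reduced training cost.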
Machine learning strategies for systems with invariance properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ling, Julia; Jones, Reese E.; Templeton, Jeremy Alan
Here, in many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds-Averaged Navier-Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high-performance computing has led to a growing availability of high-fidelity simulation data, which open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance with significantly reduced computational training costs.
Machine learning strategies for systems with invariance properties
Ling, Julia; Jones, Reese E.; Templeton, Jeremy Alan
2016-05-06
Here, in many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds-Averaged Navier-Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high-performance computing has led to a growing availability of high-fidelity simulation data, which open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance with significantly reduced computational training costs.
A superconducting homopolar motor and generator—new approaches
NASA Astrophysics Data System (ADS)
Fuger, Rene; Matsekh, Arkadiy; Kells, John; Sercombe, D. B. T.; Guina, Ante
2016-03-01
Homopolar machines were the first continuously running electromechanical converters ever demonstrated but engineering challenges and the rapid development of AC technology prevented wider commercialisation. Recent developments in superconducting, cryogenic and sliding contact technology together with new areas of application have led to a renewed interest in homopolar machines. Some of the advantages of these machines are ripple free constant torque, pure DC operation, high power-to-weight ratio and that rotating magnets or coils are not required. In this paper we present our unique approach to high power and high torque homopolar electromagnetic turbines using specially designed high field superconducting magnets and liquid metal current collectors. The unique arrangement of the superconducting coils delivers a high static drive field as well as effective shielding for the field critical sliding contacts. The novel use of additional shielding coils reduces weight and stray field of the system. Liquid metal current collectors deliver a low resistance, stable and low maintenance sliding contact by using a thin liquid metal layer that fills a circular channel formed by the moving edge of a rotor and surrounded by a conforming stationary channel of the stator. Both technologies are critical to constructing high performance machines. Homopolar machines are pure DC devices that utilise only DC electric and magnetic fields and have no AC losses in the coils or the supporting structure. Guina Energy Technologies has developed, built and tested different motor and generator concepts over the last few years and has combined its experience to develop a new generation of homopolar electromagnetic turbines. This paper summarises the development process, general design parameters and first test results of our high temperature superconducting test motor.
Application of Elements of TPM Strategy for Operation Analysis of Mining Machine
NASA Astrophysics Data System (ADS)
Brodny, Jaroslaw; Tutak, Magdalena
2017-12-01
The Total Productive Maintenance (TPM) strategy includes a group of activities and actions intended to keep machines in a failure-free state, without breakdowns, by limiting failures, unplanned shutdowns, defects and unplanned machine service. These actions are arranged to increase the effectiveness of utilization of the devices and machines a company possesses. A very significant element of this strategy is the combination of technical actions with changes in how they are perceived by employees, while the fundamental aim of introducing the strategy is to improve the economic efficiency of the enterprise. Increasing competition and the necessity of reducing production costs mean that mining enterprises, too, are forced to introduce this strategy. The paper presents examples of the use of the OEE model for the quantitative evaluation of selected mining devices. The OEE model is a quantitative tool of the TPM strategy and can be the basis for further work connected with its introduction. The OEE indicator is the product of three components: the availability and performance of the studied machine and the quality of the obtained product. The paper presents the results of an effectiveness analysis of the use of a set of mining machines included in the longwall system, which is the first and most important link in the technological line of coal production. The set of analyzed machines included the longwall shearer, the armored face conveyor and the crusher. From a reliability point of view, the analyzed set of machines is a system characterized by a serial structure. The analysis was based on data recorded by the industrial automation system used in the mines; this method of data acquisition ensured high credibility and full time synchronization. Conclusions from the research and analyses should be used to reduce breakdowns, failures and unplanned downtime, increase performance and improve production quality.
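A hedged sketch of the OEE computation the paper applies (availability × performance × quality) follows; the longwall figures below are invented placeholders, not the study's recorded data.

```python
# Hedged sketch of the OEE indicator: the product of availability,
# performance and quality, computed from shift-level production figures.
def oee(planned_time_min, downtime_min, ideal_cycle_s, units, defective):
    run_time_min = planned_time_min - downtime_min
    availability = run_time_min / planned_time_min
    performance = (ideal_cycle_s * units) / (run_time_min * 60)
    quality = (units - defective) / units
    return availability * performance * quality, (availability, performance, quality)

# Placeholder shift: 480 min planned, 72 min down, 30 s ideal cycle,
# 700 units produced of which 21 are defective.
score, (a, p, q) = oee(planned_time_min=480, downtime_min=72,
                       ideal_cycle_s=30, units=700, defective=21)
print(f"A={a:.2f} P={p:.2f} Q={q:.2f} OEE={score:.2%}")
```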
Verschueren, Sabine M. P.; Degens, Hans; Morse, Christopher I.; Onambélé, Gladys L.
2017-01-01
Accurate monitoring of sedentary behaviour and physical activity is key to investigate their exact role in healthy ageing. To date, accelerometers using cut-off point models are most preferred for this; however, machine learning seems a highly promising future alternative. Hence, the current study compared cut-off point and machine learning algorithms for optimal quantification of sedentary behaviour and physical activity intensities in the elderly. In a heterogeneous sample of forty participants (aged ≥60 years, 50% female), energy expenditure during laboratory-based activities (ranging from sedentary behaviour through to moderate-to-vigorous physical activity) was estimated by indirect calorimetry, whilst wearing triaxial thigh-mounted accelerometers. Three cut-off point algorithms and a Random Forest machine learning model were developed and cross-validated using the collected data. Detailed analyses were performed to check algorithm robustness, and examine and benchmark both overall and participant-specific balanced accuracies. This revealed that the four models can at least be used to confidently monitor sedentary behaviour and moderate-to-vigorous physical activity. Nevertheless, the machine learning algorithm outperformed the cut-off point models by being robust to all individuals' physiological and non-physiological characteristics and showing performance of an acceptable level over the whole range of physical activity intensities. Therefore, we propose that Random Forest machine learning may be optimal for objective assessment of sedentary behaviour and physical activity in older adults using thigh-mounted triaxial accelerometry. PMID:29155839
Wullems, Jorgen A; Verschueren, Sabine M P; Degens, Hans; Morse, Christopher I; Onambélé, Gladys L
2017-01-01
Accurate monitoring of sedentary behaviour and physical activity is key to investigate their exact role in healthy ageing. To date, accelerometers using cut-off point models are most preferred for this; however, machine learning seems a highly promising future alternative. Hence, the current study compared cut-off point and machine learning algorithms for optimal quantification of sedentary behaviour and physical activity intensities in the elderly. In a heterogeneous sample of forty participants (aged ≥60 years, 50% female), energy expenditure during laboratory-based activities (ranging from sedentary behaviour through to moderate-to-vigorous physical activity) was estimated by indirect calorimetry, whilst wearing triaxial thigh-mounted accelerometers. Three cut-off point algorithms and a Random Forest machine learning model were developed and cross-validated using the collected data. Detailed analyses were performed to check algorithm robustness, and examine and benchmark both overall and participant-specific balanced accuracies. This revealed that the four models can at least be used to confidently monitor sedentary behaviour and moderate-to-vigorous physical activity. Nevertheless, the machine learning algorithm outperformed the cut-off point models by being robust to all individuals' physiological and non-physiological characteristics and showing performance of an acceptable level over the whole range of physical activity intensities. Therefore, we propose that Random Forest machine learning may be optimal for objective assessment of sedentary behaviour and physical activity in older adults using thigh-mounted triaxial accelerometry.
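The sketch below contrasts a naive cut-off point rule with a Random Forest on synthetic windowed accelerometer features; the feature construction, labels and balanced-accuracy comparison are invented stand-ins for the thigh-worn recordings and calorimetry-derived intensities.

```python
# Hedged sketch: cut-off point rule vs. Random Forest for activity
# intensity classification on synthetic accelerometer-like features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
intensity = rng.integers(0, 3, size=n)          # 0=sedentary, 1=light, 2=MVPA
counts = rng.gamma(shape=2 + 3 * intensity, scale=50, size=n)
feats = np.column_stack([counts,
                         counts * rng.uniform(0.8, 1.2, n),   # per-axis variant
                         rng.normal(size=n)])                 # posture-like feature

Xtr, Xte, ytr, yte = train_test_split(feats, intensity, random_state=0)

cutpoints = np.quantile(Xtr[:, 0], [1 / 3, 2 / 3])  # naive cut-off point model
rule = np.digitize(Xte[:, 0], cutpoints)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)

print("cut-off balanced acc:",
      round(balanced_accuracy_score(yte, rule), 3))
print("random forest balanced acc:",
      round(balanced_accuracy_score(yte, rf.predict(Xte)), 3))
```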
The Development of a Small High Speed Steam Microturbine Generator System
NASA Astrophysics Data System (ADS)
Alford, Adrian; Nichol, Philip; Frisby, Ben
2015-08-01
The efficient use of energy is paramount in every kind of business today. Steam is a widely used energy source. In many situations steam is generated at high pressures and then reduced in pressure through control valves before reaching point of use. An opportunity was identified to convert some of the energy at the point of pressure reduction into electricity. This can be accomplished using steam turbines driving alternators on large scale systems. To take advantage of a market identified for small scale systems, a microturbine generator was designed based on a small high speed turbo machine. This gave rise to a number of challenges which are described with the solutions adopted. The challenges included aerodynamic design of high efficiency impellers, sealing of a high speed shaft, thrust control and material selection to avoid steam erosion. The machine was packaged with a sophisticated control system to allow connection to the electricity grid. Some of the challenges in packaging the machine are also described. The Spirax Sarco TurboPower has now concluded performance and initial endurance tests which are described with a summary of the results.
Aungkulanon, Pasura; Luangpaiboon, Pongchanun
2016-01-01
Response surface methods based on first- or second-order models are important in manufacturing processes. This study, however, proposes differently structured mechanisms of vertical transportation systems (VTS) embedded in a shuffled frog leaping-based approach. Three VTS scenarios are considered: a motion reaching a normal operating velocity, and motions that either reach or do not reach the transitional phase. These variants were applied to simultaneously inspect multiple responses affected by machining parameters in multi-pass turning processes. The numerical results of two machining optimisation problems demonstrated the high performance measures of the proposed methods when compared to other optimisation algorithms for an actual deep-cut design.
NASA Astrophysics Data System (ADS)
Mohan, Dhanya; Kumar, C. Santhosh
2016-03-01
Predicting the physiological condition (normal/abnormal) of a patient is highly desirable for enhancing the quality of health care. Multi-parameter patient monitors (MPMs) using heart rate, arterial blood pressure, respiration rate and oxygen saturation (SpO2) as input parameters were developed to monitor the condition of patients with minimum human resource utilization. The support vector machine (SVM), an advanced machine learning approach popularly used for classification and regression, is used for the realization of MPMs. To make MPMs cost effective, we experiment with a hardware implementation of the MPM using a support vector machine classifier. The training of the system is done in the Matlab environment and the detection of the alarm/no-alarm condition is implemented in hardware. We used different kernels for SVM classification and note that the best performance was obtained using the intersection kernel SVM (IKSVM). The intersection kernel support vector machine MPM outperformed the best known MPM, based on a radial basis function kernel, by an absolute improvement of 2.74% in accuracy, 1.86% in sensitivity and 3.01% in specificity. The hardware model was developed from the improved-performance system using the Verilog hardware description language and was implemented on an Altera Cyclone-II development board.
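A minimal sketch of an intersection-kernel SVM via scikit-learn's precomputed-kernel interface follows, with K(x, z) equal to the sum over features of min(x_i, z_i) on nonnegative inputs. The placeholder vitals and labels are invented, and this shows only the software-side classifier, not the Verilog implementation.

```python
# Hedged sketch: intersection (histogram) kernel SVM through sklearn's
# precomputed-kernel path; rows of the test Gram matrix index test
# samples, columns index training samples.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def intersection_kernel(A, B):
    # Gram matrix of pairwise sums of elementwise minima.
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(400, 4))     # e.g. normalized HR, ABP, RR, SpO2
y = (X @ np.array([0.8, -0.6, 0.5, -0.9]) > 0).astype(int)  # placeholder labels

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = SVC(kernel="precomputed").fit(intersection_kernel(Xtr, Xtr), ytr)
acc = clf.score(intersection_kernel(Xte, Xtr), yte)
print("IKSVM accuracy:", round(acc, 3))
```

The intersection kernel is attractive for hardware because evaluating it needs only comparisons and additions, no exponentials, which is consistent with the record's motivation for choosing it.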
The 19 mm data recorders: Similarities and differences
NASA Technical Reports Server (NTRS)
Atkinson, Steve
1991-01-01
Confusion over the use of non-video 19 mm data recorders is becoming more pronounced in the world of high performance computing. The following issues are addressed: (1) the difference between ID-1, ID-2, MIL-STD-2179, and DD-2; (2) the proper machine for the necessary application; and (3) integrating the machine into an existing environment. Also, an attempt is made to clear up any misconceptions there might be about 19 mm tape recorders.
1988-05-01
The use of liquid metals for current collectors in homopolar motors and generators has led to the design of machines of superior performance. The steady... In some applications of homopolar generators it becomes necessary not only to start and stop the machines but also to operate them under oscillating conditions. This could be the case in an application where a homopolar generator behaves as an extremely high energy capacitor. Therefore, one is...
Feasibility of Virtual Machine and Cloud Computing Technologies for High Performance Computing
2014-05-01
Red Hat Enterprise Linux; SaaS, software as a service; VM, virtual machine; vNUMA, virtual non-uniform memory access; WRF, weather research and forecasting... previously mentioned in Chapter I Section B1 of this paper, which is used to run the weather research and forecasting (WRF) model in their experiments... against a VMware virtualization solution of WRF. The experiment consisted of running WRF in a standard configuration between the D-VTM and VMware while
Cedar-a large scale multiprocessor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gajski, D.; Kuck, D.; Lawrie, D.
1983-01-01
This paper presents an overview of Cedar, a large scale multiprocessor being designed at the University of Illinois. This machine is designed to accommodate several thousand high performance processors which are capable of working together on a single job, or they can be partitioned into groups of processors where each group of one or more processors can work on separate jobs. Various aspects of the machine are described including the control methodology, communication network, optimizing compiler and plans for construction. 13 references.
Performance of Ti-multilayer coated tool during machining of MDN431 alloyed steel
NASA Astrophysics Data System (ADS)
Badiger, Pradeep V.; Desai, Vijay; Ramesh, M. R.
2018-04-01
Turbine forgings and other components are required to have high resistance to corrosion and oxidation, which is why they are highly alloyed with Ni and Cr. Midhani manufactures one such material, MDN431, a hard-to-machine steel with high hardness and strength. PVD-coated inserts provide an answer to this problem with their state-of-the-art coating technique on the WC tool. Machinability studies were carried out on MDN431 steel using uncoated and Ti-multilayer-coated WC tool inserts with the Taguchi optimisation technique. In the present investigation, speed (398-625 rpm), feed (0.093-0.175 mm/rev) and depth of cut (0.2-0.4 mm) were varied according to a Taguchi L9 orthogonal array, and cutting forces and surface roughness (Ra) were subsequently measured. The obtained results were optimized for cutting forces and surface roughness using the Taguchi technique, and a linear-fit-model regression analysis was carried out for the combination of each input variable. Experimental results were compared, and the developed model was found adequate, as supported by proof trials. For the uncoated insert, cutting force and surface roughness depend linearly on speed, feed and depth of cut, whereas for the coated insert they depend inversely on speed and depth of cut. The machined surface produced by the coated and uncoated inserts during machining of MDN431 was studied using an optical profilometer.
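For readers unfamiliar with the L9 design used here, a minimal sketch follows: the standard L9 orthogonal array maps three factors at three levels onto nine runs, and a smaller-is-better signal-to-noise ratio ranks the factor levels. The force readings below are hypothetical placeholders, not measurements from the study.

```python
# Sketch: a Taguchi L9(3^3) design with smaller-is-better S/N ratios.
import numpy as np

# Standard L9 orthogonal array (three factors at three levels, coded 0..2).
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])

speed = [398, 512, 625]        # rpm
feed = [0.093, 0.134, 0.175]   # mm/rev
doc = [0.2, 0.3, 0.4]          # mm
runs = [(speed[a], feed[b], doc[c]) for a, b, c in L9]

# Hypothetical measured cutting forces for the nine runs (N).
Fc = np.array([180, 210, 260, 200, 255, 190, 270, 205, 230], dtype=float)

# Smaller-is-better signal-to-noise ratio: SN = -10 log10(mean(y^2)).
SN = -10 * np.log10(Fc ** 2)

# Average S/N per level of each factor; the best level maximises S/N.
for j, name in enumerate(["speed", "feed", "depth of cut"]):
    means = [SN[L9[:, j] == lvl].mean() for lvl in range(3)]
    print(name, "best level index:", int(np.argmax(means)))
```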
Body-Machine Interfaces after Spinal Cord Injury: Rehabilitation and Brain Plasticity.
Seáñez-González, Ismael; Pierella, Camilla; Farshchiansadegh, Ali; Thorp, Elias B; Wang, Xue; Parrish, Todd; Mussa-Ivaldi, Ferdinando A
2016-12-19
The purpose of this study was to identify rehabilitative effects and changes in white matter microstructure in people with high-level spinal cord injury following bilateral upper-extremity motor skill training. Five subjects with high-level (C5-C6) spinal cord injury (SCI) performed five visuo-spatial motor training tasks over 12 sessions (2-3 sessions per week). Subjects controlled a two-dimensional cursor with bilateral simultaneous movements of the shoulders using a non-invasive inertial measurement unit-based body-machine interface. Subjects' upper-body ability was evaluated before the start, in the middle and a day after the completion of training. MR imaging data were acquired before the start and within two days of the completion of training. Subjects learned to use upper-body movements that survived the injury to control the body-machine interface and improved their performance with practice. Motor training increased Manual Muscle Test scores and the isometric force of subjects' shoulders and upper arms. Moreover, motor training increased fractional anisotropy (FA) values in the cingulum of the left hemisphere by 6.02% on average, indicating localized white matter microstructure changes induced by activity-dependent modulation of axon diameter, myelin thickness or axon number. This body-machine interface may serve as a platform to develop a new generation of assistive-rehabilitative devices that promote the use of, and that re-strengthen, the motor and sensory functions that survived the injury.
Tighe, Patrick J; Lucas, Stephen D; Edwards, David A; Boezaart, André P; Aytug, Haldun; Bihorac, Azra
2012-10-01
The purpose of this project was to determine whether machine-learning classifiers could predict which patients would require a preoperative acute pain service (APS) consultation. Retrospective cohort. University teaching hospital. The records of 9,860 surgical patients posted between January 1 and June 30, 2010 were reviewed. Request for APS consultation. A cohort of machine-learning classifiers was compared according to its ability or inability to classify surgical cases as requiring a request for a preoperative APS consultation. Classifiers were then optimized utilizing ensemble techniques. Computational efficiency was measured with the central processing unit processing times required for model training. Classifiers were tested using the full feature set, as well as the reduced feature set that was optimized using a merit-based dimensional reduction strategy. Machine-learning classifiers correctly predicted preoperative requests for APS consultations in 92.3% (95% confidence intervals [CI], 91.8-92.8) of all surgical cases. Bayesian methods yielded the highest area under the receiver operating curve (0.87, 95% CI 0.84-0.89) and lowest training times (0.0018 seconds, 95% CI, 0.0017-0.0019 for the NaiveBayesUpdateable algorithm). An ensemble of high-performing machine-learning classifiers did not yield a higher area under the receiver operating curve than its component classifiers. Dimensional reduction decreased the computational requirements for multiple classifiers, but did not adversely affect classification performance. Using historical data, machine-learning classifiers can predict which surgical cases should prompt a preoperative request for an APS consultation. Dimensional reduction improved computational efficiency and preserved predictive performance. Wiley Periodicals, Inc.
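The NaiveBayesUpdateable algorithm named above is Weka's incrementally trainable naive Bayes. As a rough sketch of the same idea, scikit-learn's GaussianNB can be fit batch by batch via partial_fit; the feature matrix and consult labels below are random stand-ins, not the study's records.

```python
# Sketch: an incrementally trainable naive Bayes classifier (updateable-style).
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X = rng.normal(size=(9860, 20))                  # stand-in case features
y = (rng.uniform(size=9860) < 0.1).astype(int)   # stand-in APS-consult labels

clf = GaussianNB()
for start in range(0, len(X), 1000):             # update the model batch by batch
    sl = slice(start, start + 1000)
    clf.partial_fit(X[sl], y[sl], classes=[0, 1])

print(clf.predict_proba(X[:5])[:, 1])            # P(consult requested)
```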
Ließ, Mareike; Schmidt, Johannes; Glaser, Bruno
2016-01-01
Tropical forests are significant carbon sinks and their soils' carbon storage potential is immense. However, little is known about the soil organic carbon (SOC) stocks of tropical mountain areas, whose complex soil-landscape and difficult accessibility pose a challenge to spatial analysis. The choice of methodology for spatial prediction is highly important for improving the expected poor model results in cases of low predictor-response correlations. Four aspects were considered to improve model performance in predicting SOC stocks of the organic layer of a tropical mountain forest landscape: different spatial predictor settings, predictor selection strategies, various machine learning algorithms and model tuning. Five machine learning algorithms (random forests, artificial neural networks, multivariate adaptive regression splines, boosted regression trees and support vector machines) were trained and tuned to predict SOC stocks from predictors derived from a digital elevation model and satellite image. Topographical predictors were calculated with a GIS search radius of 45 to 615 m. Finally, three predictor selection strategies were applied to the total set of 236 predictors. All machine learning algorithms, including the model tuning and predictor selection, were compared via five repetitions of a tenfold cross-validation. The boosted regression tree algorithm resulted in the overall best model. SOC stocks ranged from 0.2 to 17.7 kg m-2, displaying huge variability, with diffuse insolation and curvatures of different scale guiding the spatial pattern. Predictor selection and model tuning improved predictive performance in all five machine learning algorithms. The rather low number of selected predictors favours forward over backward selection procedures. Selecting predictors based on their individual performance was outperformed by the two procedures that accounted for predictor interaction.
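A minimal sketch of the comparison protocol (five repetitions of tenfold cross-validation across several learners) might look as follows in scikit-learn, with random data standing in for the terrain and satellite predictors and R^2 standing in for the study's skill measures.

```python
# Sketch: comparing regressors via 5x repeated tenfold cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 30))            # stand-in terrain/satellite predictors
y = 2 * X[:, 0] + rng.normal(size=300)    # stand-in SOC stocks

cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)
models = {
    "random forest": RandomForestRegressor(random_state=0),
    "boosted trees": GradientBoostingRegressor(random_state=0),
    "SVM": SVR(),
    "neural net": MLPRegressor(max_iter=2000, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```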
Investigation of a less rare-earth permanent-magnet machine with the consequent pole rotor
NASA Astrophysics Data System (ADS)
Bai, Jingang; Liu, Jiaqi; Wang, Mingqiao; Zheng, Ping; Liu, Yong; Gao, Haibo; Xiao, Lijun
2018-05-01
Due to the rising price of rare-earth materials, there is a trend toward reducing the use of rare-earth materials in permanent-magnet (PM) machines across different applications. Since iron-core poles replace half of the PM poles in the consequent pole (CP) rotor, the PM machine with a CP rotor is a promising candidate for a less rare-earth PM machine. Additionally, the investigation of the CP rotor in special electrical machines, such as the hybrid excitation PM machine, the bearingless motor, etc., has verified the application feasibility of the CP rotor. Therefore, this paper focuses on the design and performance of PM machines when the traditional PM machine uses the CP rotor. In the CP rotor, all the PMs are of the same polarity and are inserted into the rotor core. Since the fundamental PM flux density depends on the ratio of PM pole to iron-core pole, the combination rule between them is investigated by analytical and finite-element methods. On this basis, to comprehensively analyze and evaluate the PM machine with a CP rotor, four typical schemes, i.e., integer-slot machines with the CP rotor and with a surface-mounted PM (SPM) rotor, and fractional-slot machines with the CP rotor and with an SPM rotor, are designed to investigate the performance of the PM machine with a CP rotor, including electromagnetic performance, anti-demagnetization capacity and cost.
Ethoscopes: An open platform for high-throughput ethomics
Geissmann, Quentin; Garcia Rodriguez, Luis; Beckwith, Esteban J.; French, Alice S.; Jamasb, Arian R.
2017-01-01
Here, we present the use of ethoscopes, which are machines for high-throughput analysis of behavior in Drosophila and other animals. Ethoscopes provide a software and hardware solution that is reproducible and easily scalable. They perform, in real-time, tracking and profiling of behavior by using a supervised machine learning algorithm, are able to deliver behaviorally triggered stimuli to flies in a feedback-loop mode, and are highly customizable and open source. Ethoscopes can be built easily by using 3D printing technology and rely on Raspberry Pi microcomputers and Arduino boards to provide affordable and flexible hardware. All software and construction specifications are available at http://lab.gilest.ro/ethoscope. PMID:29049280
Automation and robotics human performance
NASA Technical Reports Server (NTRS)
Mah, Robert W.
1990-01-01
The scope of this report is limited to the following: (1) assessing the feasibility of the assumptions for crew productivity during the intra-vehicular activities and extra-vehicular activities; (2) estimating the appropriate level of automation and robotics to accomplish balanced man-machine, cost-effective operations in space; (3) identifying areas where conceptually different approaches to the use of people and machines can leverage the benefits of the scenarios; and (4) recommending modifications to scenarios or developing new scenarios that will improve the expected benefits. The FY89 special assessments are grouped into the five categories shown in the report. The high level system analyses for Automation & Robotics (A&R) and Human Performance (HP) were performed under the Case Studies Technology Assessment category, whereas the detailed analyses for the critical systems and high leverage development areas were performed under the appropriate operations categories (In-Space Vehicle Operations or Planetary Surface Operations). The analysis activities planned for the Science Operations technology areas were deferred to FY90 studies. The remaining activities such as analytic tool development, graphics/video demonstrations and intelligent communicating systems software architecture were performed under the Simulation & Validations category.
The Cooling and Lubrication Performance of Graphene Platelets in Micro-Machining Environments
NASA Astrophysics Data System (ADS)
Chu, Bryan
The research presented in this thesis is aimed at investigating the use of graphene platelets (GPL) to address the challenges of excessive tool wear, reduced part quality, and high specific power consumption encountered in micro-machining processes. There are two viable methods of introducing GPL into micro-machining environments, viz., the embedded delivery method, where the platelets are embedded into the part being machined, and the external delivery method, where graphene is carried into the cutting zone by jetting or atomizing a carrier fluid. The study involving the embedded delivery method is focused on the micro-machining performance of hierarchical graphene composites. The results of this study show that the presence of graphene in the epoxy matrix improves the machinability of the composite. In general, the tool wear, cutting forces, surface roughness, and extent of delamination are all seen to be lower for the hierarchical composite when compared to the conventional two-phase glass fiber composite. These improvements are attributed to the fact that graphene platelets improve the thermal conductivity of the matrix, provide lubrication at the tool-chip interface and also improve the interface strength between the glass fibers and the matrix. The benefits of graphene are seen to also carry over to the external delivery method. The platelets provide improved cooling and lubrication performance to both environmentally-benign cutting fluids as well as to semi-synthetic cutting fluids used in micro-machining. The cutting performance is seen to be a function of the geometry (i.e., lateral size and thickness) and extent of oxygen-functionalization of the platelet. Ultrasonically exfoliated platelets (with 2--3 graphene layers and lowest in-solution characteristic lateral length of 120 nm) appear to be the most favorable for micro-machining applications. Even at the lowest concentration of 0.1 wt%, they are capable of providing a 51% reduction in the cutting temperature and a 25% reduction in the surface roughness value over that of the baseline semi-synthetic cutting fluid. For the thermally-reduced platelets (with 4--8 graphene layers and in-solution characteristic lateral length of 562--2780 nm), a concentration of 0.2 wt% appears to be optimal. An investigation into the impingement dynamics of the graphene-laden colloidal solutions on a heated substrate reveals that the most important criterion dictating their machining performance is their ability to form uniform, submicron thick films of the platelets upon evaporation of the carrier fluid. As such, the characterization of the residual platelet film left behind on a heated substrate may be an effective technique for evaluating different graphene colloidal solutions for cutting fluids applications in micromachining. Graphene platelets have also recently been shown to reduce the aggressive chemical wear of diamond tools during the machining of transition metal alloys. However, the specific mechanisms responsible for this improvement are currently unknown. The modeling work presented in this thesis uses molecular dynamics techniques to shed light on the wear mitigation mechanisms that are active during the diamond cutting of steel when in the presence of graphene platelets. 
The dual mechanisms responsible for graphene-induced chemical wear mitigation are: 1) The formation of a physical barrier between the metal and tool atoms, preventing graphitization; and 2) The preferential transfer of carbon from the graphene platelet rather than from the diamond tool. The results of the simulations also provide new insight into the behavior of the 2D graphene platelets in the cutting zone, specifically illustrating the mechanisms of cleaving and interlayer sliding in graphene platelets under the high pressures in cutting zones.
Rosenkrantz, Andrew B; Doshi, Ankur M; Ginocchio, Luke A; Aphinyanaphongs, Yindalon
2016-12-01
This study aimed to assess the performance of a text classification machine-learning model in predicting highly cited articles within the recent radiological literature and to identify the model's most influential article features. We downloaded from PubMed the title, abstract, and medical subject heading terms for 10,065 articles published in 25 general radiology journals in 2012 and 2013. Three machine-learning models were applied to predict the top 10% of included articles in terms of the number of citations to the article in 2014 (reflecting the 2-year time window in conventional impact factor calculations). The model having the highest area under the curve was selected to derive a list of article features (words) predicting high citation volume, which was iteratively reduced to identify the smallest possible core feature list maintaining predictive power. Overall themes were qualitatively assigned to the core features. The regularized logistic regression (Bayesian binary regression) model had highest performance, achieving an area under the curve of 0.814 in predicting articles in the top 10% of citation volume. We reduced the initial 14,083 features to 210 features that maintain predictivity. These features corresponded with topics relating to various imaging techniques (eg, diffusion-weighted magnetic resonance imaging, hyperpolarized magnetic resonance imaging, dual-energy computed tomography, computed tomography reconstruction algorithms, tomosynthesis, elastography, and computer-aided diagnosis), particular pathologies (prostate cancer; thyroid nodules; hepatic adenoma, hepatocellular carcinoma, non-alcoholic fatty liver disease), and other topics (radiation dose, electroporation, education, general oncology, gadolinium, statistics). Machine learning can be successfully applied to create specific feature-based models for predicting articles likely to achieve high influence within the radiological literature. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
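As a rough sketch of this kind of citation-prediction model: the pipeline below maps title/abstract text to bag-of-words features and ranks the most predictive words. Plain L2-regularized logistic regression stands in for the Bayesian binary regression used in the study, and the two example articles are invented.

```python
# Sketch: text features + regularized logistic regression for top-decile articles.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["dual-energy computed tomography of prostate cancer ...",
         "a historical note on departmental administration ..."]
top10pct = [1, 0]   # 1 = article reached the top decile of 2014 citations

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(C=1.0))
clf.fit(texts, top10pct)

# Words with the largest positive weights approximate the "core features".
vec = clf.named_steps["tfidfvectorizer"]
lr = clf.named_steps["logisticregression"]
order = np.argsort(lr.coef_[0])[::-1][:5]
print([vec.get_feature_names_out()[i] for i in order])
```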
Automatic adjustment of cutting parameters to achieve dynamic stability in machining
NASA Astrophysics Data System (ADS)
Tabet, Ricardo
The principal limitation of high speed machining is the dynamic stability of the cutting action, which can generate premature wear of the machine spindle and the cutting tool, tool breakage and dimensional errors on the machined part. This phenomenon is known in the literature as chatter and is defined as self-excited vibration. This master's thesis presents an approach, applicable to manufacturing environments, that eliminates chatter in real time during machining of aerospace aluminum alloys before its damaging effects can occur. A control algorithm is developed to detect chatter using a microphone and analysis of the audio signal in the frequency domain. The analysis precisely determines the frequency at which chatter occurs, and the spindle speed is then adjusted to make the tooth passing frequency equal to the detected chatter frequency. A new feedrate is also determined, keeping a constant chip load within the physical limits of the cutting tool. The new cutting parameters are then sent to the machine controller as a command through a communication interface between an external computer and the controller. Multiple experimental tests were conducted to validate the effectiveness of chatter detection and suppression. High speed machining tests, between 15 000 and 33 000 RPM, were performed to reflect real conditions for aerospace component manufacturing.
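The core control idea can be sketched compactly: locate the dominant spectral peak in the microphone signal, set the spindle speed so the tooth passing frequency matches it, and rescale the feedrate to hold chip load constant. All signal and parameter values below are hypothetical, not the thesis's implementation.

```python
# Sketch of the chatter-suppression idea described above.
import numpy as np

def adjust_cutting_parameters(audio, fs, n_teeth, chip_load_mm):
    """Return (spindle_rpm, feed_mm_per_min) from one audio frame."""
    spectrum = np.abs(np.fft.rfft(audio * np.hanning(len(audio))))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs)
    f_chatter = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

    # Tooth passing frequency f_t = n_teeth * rpm / 60; set f_t = f_chatter.
    rpm = 60.0 * f_chatter / n_teeth
    feed = chip_load_mm * n_teeth * rpm              # constant chip load
    return rpm, feed

fs = 44100
t = np.arange(4096) / fs
audio = np.sin(2 * np.pi * 1800 * t)                 # fake 1800 Hz chatter tone
print(adjust_cutting_parameters(audio, fs, n_teeth=3, chip_load_mm=0.05))
```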
Prediction of mortality after radical cystectomy for bladder cancer by machine learning techniques.
Wang, Guanjin; Lam, Kin-Man; Deng, Zhaohong; Choi, Kup-Sze
2015-08-01
Bladder cancer is a common cancer in genitourinary malignancy. For muscle invasive bladder cancer, surgical removal of the bladder, i.e. radical cystectomy, is in general the definitive treatment which, unfortunately, carries significant morbidities and mortalities. Accurate prediction of the mortality of radical cystectomy is therefore needed. Statistical methods have conventionally been used for this purpose, despite the complex interactions of high-dimensional medical data. Machine learning has emerged as a promising technique for handling high-dimensional data, with increasing application in clinical decision support, e.g. cancer prediction and prognosis. Its ability to reveal the hidden nonlinear interactions and interpretable rules between dependent and independent variables is favorable for constructing models of effective generalization performance. In this paper, seven machine learning methods are utilized to predict the 5-year mortality of radical cystectomy, including back-propagation neural network (BPN), radial basis function (RBFN), extreme learning machine (ELM), regularized ELM (RELM), support vector machine (SVM), naive Bayes (NB) classifier and k-nearest neighbour (KNN), on a clinicopathological dataset of 117 patients of the urology unit of a hospital in Hong Kong. The experimental results indicate that RELM achieved the highest average prediction accuracy of 0.8 at a fast learning speed. The research findings demonstrate the potential of applying machine learning techniques to support clinical decision making. Copyright © 2015 Elsevier Ltd. All rights reserved.
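The regularized ELM that performed best in the study is easy to sketch: input weights and biases are drawn at random and fixed, and only the output weights are solved in closed form with a ridge penalty. The patient features and outcomes below are random stand-ins, not the study's data.

```python
# Sketch: a regularized extreme learning machine (RELM) for binary prediction.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(117, 12))                   # stand-in patient features
y = (rng.uniform(size=117) < 0.3).astype(float)  # stand-in 5-year mortality

n_hidden, lam = 100, 1e-2
W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights (fixed)
b = rng.normal(size=n_hidden)                    # random biases (fixed)
H = np.tanh(X @ W + b)                           # hidden-layer activations

# Ridge-regularized least squares for the output weights (the "R" in RELM).
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

pred = (np.tanh(X @ W + b) @ beta > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```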
Hardware support for software controlled fast multiplexing of performance counters
Salapura, Valentina; Wisniewski, Robert W
2013-10-01
Performance counters may be operable to collect one or more counts of one or more selected activities, and registers may be operable to store a set of performance counter configurations. A state machine may be operable to automatically select a register from the registers for reconfiguring the one or more performance counters in response to receiving a first signal. The state machine may be further operable to reconfigure the one or more performance counters based on a configuration specified in the selected register. The state machine yet further may be operable to copy data in selected one or more of the performance counters to a memory location, or to copy data from the memory location to the counters, in response to receiving a second signal. The state machine may be operable to store or restore the counter values and state machine configuration in response to a context switch event.
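A software analogue of the described mechanism may make it concrete. The sketch below is illustrative only (the patent describes hardware), and every name and event in it is invented.

```python
# Illustrative state machine: multiplexes counters across stored configurations,
# spills counts to memory on a signal, and saves/restores state on a context switch.
class CounterMultiplexer:
    def __init__(self, config_registers):
        self.configs = config_registers      # set of stored configurations
        self.current = 0                     # selected configuration register
        self.counters = {e: 0 for e in self.configs[0]}
        self.saved = (0, {})

    def count(self, event):                  # a monitored activity occurred
        if event in self.counters:
            self.counters[event] += 1

    def on_first_signal(self):               # reconfigure from the next register
        self.current = (self.current + 1) % len(self.configs)
        self.counters = {e: 0 for e in self.configs[self.current]}

    def on_second_signal(self, memory):      # copy counter data to memory
        memory.update(self.counters)

    def on_context_switch(self, restore=False):
        if restore:
            self.current, self.counters = self.saved
        else:
            self.saved = (self.current, dict(self.counters))

mux = CounterMultiplexer([["cycles", "l1_miss"], ["branch_miss", "tlb_miss"]])
mux.count("cycles")
mux.on_first_signal()                        # now counting the second event set
```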
An experimental investigation on orthogonal cutting of hybrid CFRP/Ti stacks
NASA Astrophysics Data System (ADS)
Xu, Jinyang; El Mansori, Mohamed
2016-10-01
Hybrid CFRP/Ti stack has been widely used in the modern aerospace industry owing to its superior mechanical/physical properties and excellent structural functions. Several applications require mechanical machining of these hybrid composite stacks in order to achieve dimensional accuracy and assembly performance. However, machining of such composite-to-metal alliance is usually an extremely challenging task in the manufacturing sectors due to the disparate natures of each stacked constituent and their respective poor machinability. Special issues may arise from the high force/heat generation, severe subsurface damage and rapid tool wear. To study the fundamental mechanisms controlling the bi-material machining, this paper presented an experimental study on orthogonal cutting of hybrid CFRP/Ti stack by using superior polycrystalline diamond (PCD) tipped tools. The utilized cutting parameters for hybrid CFRP/Ti machining were rigorously adopted through a compromise selection due to the disparate machinability behaviors of the CFRP laminate and Ti alloy. The key cutting responses in terms of cutting force generation, machined surface quality and tool wear mechanism were precisely addressed. The experimental results highlighted the involved five stages of CFRP/Ti cutting and the predominant crater wear and edge fracture failure governing the PCD cutting process.
NASA Astrophysics Data System (ADS)
Zhang, P. P.; Guo, Y.; Wang, B.
2017-05-01
The main problems in milling difficult-to-machine materials are high cutting temperature and rapid tool wear; however, tool wear is difficult to observe directly during machining. Tool wear and cutting chip formation are two of the most important indicators of machining efficiency and quality. The purpose of this paper is to develop a model relating tool wear to cutting chip formation (width of chip and radian of chip) for difficult-to-machine materials, so that tool wear can be monitored through chip formation. A milling experiment on a machining centre with three sets of cutting parameters was performed to obtain chip formation and tool wear data. The experimental results show that tool wear increases gradually as cutting progresses, while the width and radian of the chip decrease. The model is developed by fitting the experimental data and through formula transformations. Most of the tool wear values monitored from chip formation have errors of less than 10%; the smallest error is 0.2%. Overall, the errors from the radian of chip are smaller than those from the width of chip. This provides a new way to monitor and detect tool wear through cutting chip formation when milling difficult-to-machine materials.
NASA Astrophysics Data System (ADS)
Johnson, Kendall B.; Hopkins, Greg
2017-08-01
The Double Arm Linkage precision Linear motion (DALL) carriage has been developed as a simplified, rugged, high performance linear motion stage. Initially conceived as the moving-mirror stage of a Fourier Transform Spectrometer (FTS), it is applicable to any system requiring high performance linear motion. It is based on rigid double arm linkages connecting a base to a moving carriage through flexures. It is a monolithic design: the system, including the flexural elements, is fabricated from one piece of material using high precision machining. The monolithic design has many advantages. There are no joints to slip or creep and there are no CTE (coefficient of thermal expansion) issues. This provides a stable, robust design, both mechanically and thermally, and is expected to provide a wide operating temperature range, including cryogenic temperatures, and high tolerance to vibration and shock. Furthermore, it provides simplicity and ease of implementation, as there is no assembly or alignment of the mechanism; it comes out of the machining operation aligned and there are no adjustments. A prototype has been fabricated and tested, showing superb shear performance and very promising tilt performance, making it applicable to corner-cube and flat-mirror FTS systems respectively.
NASA Technical Reports Server (NTRS)
Pettit, R. G.; Wang, J. J.; Toh, C.
2000-01-01
The continual need to reduce airframe cost and the emergence of high speed machining and other manufacturing technologies has brought about a renewed interest in large-scale integral structures for aircraft applications. Applications have been inhibited, however, because of the need to demonstrate damage tolerance, and by cost and manufacturing risks associated with the size and complexity of the parts. The Integral Airframe Structures (IAS) Program identified a feasible integrally stiffened fuselage concept and evaluated performance and manufacturing cost compared to conventional designs. An integral skin/stiffener concept was produced both by plate hog-out and near-net extrusion. Alloys evaluated included 7050-T7451 plate, 7050-T74511 extrusion, 6013-T6511 extrusion, and 7475-T7351 plate. Mechanical properties, structural details, and joint performance were evaluated as well as repair, static compression, and two-bay crack residual strength panels. Crack turning behavior was characterized through panel tests and improved methods for predicting crack turning were developed. Manufacturing cost was evaluated using COSTRAN. A hybrid design, made from high-speed machined extruded frames that are mechanically fastened to high-speed machined plate skin/stringer panels, was identified as the most cost-effective manufacturing solution. Recurring labor and material costs of the hybrid design are up to 61 percent less than the current technology baseline.
High performance cutting using micro-textured tools and low pressure jet coolant
NASA Astrophysics Data System (ADS)
Obikawa, Toshiyuki; Nakatsukasa, Ryuta; Hayashi, Mamoru; Ohno, Tatsumi
2018-05-01
Tool inserts with different kinds of microtexture on the flank face were fabricated by laser irradiation to promote heat transfer from the tool face to the coolant. In addition to the micro-textured tools, jet coolant was applied to the tool tip from the side of the flank face, but under low-pressure conditions, to make the Reynolds number of the coolant as high as possible in the wedge-shaped zone between the tool flank and the machined surface. First, the effect of jet coolant on flank wear evolution was investigated using a tool without microtexture. The jet coolant showed an excellent improvement in tool life in machining stainless steel SUS304 at higher cutting speeds. It was found that both the flow rate and the velocity of the jet coolant are indispensable to high performance cutting. Next, the effect of microtexture on flank wear evolution was investigated using jet coolant. Three types of micro grooves extended tool life substantially compared to the tool without microtexture, and the depth of the groove was found to be one of the important parameters affecting tool life extension. As a result, tool life was extended by more than 100% using the micro-textured tools and jet coolant, compared to machining using flood coolant and a tool without microtexture.
wACSF—Weighted atom-centered symmetry functions as descriptors in machine learning potentials
NASA Astrophysics Data System (ADS)
Gastegger, M.; Schwiedrzik, L.; Bittermann, M.; Berzsenyi, F.; Marquetand, P.
2018-06-01
We introduce weighted atom-centered symmetry functions (wACSFs) as descriptors of a chemical system's geometry for use in the prediction of chemical properties such as enthalpies or potential energies via machine learning. The wACSFs are based on conventional atom-centered symmetry functions (ACSFs) but overcome the undesirable scaling of the latter with an increasing number of different elements in a chemical system. The performance of these two descriptors is compared using them as inputs in high-dimensional neural network potentials (HDNNPs), employing the molecular structures and associated enthalpies of the 133 855 molecules containing up to five different elements reported in the QM9 database as reference data. A substantially smaller number of wACSFs than ACSFs is needed to obtain a comparable spatial resolution of the molecular structures. At the same time, this smaller set of wACSFs leads to a significantly better generalization performance in the machine learning potential than the large set of conventional ACSFs. Furthermore, we show that the intrinsic parameters of the descriptors can in principle be optimized with a genetic algorithm in a highly automated manner. For the wACSFs employed here, we find however that using a simple empirical parametrization scheme is sufficient in order to obtain HDNNPs with high accuracy.
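A radial wACSF of the kind described is short to write down. In the sketch below, the per-element weight is taken to be the atomic number, one simple choice; the symmetry-function parameters eta, r_s and r_c and the toy geometry are assumptions for illustration.

```python
# Sketch: one radial weighted ACSF for atom i, with element weights collapsing
# the per-element-pair functions of conventional ACSFs into a single sum.
import numpy as np

def cutoff(r, r_c):
    """Cosine cutoff: smooth decay to zero at r_c."""
    return np.where(r < r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

def radial_wacsf(coords, Z, i, eta=0.5, r_s=0.0, r_c=6.0):
    """G_i = sum_{j != i} Z_j * exp(-eta (r_ij - r_s)^2) * f_c(r_ij)."""
    r = np.linalg.norm(coords - coords[i], axis=1)
    mask = np.arange(len(Z)) != i
    return np.sum(Z[mask] * np.exp(-eta * (r[mask] - r_s) ** 2)
                  * cutoff(r[mask], r_c))

# Toy water molecule (coordinates in Angstrom, atomic numbers as weights).
coords = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
Z = np.array([8.0, 1.0, 1.0])
print(radial_wacsf(coords, Z, i=0))
```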
Modeling and Analysis of High Torque Density Transverse Flux Machines for Direct-Drive Applications
NASA Astrophysics Data System (ADS)
Hasan, Iftekhar
Commercially available permanent magnet synchronous machines (PMSM) typically use rare-earth-based permanent magnets (PM). However, volatility and uncertainty associated with the supply and cost of rare-earth magnets have caused a push for increased research into the development of non-rare-earth based PM machines and reluctance machines. Compared to other PMSM topologies, the Transverse Flux Machine (TFM) is a promising candidate to get higher torque densities at low speed for direct-drive applications, using non-rare-earth based PMs. The TFMs can be designed with a very small pole pitch which allows them to attain higher force density than conventional radial flux machines (RFM) and axial flux machines (AFM). This dissertation presents the modeling, electromagnetic design, vibration analysis, and prototype development of a novel non-rare-earth based PM-TFM for a direct-drive wind turbine application. The proposed TFM addresses the issues of low power factor, cogging torque, and torque ripple during the electromagnetic design phase. An improved Magnetic Equivalent Circuit (MEC) based analytical model was developed as an alternative to the time-consuming 3D Finite Element Analysis (FEA) for faster electromagnetic analysis of the TFM. The accuracy and reliability of the MEC model were verified, both with 3D-FEA and experimental results. The improved MEC model was integrated with a Particle Swarm Optimization (PSO) algorithm to further enhance the capability of the analytical tool for performing rigorous optimization of performance-sensitive machine design parameters to extract the highest torque density for rated speed. A novel concept of integrating the rotary transformer within the proposed TFM design was explored to completely eliminate the use of magnets from the TFM. While keeping the same machine envelope, and without changing the stator or rotor cores, the primary and secondary of a rotary transformer were embedded into the double-sided TFM. The proposed structure allowed for improved flux-weakening capabilities of the TFM for wide speed operations. The electromagnetic design feature of stator pole shaping was used to address the issue of cogging torque and torque ripple in 3-phase TFM. The slant-pole tooth-face in the stator showed significant improvements in cogging torque and torque ripple performance during the 3-phase FEA analysis of the TFM. A detailed structural analysis for the proposed TFM was done prior to the prototype development to validate the structural integrity of the TFM design at rated and maximum speed operation. Vibration performance of the TFM was investigated to determine the structural performance of the TFM under resonance. The prototype for the proposed TFM was developed at the Alternative Energy Laboratory of the University of Akron. The working prototype is a testament to the feasibility of developing and implementing the novel TFM design proposed in this research. Experiments were performed to validate the 3D-FEA electromagnetic and vibration performance result.
Fuzzy support vector machine for microarray imbalanced data classification
NASA Astrophysics Data System (ADS)
Ladayya, Faroh; Purnami, Santi Wulan; Irhamah
2017-11-01
DNA microarrays yield gene expression data with small sample sizes and high numbers of features. Furthermore, class imbalance is a common problem in microarray data; it occurs when a dataset is dominated by a class with significantly more instances than the minority classes. A classification method is therefore needed that handles both high dimensional and imbalanced data. The Support Vector Machine (SVM) is one of the classification methods capable of handling large or small samples, nonlinearity, high dimensionality, overfitting and local minima. SVM has been widely applied to DNA microarray data classification, and it has been shown that SVM provides the best performance among machine learning methods. However, imbalanced data remain a problem because SVM treats all samples with the same importance, which biases the results against the minority class. To overcome the imbalanced data, Fuzzy SVM (FSVM) is proposed. This method applies a fuzzy membership to each input point and reformulates the SVM such that different input points make different contributions to the classifier. The minority classes are given large fuzzy memberships so that FSVM pays more attention to the samples with larger fuzzy membership. Since DNA microarray data are high dimensional with a very large number of features, feature selection is first performed using the Fast Correlation Based Filter (FCBF). In this study, SVM and FSVM are analyzed, both with and without FCBF feature selection, and their classification performance is compared. Based on the overall results, FSVM on selected features has the best classification performance compared to SVM.
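An FSVM-style scheme can be approximated with per-sample weights in a standard SVM implementation. The sketch below builds memberships from distance to the class centroid and boosts the minority class; the membership recipe and the random microarray stand-in are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: FSVM-like behavior via per-sample weights in scikit-learn's SVC.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 50))                  # stand-in (selected) expressions
y = (rng.uniform(size=100) < 0.15).astype(int)  # imbalanced labels

weights = np.empty(len(y))
for c in (0, 1):
    idx = np.where(y == c)[0]
    d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
    membership = 1.0 - d / (d.max() + 1e-9)     # closer to centroid -> larger
    weights[idx] = membership * len(y) / (2.0 * len(idx))  # boost minority class

clf = SVC(kernel="rbf").fit(X, y, sample_weight=weights)
```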
Cao, Ran; Pu, Xianjie; Du, Xinyu; Yang, Wei; Wang, Jiaona; Guo, Hengyu; Zhao, Shuyu; Yuan, Zuqing; Zhang, Chi; Li, Congju; Wang, Zhong Lin
2018-05-22
Multifunctional electronic textiles (E-textiles) with embedded electric circuits hold great application prospects for future wearable electronics. However, most E-textiles still have critical challenges, including air permeability, satisfactory washability, and mass fabrication. In this work, we fabricate a washable E-textile that addresses all of the concerns and shows its application as a self-powered triboelectric gesture textile for intelligent human-machine interfacing. Utilizing conductive carbon nanotubes (CNTs) and screen-printing technology, this kind of E-textile embraces high conductivity (0.2 kΩ/sq), high air permeability (88.2 mm/s), and can be manufactured on common fabric at large scales. Due to the advantage of the interaction between the CNTs and the fabrics, the electrode shows excellent stability under harsh mechanical deformation and even after being washed. Moreover, based on a single-electrode mode triboelectric nanogenerator and electrode pattern design, our E-textile exhibits highly sensitive touch/gesture sensing performance and has potential applications for human-machine interfacing.
NASA Astrophysics Data System (ADS)
Fei, Cheng-Wei; Bai, Guang-Chen
2014-12-01
To improve the computational precision and efficiency of probabilistic design for mechanical dynamic assemblies such as the blade-tip radial running clearance (BTRRC) of a gas turbine, a distribution collaborative probabilistic design method based on support vector machine regression (called DCSRM) is proposed by integrating the distribution collaborative response surface method with a support vector machine regression model. The mathematical model of DCSRM is established and the probabilistic design idea of DCSRM is introduced. The dynamic assembly probabilistic design of an aeroengine high-pressure turbine (HPT) BTRRC is accomplished to verify the proposed DCSRM. The analysis results reveal the optimal static blade-tip clearance of the HPT for designing the BTRRC and improving the performance and reliability of the aeroengine. The comparison of methods shows that DCSRM has high computational accuracy and efficiency in BTRRC probabilistic analysis. The present research offers an effective way for the reliability design of mechanical dynamic assemblies and enriches mechanical reliability theory and methods.
CHARACTERIZATION OF Pro-Beam LOW VOLTAGE ELECTRON BEAM WELDING MACHINE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burgardt, Paul; Pierce, Stanley W.
The purpose of this paper is to present and discuss data related to the performance of a newly acquired low voltage electron beam welding machine. The machine was made by Pro-Beam AG & Co. KGaA of Germany and was recently installed at LANL in building SM-39; a companion machine was installed in the production facility. The PB machine is substantially different from the EBW machines typically used at LANL and, therefore, it is important to understand its characteristics as well as possible. Our basic purpose in this paper is to present basic machine performance data and to compare those with similar results from the existing EBW machines. It is hoped that this data will provide a historical record of this machine's characteristics as well as being helpful for transferring welding processes from the old EBW machines to the PB machine or comparable machines that may be purchased in the future.
An iterative learning control method with application for CNC machine tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, D.I.; Kim, S.
1996-01-01
A proportional, integral, and derivative (PID) type iterative learning controller is proposed for precise tracking control of industrial robots and computer numerical controller (CNC) machine tools performing repetitive tasks. The convergence of the output error by the proposed learning controller is guaranteed under a certain condition even when the system parameters are not known exactly and unknown external disturbances exist. As the proposed learning controller is repeatedly applied to the industrial robot or the CNC machine tool with the path-dependent repetitive task, the distance difference between the desired path and the actual tracked or machined path, which is one of the most significant factors in the evaluation of control performance, is progressively reduced. The experimental results demonstrate that the proposed learning controller can improve machining accuracy when the CNC machine tool performs repetitive machining tasks.
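A minimal sketch of a PID-type iterative learning control update may help: after each repetition the stored input signal is corrected from that trial's tracking error, so the path error shrinks from trial to trial. The first-order axis model and the gains below are hypothetical, not the paper's system.

```python
# Sketch: PID-type iterative learning control over repeated trials.
import numpy as np

T, dt = 200, 0.01
y_d = np.sin(np.linspace(0.0, 2.0 * np.pi, T))   # desired (repetitive) path
u = np.zeros(T)                                  # learned input, trial 0
kp, ki, kd = 0.5, 0.05, 0.1

def axis(u):
    """Illustrative axis dynamics: y[t] = 0.9 y[t-1] + 0.1 u[t]."""
    y = np.zeros(T)
    for t in range(T):
        y[t] = 0.9 * (y[t - 1] if t > 0 else 0.0) + 0.1 * u[t]
    return y

for k in range(30):                              # repeated trials
    e = y_d - axis(u)                            # this trial's tracking error
    u += kp * e + ki * dt * np.cumsum(e) + kd * np.diff(e, prepend=0.0)

print("max |path error| after learning:", np.abs(y_d - axis(u)).max())
```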
Optimization of temperature field of tobacco heat shrink machine
NASA Astrophysics Data System (ADS)
Yang, Xudong; Yang, Hai; Sun, Dong; Xu, Mingyang
2018-06-01
In a company's current heat shrink machine, the film does not shrink compactly and the temperature is uneven, resulting in poor surface quality of the shrunk film. To solve this problem, the temperature field is simulated and optimized using the k-epsilon turbulence model and the MRF model in Fluent. The simulation results show that installing a mesh screen structure at the suction inlet of the centrifugal fan increases the suction resistance of the fan and reduces the eddy current intensity caused by the high-speed rotation of the fan, making the internal temperature of the heat shrink machine more uniform.
Developing Lathing Parameters for PBX 9501
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woodrum, Randall Brock
This thesis presents the work performed on lathing PBX 9501 to gather and analyze cutting force and temperature data during the machining process. This data will be used to decrease the federal-regulation-constrained machining time of the high explosive PBX 9501. The effects of the machining parameters depth of cut, surface feet per minute, and inches per revolution on cutting force and cutting-interface temperature were evaluated. Cutting tools of tip radius 0.005 inches and 0.05 inches were tested to determine what effect the tool shape had on the machining process as well. A consistently repeatable relationship of temperature to changing depth of cut and surface feet per minute was found, while only a weak dependence on changing inches per revolution was observed. Results also show the relation of cutting force to depth of cut and inches per revolution, with only a weak dependence on SFM. The conclusions suggest that rapid, shallow cuts optimize machining time for a billet of PBX 9501 while minimizing temperature increase and cutting force.
Nano Mechanical Machining Using AFM Probe
NASA Astrophysics Data System (ADS)
Mostofa, Md. Golam
Complex miniaturized components with high form accuracy will play key roles in the future development of many products, as they provide portability, disposability, lower material consumption in production, low power consumption during operation, lower sample requirements for testing, and higher heat transfer due to their very high surface-to-volume ratio. Given the high market demand for such micro and nano featured components, different manufacturing methods have been developed for their fabrication. Some of the common technologies in micro/nano fabrication are photolithography, electron beam lithography, X-ray lithography and other semiconductor processing techniques. Although these methods are capable of fabricating micro/nano structures with a resolution of less than a few nanometers, some of their shortcomings, such as high production costs for customized products and limited material choices, necessitate the development of other fabrication techniques. Micro/nano mechanical machining, such as atomic force microscope (AFM) probe based nano fabrication, has therefore been used to overcome some of the major restrictions of the traditional processes. This technique removes material from the workpiece by engaging a micro/nano size cutting tool (i.e. the AFM probe) and is applicable to a wider range of materials than the photolithographic process. In spite of the unique benefits of nano mechanical machining, this technique also faces challenges as the scale is reduced, such as size effects, burr formation, chip adhesion, fragility of tools and tool wear. Moreover, AFM based machining does not have any rotational movement, which makes fabrication of 3D features more difficult. Thus, vibration-assisted machining is introduced into AFM probe based nano mechanical machining to overcome the limitations associated with the conventional AFM probe based scratching method. Vibration-assisted machining reduces cutting forces and burr formation through intermittent cutting. Combining AFM probe based machining with vibration-assisted machining enhances nano mechanical machining processes by improving accuracy, productivity and surface finish. In this study, several scratching tests are performed with a single crystal diamond AFM probe to investigate the cutting characteristics and model the ploughing cutting forces. Calibration of the probe for lateral force measurements, which is essential, is also extended through the force balance method. Furthermore, a vibration-assisted machining system is developed and applied to machine different materials to overcome some of the limitations of AFM probe based single point nano mechanical machining. The novelty of this study includes the application of vibration-assisted AFM probe based nano scale machining to fabricate micro/nano scale features, the calibration of an AFM considering different factors, and the investigation of the nano scale material removal process from a different perspective.
Summary of the Optics, IR, Injection, Operations, Reliability and Instrumentation Working Group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wienands, U. (SLAC); Funakoshi, Y.
2012-04-20
The facilities reported on are all in a fairly mature state of operation, as evidenced by the very detailed studies and correction schemes that all groups are working on. First- and higher-order aberrations are diagnosed and planned to be corrected. Very detailed beam measurements are done to get a global picture of the beam dynamics. More than other facilities, the high-luminosity colliders are struggling with experimental background issues, mitigation of which is a permanent challenge. The working group dealt with a very wide range of practical issues which limit performance of the machines and compared their techniques of operations and their performance. We anticipate this to be a first attempt. In a future workshop in this series, we propose to attempt more fundamental comparisons of each machine, including design parameters. For example, DAPHNE and KEKB employ a finite crossing angle. The minimum value of β*_y attainable at KEKB seems to relate to this scheme. The effectiveness of compensation solenoids and turn-by-turn BPMs etc. should be examined in more detail. In the near future, CESR-C and VEPP-2000 will start their operation. We expect to hear important new experiences from these machines; in particular, VEPP-2000 will be the first machine to have adopted round beams. At SLAC and KEK, next generation B Factories are being considered. It will be worthwhile to discuss the design issues of these machines based on the experiences of the existing factory machines.
Application of high-performance computing to numerical simulation of human movement
NASA Technical Reports Server (NTRS)
Anderson, F. C.; Ziegler, J. M.; Pandy, M. G.; Whalen, R. T.
1995-01-01
We have examined the feasibility of using massively-parallel and vector-processing supercomputers to solve large-scale optimization problems for human movement. Specifically, we compared the computational expense of determining the optimal controls for the single support phase of gait using a conventional serial machine (SGI Iris 4D25), a MIMD parallel machine (Intel iPSC/860), and a parallel-vector-processing machine (Cray Y-MP 8/864). With the human body modeled as a 14 degree-of-freedom linkage actuated by 46 musculotendinous units, computation of the optimal controls for gait could take up to 3 months of CPU time on the Iris. Both the Cray and the Intel are able to reduce this time to practical levels. The optimal solution for gait can be found with about 77 hours of CPU on the Cray and with about 88 hours of CPU on the Intel. Although the overall speeds of the Cray and the Intel were found to be similar, the unique capabilities of each machine are better suited to different portions of the computational algorithm used. The Intel was best suited to computing the derivatives of the performance criterion and the constraints whereas the Cray was best suited to parameter optimization of the controls. These results suggest that the ideal computer architecture for solving very large-scale optimal control problems is a hybrid system in which a vector-processing machine is integrated into the communication network of a MIMD parallel machine.
Application of Classification Models to Pharyngeal High-Resolution Manometry
ERIC Educational Resources Information Center
Mielens, Jason D.; Hoffman, Matthew R.; Ciucci, Michelle R.; McCulloch, Timothy M.; Jiang, Jack J.
2012-01-01
Purpose: The authors present 3 methods of performing pattern recognition on spatiotemporal plots produced by pharyngeal high-resolution manometry (HRM). Method: Classification models, including the artificial neural networks (ANNs) multilayer perceptron (MLP) and learning vector quantization (LVQ), as well as support vector machines (SVM), were…
Conditional High-Order Boltzmann Machines for Supervised Relation Learning.
Huang, Yan; Wang, Wei; Wang, Liang; Tan, Tieniu
2017-09-01
Relation learning is a fundamental problem in many vision tasks. Recently, high-order Boltzmann machine and its variants have shown their great potentials in learning various types of data relation in a range of tasks. But most of these models are learned in an unsupervised way, i.e., without using relation class labels, which are not very discriminative for some challenging tasks, e.g., face verification. In this paper, with the goal to perform supervised relation learning, we introduce relation class labels into conventional high-order multiplicative interactions with pairwise input samples, and propose a conditional high-order Boltzmann Machine (CHBM), which can learn to classify the data relation in a binary classification way. To be able to deal with more complex data relation, we develop two improved variants of CHBM: 1) latent CHBM, which jointly performs relation feature learning and classification, by using a set of latent variables to block the pathway from pairwise input samples to output relation labels and 2) gated CHBM, which untangles factors of variation in data relation, by exploiting a set of latent variables to multiplicatively gate the classification of CHBM. To reduce the large number of model parameters generated by the multiplicative interactions, we approximately factorize high-order parameter tensors into multiple matrices. Then, we develop efficient supervised learning algorithms, by first pretraining the models using joint likelihood to provide good parameter initialization, and then finetuning them using conditional likelihood to enhance the discriminant ability. We apply the proposed models to a series of tasks including invariant recognition, face verification, and action similarity labeling. Experimental results demonstrate that by exploiting supervised relation labels, our models can greatly improve the performance.
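The factorization step mentioned above (approximating high-order parameter tensors by matrices) can be sketched briefly: each three-way interaction score reduces to an elementwise product of factor projections. All sizes and weights below are illustrative, not the paper's trained model.

```python
# Sketch: a factorized three-way interaction score over a pair and a relation label.
import numpy as np

rng = np.random.default_rng(5)
dx, dy, n_rel, F = 64, 64, 2, 32          # input sizes, relation labels, factors
Wx, Wy, Wr = (rng.normal(scale=0.1, size=(d, F)) for d in (dx, dy, n_rel))

def relation_scores(x, y):
    """score(r) = sum_f (x @ Wx)_f * (y @ Wy)_f * Wr[r, f]."""
    fx, fy = x @ Wx, y @ Wy               # factor projections of the pair
    return (fx * fy) @ Wr.T               # one score per relation label

x, y = rng.normal(size=dx), rng.normal(size=dy)
scores = relation_scores(x, y)
probs = np.exp(scores - scores.max())
probs /= probs.sum()                      # softmax over relation labels
print(probs)                              # e.g. P(same) vs P(different)
```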
Hello World Deep Learning in Medical Imaging.
Lakhani, Paras; Gray, Daniel L; Pett, Carl R; Nagy, Paul; Shih, George
2018-05-03
There is recent popularity in applying machine learning to medical imaging, notably deep learning, which has achieved state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries to simplify their use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.
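In the spirit of such a tutorial, a minimal "hello world" network might look like the Keras sketch below; the image shape, the binary normal/abnormal task and the random stand-in images are assumptions for illustration, not the tutorial's actual dataset.

```python
# Sketch: a small CNN for a binary medical-image task with the Keras API.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

x = np.random.rand(32, 128, 128, 1).astype("float32")  # stand-in radiographs
y = np.random.randint(0, 2, size=32)                   # stand-in labels

model = keras.Sequential([
    keras.Input(shape=(128, 128, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),             # P(abnormal)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=8)
```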
Altering the near-miss effect in slot machine gamblers.
Dixon, Mark R; Nastally, Becky L; Jackson, James E; Habib, Reza
2009-01-01
This study investigated the potential for recreational gamblers to respond as if certain types of losing slot machine outcomes were actually closer to a win than others (termed the near-miss effect). Exposure to conditional discrimination training and testing disrupted this effect for 10 of the 16 participants. These 10 participants demonstrated high percentages of conditional discrimination testing performance, and the remaining 6 participants failed the discrimination tests. The implications for a verbally based behavioral explanation of gambling are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mhatre, V; Patwe, P; Dandekar, P
Purpose: Quality assurance (QA) of complex linear accelerators is critical and highly time consuming. The ArcCHECK Machine QA tool is used to test geometric and delivery aspects of a linear accelerator. In this study we evaluated the performance of this tool. Methods: The Machine QA feature allows the user to perform quality assurance tests using the ArcCHECK phantom. The following tests were performed: 1) gantry speed, 2) gantry rotation, 3) gantry angle, 4) MLC/collimator QA, and 5) beam profile flatness and symmetry. Data were collected on a TrueBeam STx machine for 6 MV over a period of one year. The gantry QA test allows the user to view errors in gantry angle and rotation, and to assess how accurately the gantry moves around the isocentre. The MLC/collimator QA tool is used to analyze and locate the differences between the leaf bank and jaw positions of the linac. The flatness and symmetry test quantifies beam flatness and symmetry in the IEC-y and x directions. The gantry and flatness/symmetry tests can be performed for static and dynamic delivery. Results: The gantry speed was 3.9 deg/sec with a maximum speed deviation of around 0.3 deg/sec. The gantry isocentre was 0.9 mm for arc delivery and 0.4 mm for static delivery. For MLC/collimator QA, the maximum positive and negative percent differences were 1.9% and -0.25%, and the maximum positive and negative distance differences were 0.4 mm and -0.3 mm. The flatness for arc delivery was 1.8%, and symmetry was 0.8% for Y and 1.8% for X. The flatness for gantry 0°, 270°, 90° and 180° was 1.75%, 1.9%, 1.8% and 1.6% respectively, and symmetry for X and Y was 0.8%, 0.6% at 0°; 0.6%, 0.7% at 270°; 0.6%, 1% at 90°; and 0.6%, 0.7% at 180°. Conclusion: ArcCHECK Machine QA is a useful tool for QA of modern linear accelerators as it tests both geometric and delivery aspects. This is very important for VMAT, SRS and SBRT treatments.
Research on the tool holder mode in high speed machining
NASA Astrophysics Data System (ADS)
Zhenyu, Zhao; Yongquan, Zhou; Houming, Zhou; Xiaomei, Xu; Haibin, Xiao
2018-03-01
High speed machining technology can improve processing efficiency and precision while also reducing processing cost; it is therefore widely valued in industry. With the extensive application of high-speed machining technology, high-speed tool systems place ever higher requirements on the tool chuck. At present, several kinds of chucks are used in high speed precision machining, including the heat shrinkage tool-holder, the high-precision spring chuck, the hydraulic tool-holder, and the three-rib deformation chuck. Among them, the heat shrinkage tool-holder has the advantages of high precision, high clamping force, high bending rigidity and good dynamic balance, and is therefore widely used. It is thus of great significance to study the new requirements placed on the machining tool system. In order to meet the requirements of high speed precision machining technology, this paper reviews the common tool holder technologies for high precision machining and proposes how to correctly select a tool clamping system in practice. The characteristics of, and existing problems with, these tool clamping systems are analyzed.
Sequential Nonlinear Learning for Distributed Multiagent Systems via Extreme Learning Machines.
Vanli, Nuri Denizcan; Sayin, Muhammed O; Delibalta, Ibrahim; Kozat, Suleyman Serdar
2017-03-01
We study online nonlinear learning over distributed multiagent systems, where each agent employs a single hidden layer feedforward neural network (SLFN) structure to sequentially minimize arbitrary loss functions. In particular, each agent trains its own SLFN using only the data that is revealed to itself. On the other hand, the aim of the multiagent system is to train the SLFN at each agent as well as the optimal centralized batch SLFN that has access to all the data, by exchanging information between neighboring agents. We address this problem by introducing a distributed subgradient-based extreme learning machine algorithm. The proposed algorithm provides guaranteed upper bounds on the performance of the SLFN at each agent and shows that each of these individual SLFNs asymptotically achieves the performance of the optimal centralized batch SLFN. Our performance guarantees explicitly distinguish the effects of data- and network-dependent parameters on the convergence rate of the proposed algorithm. The experimental results illustrate that the proposed algorithm achieves the oracle performance significantly faster than the state-of-the-art methods in the machine learning and signal processing literature. Hence, the proposed method is highly appealing for the applications involving big data.
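For readers unfamiliar with the SLFN/ELM building block used here, the sketch below shows a minimal centralized-batch extreme learning machine (fixed random hidden weights, closed-form least-squares output layer); sizes and names are illustrative, and the paper's distributed subgradient exchange is not shown.

```python
# Minimal extreme learning machine sketch: a single hidden layer with fixed
# random weights; only the output weights are solved by least squares.
import numpy as np

def train_elm(X, y, n_hidden=100, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # closed-form output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```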
Exploring the capabilities of support vector machines in detecting silent data corruptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subasi, Omer; Di, Sheng; Bautista-Gomez, Leonardo
As the exascale era approaches, the increasing capacity of high-performance computing (HPC) systems with targeted power and energy budget goals introduces significant challenges in reliability. Silent data corruptions (SDCs), or silent errors, are one of the major sources that corrupt the execution results of HPC applications without being detected. In this paper, we explore a set of novel SDC detectors, leveraging epsilon-insensitive support vector machine regression, to detect SDCs that occur in HPC applications. The key contributions are threefold. (1) Our exploration takes temporal, spatial, and spatiotemporal features into account and analyzes different detectors based on different features. (2) We provide an in-depth study on the detection ability and performance with different parameters, and we optimize the detection range carefully. (3) Experiments with eight real-world HPC applications show that support-vector-machine-based detectors can achieve detection sensitivity (i.e., recall) of up to 99% yet suffer a less than 1% false positive rate in most cases. Our detectors incur low performance overhead, 5% on average, for all benchmarks studied in this work.
Exploring the capabilities of support vector machines in detecting silent data corruptions
Subasi, Omer; Di, Sheng; Bautista-Gomez, Leonardo; ...
2018-02-01
As the exascale era approaches, the increasing capacity of high-performance computing (HPC) systems with targeted power and energy budget goals introduces significant challenges in reliability. Silent data corruptions (SDCs), or silent errors, are one of the major sources that corrupt the execution results of HPC applications without being detected. In this paper, we explore a set of novel SDC detectors, leveraging epsilon-insensitive support vector machine regression, to detect SDCs that occur in HPC applications. The key contributions are threefold. (1) Our exploration takes temporal, spatial, and spatiotemporal features into account and analyzes different detectors based on different features. (2) We provide an in-depth study on the detection ability and performance with different parameters, and we optimize the detection range carefully. (3) Experiments with eight real-world HPC applications show that support-vector-machine-based detectors can achieve detection sensitivity (i.e., recall) of up to 99% yet suffer a less than 1% false positive rate in most cases. Our detectors incur low performance overhead, 5% on average, for all benchmarks studied in this work.
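To make the detection idea concrete, here is a hedged sketch of one temporal-feature detector in the spirit described: an epsilon-insensitive SVR predicts the next value of a monitored quantity from a sliding window, and a large residual flags a possible SDC. The window length, epsilon, and threshold are assumptions for illustration.

```python
# Hedged sketch: temporal SDC detection via epsilon-insensitive SVR.
import numpy as np
from sklearn.svm import SVR

def fit_temporal_detector(series, window=4, eps=0.01):
    # Build (window -> next value) training pairs from a known-clean run.
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.asarray(series[window:])
    return SVR(kernel="rbf", epsilon=eps).fit(X, y)

def flag_sdc(model, recent_window, observed, threshold=0.05):
    predicted = model.predict(np.asarray(recent_window).reshape(1, -1))[0]
    return abs(observed - predicted) > threshold   # True -> possible corruption
```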
Reverse engineering of wörner type drilling machine structure.
NASA Astrophysics Data System (ADS)
Wibowo, A.; Belly, I.; llhamsyah, R.; Indrawanto; Yuwana, Y.
2018-03-01
A product design sometimes needs to be modified to suit the conditions of production facilities and existing resource capabilities, without reducing the functional aspects of the product itself. This paper describes the reverse engineering process of the main structure of the wörner type drilling machine, to obtain a machine structure design that can be produced by resources with limited capability using simple processes. Structural, functional and work-mechanism analyses were performed to understand the function and role of each basic component. The drilling machine was dismantled and each of the basic components measured to obtain sets of geometry and size data for each component. Geometric models of the structural components and of the machine assembly were built to facilitate the simulation process and machine performance analysis with reference to the ISO standard for drilling machines. A tolerance stackup analysis was also performed to determine the type and value of geometrical and dimensional tolerances, which could affect the ease with which the components can be manufactured and assembled.
Detecting Abnormal Machine Characteristics in Cloud Infrastructures
NASA Technical Reports Server (NTRS)
Bhaduri, Kanishka; Das, Kamalika; Matthews, Bryan L.
2011-01-01
In the cloud computing environment, resources are accessed as services rather than as a product. Monitoring this system for performance is crucial because of the typical pay-per-use packages bought by users for their jobs. With the huge number of machines currently in the cloud system, it is often extremely difficult for system administrators to keep track of all machines using distributed monitoring programs such as Ganglia, which lack system health assessment and summarization capabilities. To overcome this problem, we propose a technique for automated anomaly detection using machine performance data in the cloud. Our algorithm is entirely distributed and runs locally on each computing machine in the cloud in order to rank the machines by their anomalous behavior for given jobs. There is no need to centralize any of the performance data for the analysis, and at the end of the analysis our algorithm generates error reports, thereby allowing the system administrators to take corrective actions. Experiments performed on real data sets collected for different jobs validate the fact that our algorithm has a low overhead for tracking anomalous machines in a cloud infrastructure.
NASA Technical Reports Server (NTRS)
1989-01-01
"Peen Plating," a NASA developed process for applying molybdenum disulfide, is the key element of Techniblast Co.'s SURFGUARD process for applying high strength solid lubricants. The process requires two machines -- one for cleaning and one for coating. The cleaning step allows the coating to be bonded directly to the substrate to provide a better "anchor." The coating machine applies a half a micron thick coating. Then, a blast gun, using various pressures to vary peening intensities for different applications, fires high velocity "media" -- peening hammers -- ranging from plastic pellets to steel shot. Techniblast was assisted by Rural Enterprises, Inc. Coating service can be performed at either Techniblast's or a customer's facility.
Tackling the x-ray cargo inspection challenge using machine learning
NASA Astrophysics Data System (ADS)
Jaccard, Nicolas; Rogers, Thomas W.; Morton, Edward J.; Griffin, Lewis D.
2016-05-01
The current infrastructure for non-intrusive inspection of cargo containers cannot accommodate exploding commerce volumes and increasingly stringent regulations. There is a pressing need to develop methods to automate parts of the inspection workflow, enabling expert operators to focus on a manageable number of high-risk images. To tackle this challenge, we developed a modular framework for automated X-ray cargo image inspection. Employing state-of-the-art machine learning approaches, including deep learning, we demonstrate high performance for empty container verification and specific threat detection. This work constitutes a significant step towards the partial automation of X-ray cargo image inspection.
NASA Astrophysics Data System (ADS)
Haag, Sebastian; Bernhardt, Henning; Rübenach, Olaf; Haverkamp, Tobias; Müller, Tobias; Zontar, Daniel; Brecher, Christian
2015-02-01
In many applications of high-power diode lasers, the production of beam-shaping and homogenizing optical systems experiences rising volumes and dynamic market demands. The automation of assembly processes on flexible and reconfigurable machines can contribute to a more responsive and scalable production. The paper presents a flexible mounting device designed for the challenging assembly of side-tab based optical systems. It provides design elements for precisely referencing and fixating two optical elements in a well-defined geometric relation. Side tabs are presented to the machine, allowing the application of glue, and a rotating mechanism allows attachment to the optical elements. The device can be adjusted to fit different form factors and can be used in high-volume assembly machines. The paper shows the utilization of the device for a collimation module consisting of a fast-axis and a slow-axis collimation lens. Results regarding the repeatability and process capability of bonding side-tab assemblies, as well as estimates from 3D simulation for overall performance indicators such as cycle time and throughput, are discussed.
NASA Astrophysics Data System (ADS)
Matras, A.; Kowalczyk, R.
2014-11-01
The analysis results of machining accuracy after free form surface milling simulations (based on machining EN AW-7075 alloys) for different machining strategies (Level Z, Radial, Square, Circular) are presented in this work. The milling simulations were performed using CAD/CAM Esprit software. The accuracy of the obtained allowance is defined as the difference between the theoretical surface of the workpiece (the surface designed in CAD software) and the machined surface after a milling simulation. The difference between the two surfaces describes a roughness value, which results from the mapping of the tool shape onto the machined surface. The accuracy of the remaining allowance directly determines the surface quality after finish machining. The described methodology of using CAD/CAM software can reduce the design time of a machining process for free form surface milling on a 5-axis CNC milling machine, by omitting the need to machine the item on a milling machine in order to measure the machining accuracy for the selected strategies and cutting data.
Multimedia systems in ultrasound image boundary detection and measurements
NASA Astrophysics Data System (ADS)
Pathak, Sayan D.; Chalana, Vikram; Kim, Yongmin
1997-05-01
Ultrasound as a medical imaging modality offers the clinician a real-time view of the anatomy of internal organs/tissues, their movement, and flow, noninvasively. One of the applications of ultrasound is to monitor fetal growth by measuring biparietal diameter (BPD) and head circumference (HC). We have been working on automatic detection of fetal head boundaries in ultrasound images. These detected boundaries are used to measure BPD and HC. The boundary detection algorithm is based on active contour models and takes 32 seconds on an external high-end workstation, a SUN SparcStation 20/71. Our goal has been to make this tool available within an ultrasound machine and at the same time significantly improve its performance by utilizing multimedia technology. With the advent of high-performance programmable digital signal processors (DSPs), a software solution within an ultrasound machine, instead of the traditional hardwired approach or an external computer, is now possible. We have integrated our boundary detection algorithm into a programmable ultrasound image processor (PUIP) that fits into a commercial ultrasound machine. The PUIP provides both the high computing power and the flexibility needed to support computationally intensive image processing algorithms within an ultrasound machine. According to our data analysis, BPD/HC measurements made on the PUIP lie within the interobserver variability. Hence, the errors in the automated BPD/HC measurements using the algorithm are of the same order as the average interobserver differences. On the PUIP, it takes 360 ms to measure BPD/HC on one head image. When processing multiple head images in sequence, it takes 185 ms per image, enabling 5.4 BPD/HC measurements per second. The reduction in overall execution time from 32 seconds to a fraction of a second, and the availability of this multimedia system within an ultrasound machine, will help this image processing algorithm and other compute-intensive imaging applications become a practical tool for sonographers in the future.
WATERLOOP V2/64: A highly parallel machine for numerical computation
NASA Astrophysics Data System (ADS)
Ostlund, Neil S.
1985-07-01
Current technological trends suggest that the high performance scientific machines of the future are very likely to consist of a large number (greater than 1024) of processors connected and communicating with each other in some as yet undetermined manner. Such an assembly of processors should behave as a single machine in obtaining numerical solutions to scientific problems. However, the appropriate way of organizing both the hardware and software of such an assembly of processors is an unsolved and active area of research. It is particularly important to minimize the organizational overhead of interprocessor communication, global synchronization, and contention for shared resources if the performance of a large number (n) of processors is to approach the desirable n times the performance of a single processor. In many situations, adding a processor actually decreases the performance of the overall system, since the extra organizational overhead is larger than the extra processing power added. The systolic loop architecture is a new multiple processor architecture which attempts a solution to the problem of how to organize a large number of asynchronous processors into an effective computational system while minimizing the organizational overhead. This paper gives a brief overview of the basic systolic loop architecture, systolic loop algorithms for numerical computation, and a 64-processor implementation of the architecture, WATERLOOP V2/64, that is being used as a testbed for exploring the hardware, software, and algorithmic aspects of the architecture.
Efficient Execution of Microscopy Image Analysis on CPU, GPU, and MIC Equipped Cluster Systems.
Andrade, G; Ferreira, R; Teodoro, George; Rocha, Leonardo; Saltz, Joel H; Kurc, Tahsin
2014-10-01
High performance computing is experiencing a major paradigm shift with the introduction of accelerators, such as graphics processing units (GPUs) and the Intel Xeon Phi (MIC). These processors have made a tremendous amount of computing power available at low cost, and are transforming machines into hybrid systems equipped with CPUs and accelerators. Although these systems can deliver a very high peak performance, making full use of their resources in real-world applications is a complex problem. Most current applications deployed to these machines are still executed on a single processor, leaving the other devices underutilized. In this paper we explore a scenario in which applications are composed of hierarchical data flow tasks which are allocated to nodes of a distributed memory machine in coarse grain, but each of them may be composed of several finer-grain tasks which can be allocated to different devices within the node. We propose and implement novel performance-aware scheduling techniques that can be used to allocate tasks to devices. We evaluate our techniques using a pathology image analysis application used to investigate brain cancer morphology, and our experimental evaluation shows that the proposed scheduling strategies significantly outperform other efficient scheduling techniques, such as Heterogeneous Earliest Finish Time (HEFT), in cooperative executions using CPUs, GPUs, and MICs. We also experimentally show that our strategies are less sensitive to inaccuracy in the scheduling input data and that the performance gains are maintained as the application scales.
Efficient Execution of Microscopy Image Analysis on CPU, GPU, and MIC Equipped Cluster Systems
Andrade, G.; Ferreira, R.; Teodoro, George; Rocha, Leonardo; Saltz, Joel H.; Kurc, Tahsin
2015-01-01
High performance computing is experiencing a major paradigm shift with the introduction of accelerators, such as graphics processing units (GPUs) and the Intel Xeon Phi (MIC). These processors have made a tremendous amount of computing power available at low cost, and are transforming machines into hybrid systems equipped with CPUs and accelerators. Although these systems can deliver a very high peak performance, making full use of their resources in real-world applications is a complex problem. Most current applications deployed to these machines are still executed on a single processor, leaving the other devices underutilized. In this paper we explore a scenario in which applications are composed of hierarchical data flow tasks which are allocated to nodes of a distributed memory machine in coarse grain, but each of them may be composed of several finer-grain tasks which can be allocated to different devices within the node. We propose and implement novel performance-aware scheduling techniques that can be used to allocate tasks to devices. We evaluate our techniques using a pathology image analysis application used to investigate brain cancer morphology, and our experimental evaluation shows that the proposed scheduling strategies significantly outperform other efficient scheduling techniques, such as Heterogeneous Earliest Finish Time (HEFT), in cooperative executions using CPUs, GPUs, and MICs. We also experimentally show that our strategies are less sensitive to inaccuracy in the scheduling input data and that the performance gains are maintained as the application scales. PMID:26640423
Man-machine interactive imaging and data processing using high-speed digital mass storage
NASA Technical Reports Server (NTRS)
Alsberg, H.; Nathan, R.
1975-01-01
The role of vision in teleoperation has been recognized as an important element in the man-machine control loop. In most applications of remote manipulation, direct vision cannot be used. To overcome this handicap, the human operator's control capabilities are augmented by a television system. This medium provides a practical and useful link between the workspace and the control station from which the operator performs his tasks. Human performance deteriorates when the images are degraded as a result of instrumental and transmission limitations. Image enhancement is used to bring out selected qualities in a picture to increase the perception of the observer. A general purpose digital computer with an extensive special purpose software system is used to perform an almost unlimited repertoire of processing operations.
Laser-machined microcavities for simultaneous measurement of high-temperature and high-pressure.
Ran, Zengling; Liu, Shan; Liu, Qin; Huang, Ya; Bao, Haihong; Wang, Yanjun; Luo, Shucheng; Yang, Huiqin; Rao, Yunjiang
2014-08-07
Laser-machined microcavities for simultaneous measurement of high-temperature and high-pressure are demonstrated. These two cascaded microcavities are an air cavity and a composite cavity including a section of fiber and an air cavity. They are both placed into a pressure chamber inside a furnace to perform simultaneous pressure and high-temperature tests. The thermal and pressure coefficients of the short air cavity are ~0.0779 nm/°C and ~1.14 nm/MPa, respectively. The thermal and pressure coefficients of the composite cavity are ~32.3 nm/°C and ~24.4 nm/MPa, respectively. The sensor could be used to separate temperature and pressure due to their different thermal and pressure coefficients. The excellent feature of such a sensor head is that it can withstand high temperatures of up to 400 °C and achieve precise measurement of high-pressure under high temperature conditions.
FLAME: A platform for high performance computing of complex systems, applied for three case studies
Kiran, Mariam; Bicak, Mesude; Maleki-Dizaji, Saeedeh; ...
2011-01-01
FLAME allows complex models to be automatically parallelised on High Performance Computing (HPC) grids, enabling large numbers of agents to be simulated over short periods of time. Modellers are ordinarily hindered by the complexity of porting models to parallel platforms and by the time taken to run large simulations on a single machine; FLAME overcomes both obstacles. Three case studies from different disciplines were modelled using FLAME, and are presented along with their performance results on a grid.
GREENE, CASEY S.; TAN, JIE; UNG, MATTHEW; MOORE, JASON H.; CHENG, CHAO
2017-01-01
Recent technological advances allow for high throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the “big data” era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including both “machine learning” algorithms as well as “unsupervised” and “supervised” examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia. PMID:27908398
NASA Astrophysics Data System (ADS)
Zhang, Ruiyun; Xu, Shisen; Cheng, Jian; Wang, Hongjian; Ren, Yongqiang
2017-07-01
Low-cost and high-performance matrix materials for use in mass production of molten carbonate fuel cells (MCFCs) were prepared with an automatic casting machine, using α-LiAlO2 powder synthesized by a gel-solid method and distilled water as the solvent. A single cell was assembled for generation testing, and the good performance of the matrix was verified. The paper analyzes the factors affecting aqueous tape casting matrix preparation, such as solvent content, dispersant content, milling time, blade height and casting machine running speed, providing a solid basis for the mass production of the large-area, environment-friendly matrix used in molten carbonate fuel cells.
Greene, Casey S; Tan, Jie; Ung, Matthew; Moore, Jason H; Cheng, Chao
2014-12-01
Recent technological advances allow for high throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the "big data" era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including both "machine learning" algorithms as well as "unsupervised" and "supervised" examples of each. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming based solutions, we review webservers that allow users with limited or no programming background to perform these analyses on large data compendia.
Lysine acetylation sites prediction using an ensemble of support vector machine classifiers.
Xu, Yan; Wang, Xiao-Bo; Ding, Jun; Wu, Ling-Yun; Deng, Nai-Yang
2010-05-07
Lysine acetylation is an essentially reversible and highly regulated post-translational modification which regulates diverse protein properties. Experimental identification of acetylation sites is laborious and expensive. Hence, there is significant interest in the development of computational methods for reliable prediction of acetylation sites from amino acid sequences. In this paper we use an ensemble of support vector machine classifiers to perform this work. The experimentally determined acetylation lysine sites are extracted from the Swiss-Prot database and the scientific literature. Experimental results show that an ensemble of support vector machine classifiers outperforms a single support vector machine classifier and other computational methods such as PAIL and LysAcet on the problem of predicting acetylation lysine sites. The resulting method has been implemented in EnsemblePail, a web server for lysine acetylation site prediction available at http://www.aporc.org/EnsemblePail/.
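As an illustration of the ensemble idea (not the EnsemblePail implementation itself), the sketch below trains several SVMs on bootstrap resamples and combines them by majority vote; the feature matrix of encoded peptide windows is a placeholder.

```python
# Hedged sketch: majority-vote ensemble of SVM classifiers via bagging.
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

# Seven SVMs, each trained on a bootstrap resample of the training set;
# predictions are combined by majority vote over the members.
ensemble = BaggingClassifier(SVC(kernel="rbf", C=1.0),
                             n_estimators=7, random_state=0)
# ensemble.fit(X_train, y_train)          # X: encoded peptide windows
# y_pred = ensemble.predict(X_test)       # 1 = predicted acetylation site
```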
A Review on Parametric Analysis of Magnetic Abrasive Machining Process
NASA Astrophysics Data System (ADS)
Khattri, Krishna; Choudhary, Gulshan; Bhuyan, B. K.; Selokar, Ashish
2018-03-01
The magnetic abrasive machining (MAM) process is a highly developed unconventional machining process. It is frequently used in manufacturing industries for nanometer-range surface finishing of workpieces, with the help of magnetic abrasive particles (MAPs) and a magnetic force applied in the machining zone. It is more precise and faster than conventional methods and able to produce defect-free finished components. This paper provides a comprehensive review of the recent advancements in the MAM process made by different researchers to date. The effects of different input parameters, such as rotational speed of the electromagnet, voltage, magnetic flux density, abrasive particle size and working gap, on material removal rate (MRR) and surface roughness (Ra) are discussed. On the basis of this review, it is observed that the rotational speed of the electromagnet, the voltage and the mesh size of the abrasive particles have a significant impact on the MAM process.
Residual Error Based Anomaly Detection Using Auto-Encoder in SMD Machine Sound.
Oh, Dong Yul; Yun, Il Dong
2018-04-24
Detecting an anomaly or an abnormal situation from given noise is highly useful in an environment where constantly verifying and monitoring a machine is required. As deep learning algorithms are further developed, current studies have focused on this problem. However, there are too many variables to define anomalies, and the human annotation for a large collection of abnormal data labeled at the class-level is very labor-intensive. In this paper, we propose to detect abnormal operation sounds or outliers in a very complex machine along with reducing the data-driven annotation cost. The architecture of the proposed model is based on an auto-encoder, and it uses the residual error, which stands for its reconstruction quality, to identify the anomaly. We assess our model using Surface-Mounted Device (SMD) machine sound, which is very complex, as experimental data, and state-of-the-art performance is successfully achieved for anomaly detection.
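A minimal sketch of the residual-error idea follows, assuming a dense Keras auto-encoder over fixed-length sound features; the architecture, feature shape, and thresholding strategy are illustrative assumptions.

```python
# Hedged sketch: residual-error anomaly scoring with a dense auto-encoder.
import numpy as np
from tensorflow.keras import layers, models

def build_autoencoder(n_features):
    ae = models.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(32, activation="relu"),
        layers.Dense(8, activation="relu"),       # bottleneck
        layers.Dense(32, activation="relu"),
        layers.Dense(n_features, activation="linear"),
    ])
    ae.compile(optimizer="adam", loss="mse")      # train on normal sounds only
    return ae

def anomaly_scores(ae, X):
    recon = ae.predict(X)
    return np.mean((X - recon) ** 2, axis=1)      # residual error per sample
# Flag samples whose score exceeds a high percentile of normal-data scores.
```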
Design of control system for optical fiber drawing machine driven by double motor
NASA Astrophysics Data System (ADS)
Yu, Yue Chen; Bo, Yu Ming; Wang, Jun
2018-01-01
A microchannel plate (MCP) is a kind of large-area array electron multiplier with high two-dimensional spatial resolution, used in high-performance night vision intensifiers. High precision control of the optical fiber is a key technology in the MCP manufacturing process; in this paper it is achieved by controlling an optical fiber drawing machine driven by two motors. First, utilizing an STM32 chip, the servo motor drive and control circuit was designed to realize dual-motor synchronization. Second, a neural network PID control algorithm was designed to control the fiber diameter with high precision. Finally, hexagonal fiber was manufactured with this system, and the results show that the multifilament diameter accuracy of the fiber is +/- 1.5 μm.
Collective Effects in a Diffraction Limited Storage Ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagaoka, Ryutaro; Bane, Karl L.F.
Our paper gives an overview of collective effects that are likely to appear and possibly limit the performance in a diffraction-limited storage ring (DLSR) that stores a high-intensity ultra-low-emittance beam. Beam instabilities and other intensity-dependent effects that may significantly impact the machine performance are covered. The latter include beam-induced machine heating, Touschek scattering, intra-beam scattering, as well as incoherent tune shifts. The general trend that the efforts to achieve ultra-low emittance result in increasing the machine coupling impedance and the beam sensitivity to instability is reviewed. The nature of coupling impedance in a DLSR is described, followed by a series of potentially dangerous beam instabilities driven by the former, such as resistive-wall, TMCI (transverse mode coupling instability), head-tail and microwave instabilities. Additionally, beam-ion and CSR (coherent synchrotron radiation) instabilities are also treated. Means to fight against collective effects such as lengthening of the bunch with passive harmonic cavities and bunch-by-bunch transverse feedback are introduced. Numerical codes developed and used to evaluate the machine coupling impedance, as well as to simulate beam instability using the former as inputs, are described.
Collective Effects in a Diffraction Limited Storage Ring
Nagaoka, Ryutaro; Bane, Karl L.F.
2015-10-20
Our paper gives an overview of collective effects that are likely to appear and possibly limit the performance in a diffraction-limited storage ring (DLSR) that stores a high-intensity ultra-low-emittance beam. Beam instabilities and other intensity-dependent effects that may significantly impact the machine performance are covered. The latter include beam-induced machine heating, Touschek scattering, intra-beam scattering, as well as incoherent tune shifts. The general trend that the efforts to achieve ultra-low emittance result in increasing the machine coupling impedance and the beam sensitivity to instability is reviewed. The nature of coupling impedance in a DLSR is described, followed by a series of potentially dangerous beam instabilities driven by the former, such as resistive-wall, TMCI (transverse mode coupling instability), head-tail and microwave instabilities. Additionally, beam-ion and CSR (coherent synchrotron radiation) instabilities are also treated. Means to fight against collective effects such as lengthening of the bunch with passive harmonic cavities and bunch-by-bunch transverse feedback are introduced. Numerical codes developed and used to evaluate the machine coupling impedance, as well as to simulate beam instability using the former as inputs, are described.
NASA Astrophysics Data System (ADS)
Tresser, Shachar; Dolev, Amit; Bucher, Izhak
2018-02-01
High-speed machinery is often designed to pass several "critical speeds", where vibration levels can be very high. To reduce vibrations, rotors usually undergo a mass balancing process, where the machine is rotated at its full speed range, during which the dynamic response near critical speeds can be measured. High sensitivity, which is required for a successful balancing process, is achieved near the critical speeds, where a single deflection mode shape becomes dominant, and is excited by the projection of the imbalance on it. The requirement to rotate the machine at high speeds is an obstacle in many cases, where it is impossible to perform measurements at high speeds, due to harsh conditions such as high temperatures and inaccessibility (e.g., jet engines). This paper proposes a novel balancing method of flexible rotors, which does not require the machine to be rotated at high speeds. With this method, the rotor is spun at low speeds, while subjecting it to a set of externally controlled forces. The external forces comprise a set of tuned, response dependent, parametric excitations, and nonlinear stiffness terms. The parametric excitation can isolate any desired mode, while keeping the response directly linked to the imbalance. A software controlled nonlinear stiffness term limits the response, hence preventing the rotor to become unstable. These forces warrant sufficient sensitivity required to detect the projection of the imbalance on any desired mode without rotating the machine at high speeds. Analytical, numerical and experimental results are shown to validate and demonstrate the method.
Venkatesh, Santosh S; Levenback, Benjamin J; Sultan, Laith R; Bouzghar, Ghizlane; Sehgal, Chandra M
2015-12-01
The goal of this study was to devise a machine learning methodology as a viable low-cost alternative to a second reader to help augment physicians' interpretations of breast ultrasound images in differentiating benign and malignant masses. Two independent feature sets consisting of visual features based on a radiologist's interpretation of images and computer-extracted features when used as first and second readers and combined by adaptive boosting (AdaBoost) and a pruning classifier resulted in a very high level of diagnostic performance (area under the receiver operating characteristic curve = 0.98) at a cost of pruning a fraction (20%) of the cases for further evaluation by independent methods. AdaBoost also improved the diagnostic performance of the individual human observers and increased the agreement between their analyses. Pairing AdaBoost with selective pruning is a principled methodology for achieving high diagnostic performance without the added cost of an additional reader for differentiating solid breast masses by ultrasound.
DeepX: Deep Learning Accelerator for Restricted Boltzmann Machine Artificial Neural Networks.
Kim, Lok-Won
2018-05-01
Although there have been many decades of research and commercial presence on high performance general purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully used in a wide variety of applications, but its heavy computational demand has considerably limited its practical use. This paper proposes a fully pipelined acceleration architecture to alleviate the high computational demand of one class of artificial neural network (ANN), the restricted Boltzmann machine (RBM). The implemented RBM ANN accelerator (with the integrated network size, 128 input cases per batch, and a 303-MHz clock frequency), integrated in a state-of-the-art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T), provides a computational performance of 301 billion connection-updates-per-second, about 193 times higher than a software solution running on general purpose processors. Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with previous work when both are implemented in an FPGA device (XC2VP70).
Interpreting linear support vector machine models with heat map molecule coloring
2011-01-01
Background Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides strong performance, the interpretability of a machine learning model is a desired property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines have been shown to achieve convincing performance on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity. Results We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach helps to determine the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor. Conclusions In combination with heat map coloring, linear support vector machine models can help to guide the modification of a compound in later stages of drug discovery. In particular, substructures identified as important by our method might be a starting point for optimization of a lead compound. The heat map coloring should be considered complementary to structure based modeling approaches. As such, it helps to get a better understanding of the binding mode of an inhibitor. PMID:21439031
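The weight-based coloring reduces to a simple attribution computation; below is a hedged sketch assuming a fingerprint representation where each bit can be traced back to the atoms that generated it (the atom-to-bit mapping, e.g. from the fingerprinting software, is an assumed input).

```python
# Hedged sketch of weight-based atom coloring for a linear SVM.
import numpy as np

def atom_importance(x, w, bits_per_atom):
    """x: fingerprint vector; w: linear SVM weight vector;
    bits_per_atom: {atom_index: [fingerprint bits touching that atom]}."""
    contrib = w * x                               # per-bit contribution to score
    return {a: float(np.sum(contrib[bits]))       # per-atom importance
            for a, bits in bits_per_atom.items()}

# Positive sums color an atom as activity-increasing, negative as decreasing;
# mapping these values onto the 2D depiction gives the heat map coloring.
```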
Machine learning approaches to the social determinants of health in the health and retirement study.
Seligman, Benjamin; Tuljapurkar, Shripad; Rehkopf, David
2018-04-01
Social and economic factors are important predictors of health and of recognized importance for health systems. However, machine learning, used elsewhere in the biomedical literature, has not been extensively applied to study relationships between society and health. We investigate how machine learning may add to our understanding of social determinants of health using data from the Health and Retirement Study. A linear regression on age and gender, and a parsimonious theory-based regression additionally incorporating income, wealth, and education, were used to predict systolic blood pressure, body mass index, waist circumference, and telomere length. Prediction, fit, and interpretability were compared across four machine learning methods: linear regression, penalized regressions, random forests, and neural networks. All models had poor out-of-sample prediction. Most machine learning models performed similarly to the simpler models; however, neural networks greatly outperformed the three other methods. Neural networks also had good fit to the data (R2 between 0.4 and 0.6, versus <0.3 for all others). Across machine learning models, nine variables were frequently selected or highly weighted as predictors: dental visits, current smoking, self-rated health, serial-seven subtractions, probability of receiving an inheritance, probability of leaving an inheritance of at least $10,000, number of children ever born, African-American race, and gender. Some of the machine learning methods do not improve prediction or fit beyond simpler models; however, neural networks performed well. The predictors identified across models suggest underlying social factors that are important predictors of biological indicators of chronic disease, and the non-linear and interactive relationships between variables fundamental to the neural network approach may be important to consider.
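A hedged sketch of the comparison protocol follows: the same outcome is fit with each model family and judged by out-of-sample performance under cross-validation. The scikit-learn estimators and hyperparameters are assumptions for illustration, not the study's exact specification.

```python
# Hedged sketch: compare model families on the same outcome by CV R^2.
from sklearn.linear_model import LinearRegression, ElasticNet
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

models = {
    "linear": LinearRegression(),
    "penalized": ElasticNet(alpha=0.1),
    "forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "neural_net": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000),
}
# for name, m in models.items():          # X: covariates, y: e.g. systolic BP
#     print(name, cross_val_score(m, X, y, cv=5, scoring="r2").mean())
```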
The Research and Implementation of MUSER CLEAN Algorithm Based on OpenCL
NASA Astrophysics Data System (ADS)
Feng, Y.; Chen, K.; Deng, H.; Wang, F.; Mei, Y.; Wei, S. L.; Dai, W.; Yang, Q. P.; Liu, Y. B.; Wu, J. P.
2017-03-01
There is a pressing need for high-performance data processing on a single machine in the development of astronomical software. However, due to differing machine configurations, traditional programming techniques such as multi-threading and CUDA (Compute Unified Device Architecture)+GPU (Graphic Processing Unit) have obvious limitations in portability and seamlessness across operating systems. The OpenCL (Open Computing Language) used in the development of the MUSER (MingantU SpEctral Radioheliograph) data processing system is introduced, and the Högbom CLEAN algorithm is re-implemented as a parallel CLEAN algorithm in Python with the PyOpenCL extension package. The experimental results show that the CLEAN algorithm based on OpenCL has approximately equal operating efficiency compared with the former CLEAN algorithm based on CUDA. More importantly, the data processing of this system can also achieve high performance in a CPU-only (Central Processing Unit) environment, which solves the problem of the environmental dependence of CUDA+GPU. Overall, the research improves the adaptability of the system, with emphasis on the performance of MUSER image clean computing. In the meanwhile, the realization of OpenCL in MUSER proves its availability in scientific data processing. In view of the high-performance computing features of OpenCL in heterogeneous environments, it will probably become a preferred technology in future high-performance astronomical software development.
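For orientation, the serial Högbom CLEAN loop that such implementations parallelize looks roughly like the sketch below (plain NumPy; the gain, threshold, and same-shape centered PSF are illustrative assumptions):

```python
# Serial Högbom CLEAN sketch: iteratively find the residual peak, subtract
# a scaled, shifted PSF, and accumulate the clean component.
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, threshold=1e-3, max_iter=1000):
    residual = dirty.copy()
    clean_components = np.zeros_like(dirty)
    for _ in range(max_iter):
        peak = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        if np.abs(residual[peak]) < threshold:
            break
        flux = gain * residual[peak]
        clean_components[peak] += flux
        # Shift the centered PSF onto the peak (wrap-around at edges is
        # ignored for brevity) and subtract the scaled copy.
        shifted = np.roll(np.roll(psf, peak[0] - psf.shape[0] // 2, axis=0),
                          peak[1] - psf.shape[1] // 2, axis=1)
        residual -= flux * shifted
    return clean_components, residual
```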
Lighting Studies for Fuelling Machine Deployed Visual Inspection Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoots, Carl; Griffith, George
2015-04-01
Under subcontract to James Fisher Nuclear, Ltd., INL has been reviewing advanced vision systems for inspection of graphite in high radiation, high temperature, and high pressure environments. INL has performed calculations and proof-of-principle measurements of optics and lighting techniques to be considered for visual inspection of graphite fuel channels in AGR reactors in the UK.
A Machine Learning Framework to Forecast Wave Conditions
NASA Astrophysics Data System (ADS)
Zhang, Y.; James, S. C.; O'Donncha, F.
2017-12-01
Recently, significant effort has been undertaken to quantify and extract wave energy because it is renewable, environmentally friendly, abundant, and often close to population centers. However, a major challenge is the ability to accurately and quickly predict energy production, especially across a 48-hour cycle. Accurate forecasting of wave conditions is a challenging undertaking that typically involves solving the spectral action-balance equation on a discretized grid with high spatial resolution. The nature of the computations typically demands high-performance computing infrastructure. Using a case-study site at Monterey Bay, California, a machine learning framework was trained to replicate numerically simulated wave conditions at a fraction of the typical computational cost. Specifically, the physics-based Simulating WAves Nearshore (SWAN) model, driven by measured wave conditions, nowcast ocean currents, and wind data, was used to generate training data for machine learning algorithms. The model was run between April 1st, 2013 and May 31st, 2017, generating forecasts at three-hour intervals and yielding 11,078 distinct model outputs. SWAN-generated fields of 3,104 wave heights and a characteristic period could be replicated through simple matrix multiplications using the mapping matrices from the machine learning algorithms. In fact, wave-height RMSEs from the machine learning algorithms (9 cm) were less than those for the SWAN model-verification exercise where those simulations were compared to buoy wave data within the model domain (>40 cm). The validated machine learning approach, which acts as an accurate surrogate for the SWAN model, can now be used to perform real-time forecasts of wave conditions for the next 48 hours using available forecasted boundary wave conditions, ocean currents, and winds. This solution has obvious applications to wave-energy generation, as accurate wave conditions can be forecasted with over a three-order-of-magnitude reduction in computational expense. The low computational cost (and by association low computer-power requirement) means that the machine learning algorithms could be installed on a wave-energy converter as a form of "edge computing", where a device could forecast its own 48-hour energy production.
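The "mapping matrix" idea can be illustrated with a ridge-regularized least-squares fit: once the matrix is learned from paired model inputs and SWAN outputs, a forecast is a single matrix multiplication. The names, shapes, and regularization value below are assumptions for illustration.

```python
# Hedged sketch: learn a linear map M so that forecasts reduce to inputs @ M.
import numpy as np

def fit_mapping(inputs, swan_outputs, ridge=1e-3):
    """inputs: (n_runs, n_forcings); swan_outputs: (n_runs, n_grid_points)."""
    A = inputs.T @ inputs + ridge * np.eye(inputs.shape[1])
    return np.linalg.solve(A, inputs.T @ swan_outputs)   # mapping matrix M

def forecast(inputs_new, M):
    return inputs_new @ M   # wave heights at all grid points in one product
```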
Method and apparatus for characterizing and enhancing the functional performance of machine tools
Barkman, William E; Babelay, Jr., Edwin F; Smith, Kevin Scott; Assaid, Thomas S; McFarland, Justin T; Tursky, David A; Woody, Bethany; Adams, David
2013-04-30
Disclosed are various systems and methods for assessing and improving the capability of a machine tool. The disclosure applies to machine tools having at least one slide configured to move along a motion axis. Various patterns of dynamic excitation commands are employed to drive the one or more slides, typically involving repetitive short distance displacements. A quantification of a measurable merit of machine tool response to the one or more patterns of dynamic excitation commands is typically derived for the machine tool. Examples of measurable merits of machine tool performance include workpiece surface finish, and the ability to generate chips of the desired length.
NASA Astrophysics Data System (ADS)
Zhou, Ming; Wu, Jianyang; Xu, Xiaoyi; Mu, Xin; Dou, Yunping
2018-02-01
In order to obtain improved electrical discharge machining (EDM) performance, we have dedicated more than a decade to correcting one essential EDM defect, the weak stability of the machining, by developing adaptive control systems. The instabilities of machining are mainly caused by complicated disturbances in discharging. To counteract the effects of the disturbances on machining, we theoretically developed three control laws, from a minimum variance (MV) control law, to a minimum variance and pole placement coupled (MVPPC) control law, and then to a two-step-ahead prediction (TP) control law. Based on real-time estimation of EDM process model parameters and the measured ratio of arcing pulses, which is also called the gap state, the electrode discharging cycle was directly and adaptively tuned so that stable machining could be achieved. We thus not only theoretically provide three proven control laws for a developed EDM adaptive control system, but also practically showed the TP control law to be the best in dealing with machining instability and machining efficiency, though the MVPPC control law provided much better EDM performance than the MV control law. It was also shown that the TP control law provided burn-free machining.
2013-01-01
Background Protein-protein interactions (PPIs) play crucial roles in the execution of various cellular processes and form the basis of biological mechanisms. Although a large amount of PPI data for different species has been generated by high-throughput experimental techniques, the PPI pairs obtained with experimental methods cover only a fraction of the complete PPI networks; further, the experimental methods for identifying PPIs are both time-consuming and expensive. Hence, it is urgent and challenging to develop automated computational methods to efficiently and accurately predict PPIs. Results We present here a novel hierarchical PCA-EELM (principal component analysis-ensemble extreme learning machine) model to predict protein-protein interactions using only the information of protein sequences. In the proposed method, 11188 protein pairs retrieved from the DIP database were encoded into feature vectors by using four kinds of protein sequence information. Focusing on dimension reduction, an effective feature extraction method, PCA, was then employed to construct the most discriminative new feature set. Finally, multiple extreme learning machines were trained and then aggregated into a consensus classifier by majority voting. The ensembling of extreme learning machines removes the dependence of results on initial random weights and improves the prediction performance. Conclusions When performed on the PPI data of Saccharomyces cerevisiae, the proposed method achieved 87.00% prediction accuracy with 86.15% sensitivity at a precision of 87.59%. Extensive experiments were performed to compare our method with the state-of-the-art technique, the support vector machine (SVM). Experimental results demonstrate that the proposed PCA-EELM outperforms the SVM method under 5-fold cross-validation. Besides, PCA-EELM runs faster than the PCA-SVM based method. Consequently, the proposed approach can be considered as a new promising and powerful tool for predicting PPIs with excellent performance and less time. PMID:23815620
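A compact sketch of that pipeline shape (PCA features, several ELMs with independent random hidden layers, majority vote) is given below; the component counts, hidden sizes, and the ±1 target encoding are illustrative assumptions.

```python
# Hedged sketch of a PCA + ensemble-ELM pipeline with majority voting.
import numpy as np
from sklearn.decomposition import PCA

def elm_fit(X, y01, n_hidden, seed):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random hidden layer
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ (2 * y01 - 1)      # targets in {-1, +1}
    return W, b, beta

def eelm_predict(X, members):
    votes = [np.sign(np.tanh(X @ W + b) @ beta) for W, b, beta in members]
    return (np.mean(votes, axis=0) > 0).astype(int)   # majority vote

# Xp = PCA(n_components=50).fit_transform(X)      # discriminative features
# members = [elm_fit(Xp, y, 200, s) for s in range(9)]
# y_pred = eelm_predict(Xp, members)
```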
NASA Astrophysics Data System (ADS)
Zhou, Qianxiang; Liu, Zhongqi
With the development of manned space technology, space rendezvous and docking (RVD) technology will play an increasingly important role. The astronauts' participation in the final close-in phase, under combined man-machine control, is an important element of RVD technology. Spacecraft RVD control involves a control problem with a total of 12 degrees of freedom, covering position and attitude relative to inertial space and the orbit. Therefore, in order to reduce the astronauts' operational load, reduce the safety requirements placed on the ground station, and achieve optimal performance of the whole man-machine system, it is necessary to study how many control parameters should be assigned to the astronaut and how many to the spacecraft's automatic control system. In this study, under laboratory conditions on the ground, a method was put forward to develop an experimental system in which the performance of combined man-machine control of spaceship RVD could be evaluated. After the RVD precision requirements were determined, 26 male volunteers aged 20-40 took part in the performance evaluation experiments. The RVD control success rate and the total thruster ignition time were chosen as evaluation indices. Results show that if the subject handled no more than three RVD parameter control tasks, with the remaining parameter control tasks completed by automation, the RVD success rate was larger than eighty-eight percent and the fuel consumption was optimized. In addition, two subjects managed to finish all six RVD parameter control tasks given enough training. In conclusion, if the astronauts' role is to be integrated into RVD control, it is suitable for them to handle the heading, pitch and roll control in order to assure high man-machine system performance. If astronauts are needed to perform all parameter control, two points should be taken into consideration: sufficient fuel and a sufficiently long operation time.
NASA Technical Reports Server (NTRS)
Choi, Benjamin B.; Hunker, Keith R.; Hartwig, Jason; Brown, Gerald V.
2017-01-01
The NASA Glenn Research Center (GRC) has been developing high efficiency, high power density superconducting (SC) electric machines in full support of electrified aircraft propulsion (EAP) systems for a future electric aircraft. A SC coil test rig has been designed and built to perform static and AC measurements on BSCCO, (RE)BCO, and YBCO high temperature superconducting (HTS) wires and coils at liquid nitrogen (LN2) temperature. In this paper, DC measurements on five SC coil configurations of various geometries in zero external magnetic field are performed to develop good measurement technique and to determine the critical current (Ic) and the sharpness (n value) of the super-to-normal transition. Standard procedures for coil design, fabrication, coil mounting, micro-volt measurement, cryogenic testing, current control, and data acquisition were also established. Experimentally measured critical currents are compared with theoretically predicted values based on an electric-field criterion (Ec). These data are essential to quantify the SC electric machine operation limits where the SC begins to exhibit non-zero resistance. All test data will be utilized to assess the feasibility of using HTS coils in the development of a fully superconducting AC electric machine for an aircraft electric propulsion system.
Liu, Guang-Hui; Shen, Hong-Bin; Yu, Dong-Jun
2016-04-01
Accurately predicting protein-protein interaction sites (PPIs) is currently a hot topic because it has been demonstrated to be very useful for understanding disease mechanisms and designing drugs. Machine-learning-based computational approaches have been broadly utilized and demonstrated to be useful for PPI prediction. However, directly applying traditional machine learning algorithms, which often assume that samples in different classes are balanced, often leads to poor performance because of the severe class imbalance that exists in the PPI prediction problem. In this study, we propose a novel method for improving PPI prediction performance by relieving the severity of class imbalance using a data-cleaning procedure and reducing predicted false positives with a post-filtering procedure: First, a machine-learning-based data-cleaning procedure is applied to remove those marginal targets, which may potentially have a negative effect on training a model with a clear classification boundary, from the majority samples to relieve the severity of class imbalance in the original training dataset; then, a prediction model is trained on the cleaned dataset; finally, an effective post-filtering procedure is further used to reduce potential false positive predictions. Stringent cross-validation and independent validation tests on benchmark datasets demonstrated the efficacy of the proposed method, which exhibits highly competitive performance compared with existing state-of-the-art sequence-based PPIs predictors and should supplement existing PPI prediction methods.
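The two procedures can be illustrated with a small sketch: a preliminary model scores the training data so that marginal majority-class samples can be dropped, and at prediction time a stricter probability cutoff trims false positives. The preliminary model, margin, and cutoff below are assumptions, not the paper's exact choices.

```python
# Hedged sketch: data cleaning for class imbalance plus post-filtering.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def clean_majority(X, y, margin=0.4):
    """Drop majority-class (y==0) samples a preliminary model finds marginal."""
    pre = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    p = pre.predict_proba(X)[:, 1]
    keep = (y == 1) | ((y == 0) & (p < margin))   # keep all minority samples
    return X[keep], y[keep]

def post_filter(model, X, strict=0.8):
    """Accept only high-confidence positives to reduce false positives."""
    p = model.predict_proba(X)[:, 1]
    return (p >= strict).astype(int)
```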
Leger, Stefan; Zwanenburg, Alex; Pilz, Karoline; Lohaus, Fabian; Linge, Annett; Zöphel, Klaus; Kotzerke, Jörg; Schreiber, Andreas; Tinhofer, Inge; Budach, Volker; Sak, Ali; Stuschke, Martin; Balermpas, Panagiotis; Rödel, Claus; Ganswindt, Ute; Belka, Claus; Pigorsch, Steffi; Combs, Stephanie E; Mönnich, David; Zips, Daniel; Krause, Mechthild; Baumann, Michael; Troost, Esther G C; Löck, Steffen; Richter, Christian
2017-10-16
Radiomics applies machine learning algorithms to quantitative imaging data to characterise the tumour phenotype and predict clinical outcome. For the development of radiomics risk models, a variety of different algorithms is available and it is not clear which one gives optimal results. Therefore, we assessed the performance of 11 machine learning algorithms combined with 12 feature selection methods by the concordance index (C-Index), to predict loco-regional tumour control (LRC) and overall survival for patients with head and neck squamous cell carcinoma. The considered algorithms are able to deal with continuous time-to-event survival data. Feature selection and model building were performed on a multicentre cohort (213 patients) and validated using an independent cohort (80 patients). We found several combinations of machine learning algorithms and feature selection methods which achieve similar results, e.g. C-Index = 0.71 and BT-COX: C-Index = 0.70 in combination with Spearman feature selection. Using the best performing models, patients were stratified into groups of low and high risk of recurrence. Significant differences in LRC were obtained between both groups on the validation cohort. Based on the presented analysis, we identified a subset of algorithms which should be considered in future radiomics studies to develop stable and clinically relevant predictive models for time-to-event endpoints.
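To make the evaluation protocol concrete, here is a minimal sketch of scoring a single feature-selection/algorithm combination by C-index on a held-out cohort; the synthetic data, the Spearman-with-time selector, and the Cox model from lifelines are assumptions standing in for the paper's 12 selectors and 11 algorithms.

```python
# A hedged sketch: Spearman feature selection + Cox model, scored by C-index.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n, p = 293, 30                                  # cohort size and feature count (assumed)
X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"f{i}" for i in range(p)])
risk_true = X["f0"] + 0.5 * X["f1"]             # synthetic ground-truth log-hazard
df = X.assign(time=rng.exponential(np.exp(-risk_true)), event=1)

train, valid = df.iloc[:213], df.iloc[213:]     # mimic the 213/80 cohort split

# Feature selection: Spearman correlation of each feature with survival time.
corr = train.drop(columns=["time", "event"]).corrwith(train["time"], method="spearman")
selected = corr.abs().sort_values(ascending=False).head(5).index.tolist()

cph = CoxPHFitter().fit(train[selected + ["time", "event"]],
                        duration_col="time", event_col="event")

# Validation: higher predicted risk should mean shorter time, hence the minus sign.
risk = cph.predict_partial_hazard(valid[selected])
print("validation C-index:",
      round(concordance_index(valid["time"], -risk, valid["event"]), 3))
```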
A 3D-printed device for polymer nanoimprint lithography
NASA Astrophysics Data System (ADS)
Caño-García, Manuel; Geday, Morten A.; Gil-Valverde, Manuel; Megías Zarco, Antonio; Otón, José M.; Quintana, Xabier
2018-02-01
Nanoimprint lithography (NIL) is an imprinting technique which has experienced increasing popularity due to its versatility in fabrication processes. Commercial NIL machines are readily available and achieve high quality results; however, these machines involve a relatively high investment. Hence, small laboratories often choose to perform NIL copies in a more rudimentary and cheaper way. A new, simple system is presented in this document. It is based on two devices which can be made in-house in plastic, by using a 3D printer, or in aluminum. Thus, the overall manufacturing complexity is vastly reduced. The presented system includes pressure control and, potentially, temperature control. Replicas have been made using a sawtooth grating master with a pitch of around half a micrometre. High quality patterns with a low density of imperfections have been achieved on 2.25 cm² surfaces. The material chosen for the negative intermediary mould is PDMS. Imprint tests have been performed using the commercial hybrid polymer Ormostamp®.
Assessing the Performance of a Machine Learning Algorithm in Identifying Bubbles in Dust Emission
NASA Astrophysics Data System (ADS)
Xu, Duo; Offner, Stella S. R.
2017-12-01
Stellar feedback created by radiation and winds from massive stars plays a significant role in both physical and chemical evolution of molecular clouds. This energy and momentum leaves an identifiable signature (“bubbles”) that affects the dynamics and structure of the cloud. Most bubble searches are performed “by eye,” which is usually time-consuming, subjective, and difficult to calibrate. Automatic classifications based on machine learning make it possible to perform systematic, quantifiable, and repeatable searches for bubbles. We employ a previously developed machine learning algorithm, Brut, and quantitatively evaluate its performance in identifying bubbles using synthetic dust observations. We adopt magnetohydrodynamics simulations, which model stellar winds launching within turbulent molecular clouds, as an input to generate synthetic images. We use a publicly available three-dimensional dust continuum Monte Carlo radiative transfer code, HYPERION, to generate synthetic images of bubbles in three Spitzer bands (4.5, 8, and 24 μm). We designate half of our synthetic bubbles as a training set, which we use to train Brut along with citizen-science data from the Milky Way Project (MWP). We then assess Brut’s accuracy using the remaining synthetic observations. We find that Brut’s performance after retraining increases significantly, and it is able to identify yellow bubbles, which are likely associated with B-type stars. Brut continues to perform well on previously identified high-score bubbles, and over 10% of the MWP bubbles are reclassified as high-confidence bubbles, which were previously marginal or ambiguous detections in the MWP data. We also investigate the influence of the size of the training set, dust model, evolutionary stage, and background noise on bubble identification.
Method and system for fault accommodation of machines
NASA Technical Reports Server (NTRS)
Goebel, Kai Frank (Inventor); Subbu, Rajesh Venkat (Inventor); Rausch, Randal Thomas (Inventor); Frederick, Dean Kimball (Inventor)
2011-01-01
A method for multi-objective fault accommodation using predictive modeling is disclosed. The method includes using a simulated machine that simulates a faulted actual machine, and using a simulated controller that simulates an actual controller. A multi-objective optimization process is performed, based on specified control settings for the simulated controller and specified operational scenarios for the simulated machine controlled by the simulated controller, to generate a Pareto frontier-based solution space relating performance of the simulated machine to settings of the simulated controller, including adjustment to the operational scenarios to represent a fault condition of the simulated machine. Control settings of the actual controller are adjusted, represented by the simulated controller, for controlling the actual machine, represented by the simulated machine, in response to a fault condition of the actual machine, based on the Pareto frontier-based solution space, to maximize desirable operational conditions and minimize undesirable operational conditions while operating the actual machine in a region of the solution space defined by the Pareto frontier.
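The central data structure here is the Pareto frontier over simulated outcomes. Below is a small, hedged sketch of extracting the non-dominated points from a set of two-objective simulation results; the objectives and random outcomes are placeholders, not the patent's actual performance measures.

```python
# A minimal sketch of Pareto-frontier extraction over simulated control outcomes.
import numpy as np

rng = np.random.default_rng(0)
# Columns (assumed): maximize performance margin, minimize an undesirable condition.
outcomes = rng.random((200, 2))

def pareto_front(points, maximize=(True, False)):
    """Keep points not dominated in every objective by any other point."""
    sign = np.where(maximize, 1.0, -1.0)
    p = points * sign                 # now "larger is better" in every column
    keep = []
    for i, row in enumerate(p):
        dominated = np.any(np.all(p >= row, axis=1) & np.any(p > row, axis=1))
        if not dominated:
            keep.append(i)
    return points[keep]

front = pareto_front(outcomes)
print(f"{len(front)} non-dominated control settings out of {len(outcomes)}")
```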
Machinability of IPS Empress 2 framework ceramic.
Schmidt, C; Weigl, P
2000-01-01
Using ceramic materials for the automatic production of ceramic dentures by CAD/CAM is a challenge, because many technological, medical, and optical demands must be considered. The IPS Empress 2 framework ceramic meets most of them. This study shows the possibilities for machining this ceramic with economical parameters. The long lifetime required of ceramic dentures demands a ductilely machined surface to avoid the well-known subsurface damage of brittle materials caused by machining. Slow and rapid damage propagation begins at break-outs and cracks, and limits lifetime significantly. Therefore, ductilely machined surfaces are an important requirement when machining dental ceramics. The machining tests were performed with various parameters such as tool grain size and feed speed. Denture ceramics were machined by jig grinding on a 5-axis CNC milling machine (Maho HGF 500) with a high-speed spindle up to 120,000 rpm. The results of the wear test indicate low tool wear. With one tool, eight occlusal surfaces can be machined, including roughing and finishing. One occlusal surface takes about 60 min of machining time. Recommended parameters for roughing are a middle diamond grain size (D107), cutting speed v(c) = 4.7 m/s, feed speed v(ft) = 1000 mm/min, depth of cut a(e) = 0.06 mm, and width of contact a(p) = 0.8 mm; for finishing, an ultra-fine diamond grain size (D46), cutting speed v(c) = 4.7 m/s, feed speed v(ft) = 100 mm/min, depth of cut a(e) = 0.02 mm, and width of contact a(p) = 0.8 mm. The results of the machining tests provide a reference for using IPS Empress® 2 framework ceramic in CAD/CAM systems. Copyright 2000 John Wiley & Sons, Inc.
ATST telescope mount: telescope or machine tool?
NASA Astrophysics Data System (ADS)
Jeffers, Paul; Stolz, Günter; Bonomi, Giovanni; Dreyer, Oliver; Kärcher, Hans
2012-09-01
The Advanced Technology Solar Telescope (ATST) will be the largest solar telescope in the world, and will be able to provide the sharpest views ever taken of the solar surface. The telescope has a 4m aperture primary mirror, however due to the off axis nature of the optical layout, the telescope mount has proportions similar to an 8 meter class telescope. The technology normally used in this class of telescope is well understood in the telescope community and has been successfully implemented in numerous projects. The world of large machine tools has developed in a separate realm with similar levels of performance requirement but different boundary conditions. In addition the competitive nature of private industry has encouraged development and usage of more cost effective solutions both in initial capital cost and thru-life operating cost. Telescope mounts move relatively slowly with requirements for high stability under external environmental influences such as wind buffeting. Large machine tools operate under high speed requirements coupled with high application of force through the machine but with little or no external environmental influences. The benefits of these parallel development paths and the ATST system requirements are being combined in the ATST Telescope Mount Assembly (TMA). The process of balancing the system requirements with new technologies is based on the experience of the ATST project team, Ingersoll Machine Tools who are the main contractor for the TMA and MT Mechatronics who are their design subcontractors. This paper highlights a number of these proven technologies from the commercially driven machine tool world that are being introduced to the TMA design. Also the challenges of integrating and ensuring that the differences in application requirements are accounted for in the design are discussed.
Xie, X S; Qi, C; Du, X Y; Shi, W W; Zhang, M
2016-02-20
To investigate the features of hand-transmitted vibration of common vibration tools in workplaces for automobile casting and assembly. From September to October 2014, measurement and spectral analysis were performed for 16 typical hand tools (including percussion drill, pneumatic wrench, grinding machine, internal grinder, and arc welding machine) in 6 workplaces for automobile casting and assembly, according to ISO 5349-1-2001 (Mechanical vibration - Measurement and evaluation of human exposure to hand-transmitted vibration - Part 1: General requirements) and ISO 5349-2-2001 (Part 2: Practical guidance for measurement at the workplace). The vibration acceleration waveforms of the shearing machine, arc welding machine, and pneumatic wrench were mainly impact and random waves, while those of the internal grinder, angle grinder, percussion drill, and grinding machine were mainly long- and short-period waves. The daily exposure duration to vibration for the electric wrench, pneumatic wrench, shearing machine, percussion drill, and internal grinder was about 150 minutes, while that for the plasma cutting machine, angle grinder, grinding machine, bench grinder, and arc welding machine was about 400 minutes. The ranges of the vibration total value (ahv) were as follows: pneumatic wrench 0.30-11.04 m/s², grinding wheel 1.61-8.97 m/s², internal grinder 1.46-8.70 m/s², percussion drill 11.10-14.50 m/s², and arc welding machine 0.21-2.18 m/s². The workers engaged in cleaning had the longest daily exposure duration to vibration; their 8-hour energy-equivalent frequency-weighted acceleration [A(8)] was 8.03 m/s², while that of workers engaged in assembly was 4.78 m/s². The frequency spectrogram with a 1/3-octave frequency interval showed that the grinding machine, angle grinder, and percussion drill had high vibration accelerations, and the vibration limit curve was recommended for tools with a daily exposure duration longer than 400 min/d. The workers who are engaged in cleaning, grinding, and a few assembly positions and who use the grinding machine, angle grinder, internal grinder, and percussion drill are exposed to vibrations with high acceleration at the high end of the frequency spectrum. Hand-transmitted vibration in the cutting, polishing, and cleaning positions of automobile casting is particularly harmful, and the harm caused by the pneumatic wrench in automobile assembly should be taken seriously.
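The A(8) figures quoted above follow the standard ISO 5349-1 daily exposure normalization A(8) = a_hv·sqrt(T/T0) with T0 = 8 h; a minimal sketch, with illustrative values echoing the cleaning task:

```python
# ISO 5349-1 daily vibration exposure: A(8) = a_hv * sqrt(T / T0), T0 = 8 h.
# The example values are illustrative, not the study's measurements.
import math

def a8(a_hv_ms2: float, exposure_minutes: float) -> float:
    """8-hour energy-equivalent frequency-weighted acceleration (m/s^2)."""
    return a_hv_ms2 * math.sqrt(exposure_minutes / (8 * 60))

print(f"A(8) = {a8(9.0, 380):.2f} m/s^2")   # hypothetical grinding-machine exposure
```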
NASA Astrophysics Data System (ADS)
Feng, Jianjun; Li, Chengzhe; Wu, Zhi
2017-08-01
As an important part of the valve opening and closing controller in an engine, the camshaft is subject to high machining accuracy requirements in its design. Taking the high-speed camshaft grinder spindle system as the research object and the spindle system performance as the optimization target, this paper first uses SolidWorks to establish a three-dimensional finite element model (FEM) of the spindle system, then conducts static and modal analyses by applying the established FEM in ANSYS Workbench, and finally uses the design optimization function of ANSYS Workbench to optimize the structural parameters of the spindle system. The results prove that the design of the spindle system fully meets the production requirements, and that the performance of the optimized spindle system is improved. In addition, this paper provides an analysis and optimization method for other grinder spindle systems.
NASA Astrophysics Data System (ADS)
Dasgupta, S.; Mukherjee, S.
2016-09-01
One of the most significant factors in metal cutting is tool life. In this research work, the effects of machining parameters on tool life under a wet machining environment were studied. Tool life characteristics of a brazed carbide cutting tool machining mild steel were examined, and the machining parameters were optimized based on a Taguchi design of experiments. The experiments were conducted using three factors, spindle speed, feed rate and depth of cut, each having three levels. Nine experiments were performed on a high-speed semi-automatic precision central lathe. ANOVA was used to determine the level of importance of the machining parameters on tool life. The optimum combination of machining parameters was obtained by analysis of the S/N ratio. A mathematical model based on multiple regression analysis was developed to predict the tool life. Taguchi's orthogonal array analysis revealed the optimal combination of parameters at the lower levels of spindle speed, feed rate and depth of cut, which are 550 rpm, 0.2 mm/rev and 0.5 mm respectively. The main effects plot reiterated the same. The variation of tool life with the different process parameters has been plotted. Feed rate has the most significant effect on tool life, followed by spindle speed and depth of cut.
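The S/N ratio analysis referred to above uses, for tool life, the larger-the-better form S/N = -10·log10(mean(1/y²)). A short sketch with illustrative (not the paper's) tool-life values:

```python
# Larger-the-better S/N ratio used to rank Taguchi factor levels; data are synthetic.
import numpy as np

# Hypothetical tool life (min) for a 3-level factor, three runs per level.
tool_life = {
    "550 rpm": [42.0, 44.5, 41.2],
    "750 rpm": [35.1, 33.8, 36.0],
    "950 rpm": [28.4, 27.9, 29.3],
}

def sn_larger_is_better(y):
    """S/N = -10 * log10(mean(1 / y^2)); higher means longer, more stable life."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

for level, runs in tool_life.items():
    print(f"{level}: S/N = {sn_larger_is_better(runs):.2f} dB")
# The level with the highest mean S/N ratio is chosen as optimal, here the lowest speed.
```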
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, Brian; Brightwell, Ronald B.; Grant, Ryan
This report presents a specification for the Portals 4 network programming interface. Portals 4 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4 is well suited to massively parallel processing and embedded systems. Portals 4 represents an adaption of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.
The Portals 4.0 network programming interface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, Brian W.; Brightwell, Ronald Brian; Pedretti, Kevin
2012-11-01
This report presents a specification for the Portals 4.0 network programming interface. Portals 4.0 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4.0 is well suited to massively parallel processing and embedded systems. Portals 4.0 represents an adaption of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4.0 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.
Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun
2017-11-10
In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numeric control machines is crucial in guaranteeing a high-convergence ratio for the optical surface error. It is necessary to consider the machine dynamics limitations in the numerical dwell time algorithms. In this paper, these constraints on dwell time distribution are analyzed, and a model of the equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.
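The paper's minimum-equal-extra-removal algorithm is not reproduced here, but the underlying positive dwell-time problem can be sketched: removal is modelled as an influence matrix times a dwell-time vector, with non-negativity enforced, for example by non-negative least squares (NNLS).

```python
# A minimal sketch of the positive dwell-time problem: removal = A @ t with t >= 0.
# The Gaussian influence function and 1-D grid are assumptions for illustration.
import numpy as np
from scipy.optimize import nnls

n_points = 50
x = np.linspace(0, 1, n_points)

# Hypothetical Gaussian tool influence function sampled on a 1-D surface grid.
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05**2))

target_removal = 1.0 + 0.3 * np.sin(6 * np.pi * x)   # desired material removal
t, residual = nnls(A, target_removal)                # dwell time, t >= 0 guaranteed

print(f"max dwell {t.max():.3f}, residual error {residual:.2e}")
# Machine-dynamics limits (e.g. feed-rate and acceleration bounds) would enter as
# extra constraints, which is where the paper's equal-extra-removal model comes in.
```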
CP Violation and the Future of Flavor Physics
NASA Astrophysics Data System (ADS)
Kiesling, Christian
2009-12-01
With the nearing completion of the first-generation experiments at asymmetric e⁺e⁻ colliders running at the Υ(4S) resonance ("B-Factories"), a new era of high luminosity machines is at the horizon. We report here on the plans at KEK in Japan to upgrade the KEKB machine ("SuperKEKB") with the goal of achieving an instantaneous luminosity exceeding 8×10³⁵ cm⁻² s⁻¹, which is almost two orders of magnitude higher than KEKB. Together with the machine, the Belle detector will be upgraded as well ("Belle-II"), with significant improvements to increase its background tolerance as well as improving its physics performance. The new generation of experiments is scheduled to take first data in the year 2013.
Highly parallel sparse Cholesky factorization
NASA Technical Reports Server (NTRS)
Gilbert, John R.; Schreiber, Robert
1990-01-01
Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.
One method for life time estimation of a bucket wheel machine for coal moving
NASA Astrophysics Data System (ADS)
Vîlceanu, Fl; Iancu, C.
2016-08-01
Rehabilitation of outdated equipment whose lifetime has expired, or is in its final period, together with the high investment cost of replacement, makes it rational to extend equipment life. Rehabilitation involves checking operational safety, based on relevant expertise of the load-bearing metal structures, and assessing the residual lifetime. Bucket wheel machines for coal handling constitute basic machinery in the coal yards of power plants. The remaining life can be estimated by checking the loading on the most stressed subassembly by finite element analysis of a welding detail. The paper presents, step by step, the method of calculus applied to establish the residual lifetime of a bucket wheel machine for coal handling using non-destructive methods of study (fatigue cracking analysis + FEA). In order to establish the actual state of the machine and the areas subject to study, FEA of this mining equipment was performed on the geometric model of the analyzed mechanical structures, with powerful CAD/FEA programs. By applying the method, the residual lifetime can be calculated by extending the results from the most stressed area of the equipment to the entire machine, thus saving the time and money of expensive replacements.
Geometry and surface damage in micro electrical discharge machining of micro-holes
NASA Astrophysics Data System (ADS)
Ekmekci, Bülent; Sayar, Atakan; Tecelli Öpöz, Tahsin; Erden, Abdulkadir
2009-10-01
Geometry and subsurface damage of blind micro-holes produced by micro electrical discharge machining (micro-EDM) is investigated experimentally to explore the relational dependence with respect to pulse energy. For this purpose, micro-holes are machined with various pulse energies on plastic mold steel samples using a tungsten carbide tool electrode and a hydrocarbon-based dielectric liquid. Variations in the micro-hole geometry, micro-hole depth and over-cut in micro-hole diameter are measured. Then, unconventional etching agents are applied on the cross sections to examine micro structural alterations within the substrate. It is observed that the heat-damaged segment is composed of three distinctive layers, which have relatively high thicknesses and vary noticeably with respect to the drilling depth. Crack formation is identified on some sections of the micro-holes even by utilizing low pulse energies during machining. It is concluded that the cracking mechanism is different from cracks encountered on the surfaces when machining is performed by using the conventional EDM process. Moreover, an electrically conductive bridge between work material and debris particles is possible at the end tip during machining which leads to electric discharges between the piled segments of debris particles and the tool electrode during discharging.
Jiao, Y; Chen, R; Ke, X; Cheng, L; Chu, K; Lu, Z; Herskovits, E H
2011-01-01
Autism spectrum disorder (ASD) is a neurodevelopmental disorder, of which Asperger syndrome and high-functioning autism are subtypes. Our goal is: 1) to determine whether a diagnostic model based on single-nucleotide polymorphisms (SNPs), brain regional thickness measurements, or brain regional volume measurements can distinguish Asperger syndrome from high-functioning autism; and 2) to compare the SNP, thickness, and volume-based diagnostic models. Our study included 18 children with ASD: 13 subjects with high-functioning autism and 5 subjects with Asperger syndrome. For each child, we obtained 25 SNPs for 8 ASD-related genes; we also computed regional cortical thicknesses and volumes for 66 brain structures, based on structural magnetic resonance (MR) examination. To generate diagnostic models, we employed five machine-learning techniques: decision stump, alternating decision trees, multi-class alternating decision trees, logistic model trees, and support vector machines. For SNP-based classification, three decision-tree-based models performed better than the other two machine-learning models. The performance metrics for three decision-tree-based models were similar: decision stump was modestly better than the other two methods, with accuracy = 90%, sensitivity = 0.95 and specificity = 0.75. All thickness and volume-based diagnostic models performed poorly. The SNP-based diagnostic models were superior to those based on thickness and volume. For SNP-based classification, rs878960 in GABRB3 (gamma-aminobutyric acid A receptor, beta 3) was selected by all tree-based models. Our analysis demonstrated that SNP-based classification was more accurate than morphometry-based classification in ASD subtype classification. Also, we found that one SNP--rs878960 in GABRB3--distinguishes Asperger syndrome from high-functioning autism.
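For readers unfamiliar with the term, a decision stump is simply a depth-1 decision tree. A hedged sketch on a toy genotype matrix follows; the study's 18-subject data are not reproduced here, so the matrix and labels are synthetic.

```python
# A minimal sketch of the decision-stump baseline (a depth-1 tree) on SNP features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# 18 subjects x 25 SNPs coded 0/1/2; labels: 1 = Asperger, 0 = high-functioning autism.
X = rng.integers(0, 3, size=(18, 25))
y = np.array([1] * 5 + [0] * 13)

stump = DecisionTreeClassifier(max_depth=1, random_state=0)  # a "decision stump"
acc = cross_val_score(stump, X, y, cv=3, scoring="accuracy")
print(f"mean CV accuracy: {acc.mean():.2f}")
# With real data the stump effectively asks one question (e.g. the genotype of
# rs878960 in GABRB3), which is why it is so easy to interpret clinically.
```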
NASA Astrophysics Data System (ADS)
Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.
2012-12-01
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
Parameter optimization of electrochemical machining process using black hole algorithm
NASA Astrophysics Data System (ADS)
Singh, Dinesh; Shukla, Rajkamal
2017-12-01
Advanced machining processes are significant because higher accuracy in machined components is required in the manufacturing industries. Parameter optimization of machining processes gives optimum control to achieve the desired goals. In this paper, the electrochemical machining (ECM) process is considered and its performance is evaluated using the black hole algorithm (BHA). BHA builds on the fundamental idea of black hole theory and has few operating parameters to tune. The two performance parameters, material removal rate (MRR) and overcut (OC), are considered separately to obtain optimum machining parameter settings using BHA. The variations of the process parameters with respect to the performance parameters are reported for a better and more effective understanding of the considered process using a single objective at a time. The results obtained using BHA are found to be better when compared with the results of other metaheuristic algorithms, such as the genetic algorithm (GA), artificial bee colony (ABC) and biogeography-based optimization (BBO), attempted by previous researchers.
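For orientation, the black hole algorithm itself is compact enough to sketch in a few lines; the quadratic "overcut" surrogate below is an assumption standing in for the paper's empirical ECM models.

```python
# A compact sketch of the black hole algorithm on a stand-in objective.
import numpy as np

def overcut(x):                     # hypothetical objective to minimize
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(0)
n_stars, dim, iters = 20, 3, 200
lo, hi = 0.0, 1.0
stars = rng.uniform(lo, hi, size=(n_stars, dim))
bh = stars[np.argmin(overcut(stars))].copy()    # initial black hole = best star

for _ in range(iters):
    # Every star drifts towards the black hole.
    stars += rng.uniform(0, 1, size=(n_stars, 1)) * (bh - stars)
    fit = overcut(stars)
    # Promote a star to black hole if it beats the current one.
    if fit.min() < overcut(bh):
        bh = stars[np.argmin(fit)].copy()
    # Event horizon: stars that wander inside it are replaced by fresh random stars.
    radius = overcut(bh) / fit.sum()
    swallowed = np.linalg.norm(stars - bh, axis=1) < radius
    stars[swallowed] = rng.uniform(lo, hi, size=(swallowed.sum(), dim))

print("best parameters:", np.round(bh, 3), "objective:", overcut(bh))
```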
Performance Analysis of Abrasive Waterjet Machining Process at Low Pressure
NASA Astrophysics Data System (ADS)
Murugan, M.; Gebremariam, MA; Hamedon, Z.; Azhari, A.
2018-03-01
Normally, a commercial waterjet cutting machine can generate water pressure up to 600 MPa. This range of pressure is used to machine a wide variety of materials, and hence waterjet cutting machines are expensive. Therefore, there is a need to develop a low cost waterjet machine in order to make the technology more accessible. Due to its low cost, such a machine may only be able to generate water pressure at a much reduced rate. The present study investigates the performance of the abrasive waterjet machining process at low cutting pressure using a self-developed low cost waterjet machine. It aims to study the feasibility of machining various materials at low pressure, which can later aid the further development of an effective low cost waterjet machine. A total of three different materials were machined at a low pressure of 34 MPa: mild steel, aluminium alloy 6061 and the plastic Delrin®. Furthermore, the traverse rate was varied between 1 and 3 mm/min. The cutting performance at low pressure for the different materials was evaluated in terms of depth of penetration, kerf taper ratio and surface roughness. It was found that all samples could be machined at low cutting pressure with varied quality. Also, the depth of penetration decreases with an increase in the traverse rate, while the surface roughness and kerf taper ratio increase with an increase in the traverse rate. It can be concluded that a low cost waterjet machine with a much reduced water pressure can be successfully used for machining certain materials with acceptable quality.
Rapid tomographic reconstruction based on machine learning for time-resolved combustion diagnostics
NASA Astrophysics Data System (ADS)
Yu, Tao; Cai, Weiwei; Liu, Yingzheng
2018-04-01
Optical tomography has attracted surged research efforts recently due to the progress in both the imaging concepts and the sensor and laser technologies. The high spatial and temporal resolutions achievable by these methods provide unprecedented opportunity for diagnosis of complicated turbulent combustion. However, due to the high data throughput and the inefficiency of the prevailing iterative methods, the tomographic reconstructions which are typically conducted off-line are computationally formidable. In this work, we propose an efficient inversion method based on a machine learning algorithm, which can extract useful information from the previous reconstructions and build efficient neural networks to serve as a surrogate model to rapidly predict the reconstructions. Extreme learning machine is cited here as an example for demonstrative purpose simply due to its ease of implementation, fast learning speed, and good generalization performance. Extensive numerical studies were performed, and the results show that the new method can dramatically reduce the computational time compared with the classical iterative methods. This technique is expected to be an alternative to existing methods when sufficient training data are available. Although this work is discussed under the context of tomographic absorption spectroscopy, we expect it to be useful also to other high speed tomographic modalities such as volumetric laser-induced fluorescence and tomographic laser-induced incandescence which have been demonstrated for combustion diagnostics.
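The extreme learning machine cited as an example admits a very small sketch: a random, fixed hidden layer with output weights solved in closed form, which is what makes prediction a single matrix pass. The projection/reconstruction training pairs below are simulated placeholders, not real tomographic data.

```python
# A bare-bones extreme learning machine: random hidden weights, output weights
# solved in one shot by least squares. Training pairs are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_proj, n_vox, n_hidden = 500, 64, 256, 300

# Hypothetical training pairs: projection data -> reconstructed field.
X = rng.normal(size=(n_train, n_proj))          # measured projections
T = rng.normal(size=(n_train, n_vox))           # matching reconstructions

# Random, fixed input layer; only the linear output layer is learned.
W = rng.normal(size=(n_proj, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                          # hidden activations
beta, *_ = np.linalg.lstsq(H, T, rcond=None)    # output weights in closed form

# Prediction is a single matrix pass: the source of the speed-up over
# iterative reconstruction.
recon = np.tanh(X[:1] @ W + b) @ beta
print(recon.shape)                              # (1, 256)
```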
sw-SVM: sensor weighting support vector machines for EEG-based brain-computer interfaces.
Jrad, N; Congedo, M; Phlypo, R; Rousseau, S; Flamary, R; Yger, F; Rakotomamonjy, A
2011-10-01
In many machine learning applications, like brain-computer interfaces (BCI), high-dimensional sensor array data are available. Sensor measurements are often highly correlated, and the signal-to-noise ratio is not homogeneously spread across sensors. Thus, the collected data are highly variable and discrimination tasks are challenging. In this work, we focus on sensor weighting as an efficient tool to improve the classification procedure. We present an approach integrating sensor weighting into the classification framework. Sensor weights are considered as hyper-parameters to be learned by a support vector machine (SVM). The resulting sensor weighting SVM (sw-SVM) is designed to satisfy a margin criterion, that is, the generalization error. Experimental studies on two data sets are presented, a P300 data set and an error-related potential (ErrP) data set. For the P300 data set (BCI competition III), for which a large number of trials is available, the sw-SVM proves to perform equivalently to the ensemble SVM strategy that won the competition. For the ErrP data set, for which a small number of trials are available, the sw-SVM shows superior performance compared to three state-of-the-art approaches. These results suggest that the sw-SVM promises to be useful for event-related potential classification, even with a small number of training trials.
Modeling of beam-induced damage of the LHC tertiary collimators
NASA Astrophysics Data System (ADS)
Quaranta, E.; Bertarelli, A.; Bruce, R.; Carra, F.; Cerutti, F.; Lechner, A.; Redaelli, S.; Skordis, E.; Gradassi, P.
2017-09-01
Modern hadron machines with high beam intensity may suffer from material damage in the case of large beam losses and even beam-intercepting devices, such as collimators, can be harmed. A systematic method to evaluate thresholds of damage owing to the impact of high energy particles is therefore crucial for safe operation and for predicting possible limitations in the overall machine performance. For this, a three-step simulation approach is presented, based on tracking simulations followed by calculations of energy deposited in the impacted material and hydrodynamic simulations to predict the thermomechanical effect of the impact. This approach is applied to metallic collimators at the CERN Large Hadron Collider (LHC), which in standard operation intercept halo protons, but risk to be damaged in the case of extraction kicker malfunction. In particular, tertiary collimators protect the aperture bottlenecks, their settings constrain the reach in β* and hence the achievable luminosity at the LHC experiments. Our calculated damage levels provide a very important input on how close to the beam these collimators can be operated without risk of damage. The results of this approach have been used already to push further the performance of the present machine. The risk of damage is even higher in the upgraded high-luminosity LHC with higher beam intensity, for which we quantify existing margins before equipment damage for the proposed baseline settings.
Imbalance aware lithography hotspot detection: a deep learning approach
NASA Astrophysics Data System (ADS)
Yang, Haoyu; Luo, Luyang; Su, Jing; Lin, Chenxi; Yu, Bei
2017-07-01
With the advancement of very large scale integration (VLSI) technology nodes, lithographic hotspots become a serious problem that affects manufacturing yield. Lithography hotspot detection at the post-OPC stage is imperative to check potential circuit failures when transferring designed patterns onto silicon wafers. Although conventional lithography hotspot detection methods, such as machine learning, have achieved satisfactory performance, with the extreme scaling of transistor feature sizes and layout patterns growing in complexity, conventional methodologies may suffer from performance degradation. For example, manual or ad hoc feature extraction in a machine learning framework may lose important information when predicting potential errors in ultra-large-scale integrated circuit masks. We present a deep convolutional neural network (CNN) that targets representative feature learning in lithography hotspot detection. We carefully analyze the impact and effectiveness of different CNN hyperparameters, through which a hotspot-detection-oriented neural network model is established. Because hotspot patterns are always in the minority in VLSI mask design, the training dataset is highly imbalanced. In this situation, a neural network is no longer reliable, because a trained model with high classification accuracy may still suffer from a large number of false negative results (missed hotspots), which is fatal in hotspot detection problems. To address the imbalance problem, we further apply hotspot upsampling and random mirror flipping before training the network. Experimental results show that our proposed neural network model achieves comparable or better performance on the ICCAD 2012 contest benchmark compared to state-of-the-art hotspot detectors based on deep or representative machine learning.
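The two rebalancing steps named above, hotspot upsampling and random mirror flipping, can be sketched directly on array data; the clip shapes and the 1:99 imbalance below are assumptions, not the benchmark's actual statistics.

```python
# A minimal sketch of minority upsampling plus random mirror flips on layout clips.
import numpy as np

rng = np.random.default_rng(0)
hotspots = rng.random((10, 64, 64))       # minority-class clips
normals = rng.random((990, 64, 64))       # majority-class clips

# Upsample hotspots (with replacement) to match the majority count.
idx = rng.integers(0, len(hotspots), size=len(normals))
hotspots_up = hotspots[idx]

# Random mirror flips add cheap, label-preserving variety to the duplicates.
flip_lr = rng.random(len(hotspots_up)) < 0.5
flip_ud = rng.random(len(hotspots_up)) < 0.5
hotspots_up[flip_lr] = hotspots_up[flip_lr, :, ::-1]
hotspots_up[flip_ud] = hotspots_up[flip_ud, ::-1, :]

X = np.concatenate([normals, hotspots_up])
y = np.concatenate([np.zeros(len(normals)), np.ones(len(hotspots_up))])
print(X.shape, y.mean())                   # balanced dataset ready for CNN training
```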
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bandyopadhyay, B.P.
1997-02-01
The application of Electrolytic In-Process Dressing (ELID) for highly efficient and stable grinding of ceramic parts is discussed. This research was performed at the Institute of Physical and Chemical Research (RIKEN), Tokyo, Japan, June 1995 through August 1995. Experiments were conducted using a vertical machining center. The silicon nitride work material, of Japanese manufacture and supplied in the form of a rectangular block, was clamped to a vice which was firmly fixed on the base of a strain gage dynamometer. The dynamometer was clamped on the machining center table. Reciprocating grinding was performed with a flat-faced diamond grinding wheel. The output from the dynamometer was recorded with a data acquisition system and the normal component of the force was monitored. Experiments were carried out under various cutting conditions, different ELID conditions, and various grinding wheel bond types. Rough grinding wheels of grit sizes #170 and #140 were used in the experiments. Compared to conventional grinding, there was a significant reduction in grinding force with ELID grinding. Therefore, ELID grinding can be recommended for high material removal rate grinding, low rigidity machines, and low rigidity workpieces. Compared to normal grinding, a reduction in grinding ratio was observed when ELID grinding was performed. A negative aspect of the process, this reduced G-ratio derives from bond erosion and can be improved somewhat by adjustments in the ELID current. The results of this investigation are discussed in detail in this report.
Ojukwu, Chidiebele Petronilla; Anyanwu, Godson Emeka; Nwabueze, Augustine Chijindu; Anekwu, Emelie Morris; Chukwu, Sylvester Caesar
2017-01-01
Milling machine operators perform physically demanding tasks that can lead to work-related musculoskeletal disorders (WRMSDs), but the literature on WRMSDs among milling machine operators is scarce. Knowledge of the prevalence and risk factors of WRMSDs can be an appropriate basis for planning and implementing ergonomics intervention programs in the workplace. This study aimed to determine the prevalence, pattern and associated factors of WRMSDs among commercial milling machine operators in Enugu, Nigeria. This cross-sectional survey involved 148 commercial milling machine operators (74 hand-operated milling machine operators (HOMMO) and 74 electrically-operated milling machine operators (EOMMO)), within the age range of 18-65 years, conveniently selected from four markets in Enugu, Nigeria. A standard Nordic questionnaire was used to assess the prevalence of WRMSDs among the participants. Data were summarized using descriptive statistics. There was a significant difference (p = 0.001) in the prevalence of WRMSDs between HOMMOs (77%) and EOMMOs (50%). All body parts were affected in both groups, with the shoulders (85.1%) and lower back (46%) having the highest prevalence. Working in awkward and sustained postures, working with injury, poor workplace design, repetition of tasks, vibrating work equipment, reduced rest, high job demand and heavy lifting were significantly associated with the prevalence of WRMSDs. WRMSDs are prevalent among commercial milling machine operators, with higher occurrence in HOMMOs. Ergonomic interventions, including the re-design of milling machines and appropriate work posture education for machine operators, are recommended in the milling industry.
ERIC Educational Resources Information Center
South Carolina State Dept. of Education, Columbia. Office of Vocational Education.
This module on the knife machine, one in a series dealing with industrial sewing machines, their attachments, and operation, covers one topic: performing special operations on the knife machine (a single needle or multi-needle machine which sews and cuts at the same time). These components are provided: an introduction, directions, an objective,…
Electrical Discharge Machining (EDM) Gun Barrel Bore and Rifling Feasibility Study
1974-09-01
[Garbled report-documentation-page text; the only legible fragment refers to the high erosion rates encountered in high-performance gun designs such as the GAU-7/A.]
NASA Technical Reports Server (NTRS)
Mcmurtry, G. J.; Petersen, G. W. (Principal Investigator)
1975-01-01
The author has identified the following significant results. It was found that the high speed man-machine interaction capability is a distinct advantage of the Image 100; however, the small size of the digital computer in the system is a definite limitation. The system can be highly useful in an analysis mode in which it complements a large general purpose computer. The Image 100 was found to be extremely valuable in the analysis of aircraft MSS data, where the spatial resolution begins to approach photographic quality and the analyst can exercise interpretation judgements and readily interact with the machine.
Task Assignment Heuristics for Distributed CFD Applications
NASA Technical Reports Server (NTRS)
Lopez-Benitez, N.; Djomehri, M. J.; Biswas, R.; Biegel, Bryan (Technical Monitor)
2001-01-01
CFD applications require high-performance computational platforms: (1) complex physics and domain configuration demand strongly coupled solutions; (2) applications are CPU and memory intensive; and (3) huge resource requirements can only be satisfied by teraflop-scale machines or distributed computing.
ERIC Educational Resources Information Center
Stadt, Ronald; And Others
This catalog provides performance objectives, tasks, standards, and performance guides associated with current occupational information relating to the job content of machinists, specifically tool grinder operators, production lathe operators, and production screw machine operators. The catalog is comprised of 262 performance objectives, tool and…
Comparison and combination of several MeSH indexing approaches
Yepes, Antonio Jose Jimeno; Mork, James G.; Demner-Fushman, Dina; Aronson, Alan R.
2013-01-01
MeSH indexing of MEDLINE is becoming a more difficult task for the group of highly qualified indexing staff at the US National Library of Medicine, due to the large yearly growth of MEDLINE and the increasing size of MeSH. Since 2002, this task has been assisted by the Medical Text Indexer or MTI program. We extend previous machine learning analysis by adding a more diverse set of MeSH headings targeting examples where MTI has been shown to perform poorly. Machine learning algorithms exceed MTI’s performance on MeSH headings that are used very frequently and headings for which the indexing frequency is very low. We find that when we combine the MTI suggestions and the prediction of the learning algorithms, the performance improves compared to any single method for most of the evaluated MeSH headings. PMID:24551371
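One simple way to combine a rule-based indexer's suggestions with a learner's score, in the spirit (but not the letter) of the combination studied here, is a thresholded OR; the threshold and the toy decisions below are assumptions, not the paper's tuned fusion.

```python
# A small sketch of fusing indexer suggestions with learner probabilities.
def combine(mti_suggests: bool, learner_prob: float, threshold: float = 0.5) -> bool:
    """Assign the MeSH heading if either source is sufficiently confident."""
    return mti_suggests or learner_prob >= threshold

# Hypothetical decisions for one article and three headings.
candidates = {"Humans": (True, 0.93), "Neoplasms": (False, 0.61), "Mice": (False, 0.12)}
assigned = [h for h, (mti, p) in candidates.items() if combine(mti, p)]
print(assigned)   # ['Humans', 'Neoplasms']
```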
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, C.
Almost every computer architect dreams of achieving high system performance with low implementation costs. A multigauge machine can reconfigure its data-path width, provide parallelism, achieve better resource utilization, and sometimes can trade computational precision for increased speed. A simple experimental method is used here to capture the main characteristics of multigauging. The measurements indicate evidence of near-optimal speedups. Adapting these ideas in designing parallel processors incurs low costs and provides flexibility. Several operational aspects of designing a multigauge machine are discussed as well. Thus, this research reports the technical, economical, and operational feasibility studies of multigauging.
Magnet reliability in the Fermilab Main Injector and implications for the ILC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tartaglia, M.A.; Blowers, J.; Capista, D.
2007-08-01
The International Linear Collider reference design requires over 13000 magnets, of approximately 135 styles, which must operate with very high reliability. The Fermilab Main Injector represents a modern machine with many conventional magnet styles, each of significant quantity, that has now accumulated many hundreds of magnet-years of operation. We review here the performance of the magnets built for this machine, assess their reliability and categorize the failure modes, and discuss implications for reliability of similar magnet styles expected to be used at the ILC.
Health Informatics via Machine Learning for the Clinical Management of Patients.
Clifton, D A; Niehaus, K E; Charlton, P; Colopy, G W
2015-08-13
To review how health informatics systems based on machine learning methods have impacted the clinical management of patients by affecting clinical practice. We reviewed literature from 2010-2015 from databases such as PubMed, IEEE Xplore, and INSPEC, in which methods based on machine learning are likely to be reported. We bring together a broad body of literature, aiming to identify those leading examples of health informatics that have advanced the methodology of machine learning. While individual methods may have further examples that might be added, we have chosen some of the most representative, informative exemplars in each case. Our survey highlights that, while much research is taking place in this high-profile field, examples of systems that affect the clinical management of patients are seldom found. We show that substantial progress is being made in terms of methodology, often by data scientists working in close collaboration with clinical groups. Health informatics systems based on machine learning are in their infancy, and the translation of such systems into clinical management has yet to be performed at scale.
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, with no JNI bridging code being required.
FSW of Aluminum Tailor Welded Blanks across Machine Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hovanski, Yuri; Upadhyay, Piyush; Carlson, Blair
2015-02-16
Development and characterization of friction stir welded aluminum tailor welded blanks was successfully carried out on three separate machine platforms. Each was a commercially available, gantry style, multi-axis machine designed specifically for friction stir welding. Weld parameters were developed to support high volume production of dissimilar thickness aluminum tailor welded blanks at speeds of 3 m/min and greater. Parameters originally developed on an ultra-high stiffness servo driven machine were first transferred to a high stiffness servo-hydraulic friction stir welding machine, and subsequently transferred to a purpose built machine designed to accommodate thin sheet aluminum welding. The inherent beam stiffness, bearing compliance, and control system for each machine were distinctly unique, which posed specific challenges in transferring welding parameters across machine platforms. This work documents the challenges imposed by successfully transferring weld parameters from machine to machine, produced by different manufacturers and with unique control systems and interfaces.
Induced activation studies for the LHC upgrade to High Luminosity LHC
NASA Astrophysics Data System (ADS)
Adorisio, C.; Roesler, S.
2018-06-01
The Large Hadron Collider (LHC) will be upgraded in 2019/2020 to increase its luminosity (rate of collisions) by a factor of five beyond its design value and its integrated luminosity by a factor of ten, in order to maintain scientific progress and exploit its full capacity. The novel machine configuration, called High Luminosity LHC (HL-LHC), will consequently increase the activation level of its components. The evaluation of the radiological impact of HL-LHC operation in the Long Straight Sections of Insertion Region 1 (ATLAS) and Insertion Region 5 (CMS) is presented. Using the Monte Carlo code FLUKA, ambient dose equivalent rate estimations have been performed on the basis of two announced operating scenarios and the latest available machine layout. The HL-LHC project requires new technical infrastructure, with caverns and 300 m long tunnels along Insertion Regions 1 and 5. The new underground service galleries will be accessible during operation of the accelerator. The radiological risk assessment for the civil engineering work foreseen to start excavating the new galleries in the next LHC Long Shutdown, and the radiological impact of machine operation, are discussed.
Flexible Conformable Clamps for a Machining Cell with Applications to Turbine Blade Machining.
1983-05-01
Kurokawa, Eiki. The Robotics Institute, Carnegie-Mellon University, Pittsburgh, PA 15213. Interim report, May 1983. [Only report-documentation-page fragments survive; no abstract is legible.]
NASA Astrophysics Data System (ADS)
Czettl, C.; Pohler, M.
2016-03-01
Increasing demands on the material properties of iron-based workpiece materials, e.g. for the turbine industry, complicate the machining process and reduce the lifetime of cutting tools. Therefore, improved tool solutions, adapted to the requirements of the desired application, have to be developed. In particular, the interplay of macro- and micro-geometry, substrate material, coating and post-treatment processes is crucial for the durability of modern high performance tool solutions. Improved and novel analytical methods allow a detailed understanding of the material properties responsible for the wear behaviour of the tools. These support the knowledge-based development of tailored cutting materials for selected applications. One important factor for such a solution is the proper choice of coating material, which can be synthesized by physical or chemical vapor deposition techniques. Within this work, an overview of state-of-the-art coated carbide grades is presented and application examples are shown to demonstrate their high efficiency. Machining processes for a material range from cast iron and low carbon steels to high-alloyed steels are covered.
Entity recognition in the biomedical domain using a hybrid approach.
Basaldella, Marco; Furrer, Lenz; Tasso, Carlo; Rinaldi, Fabio
2017-11-09
This article describes a high-recall, high-precision approach for the extraction of biomedical entities from scientific articles. The approach uses a two-stage pipeline, combining a dictionary-based entity recognizer with a machine-learning classifier. First, the OGER entity recognizer, which has a bias towards high recall, annotates the terms that appear in selected domain ontologies. Subsequently, the Distiller framework uses this information as a feature for a machine learning algorithm to select the relevant entities only. For this step, we compare two different supervised machine-learning algorithms: Conditional Random Fields and Neural Networks. In an in-domain evaluation using the CRAFT corpus, we test the performance of the combined systems when recognizing chemicals, cell types, cellular components, biological processes, molecular functions, organisms, proteins, and biological sequences. Our best system combines dictionary-based candidate generation with Neural-Network-based filtering. It achieves an overall precision of 86% at a recall of 60% on the named entity recognition task, and a precision of 51% at a recall of 49% on the concept recognition task. These results are to our knowledge the best reported so far in this particular task.
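An illustrative two-stage pipeline in the spirit of this abstract: a dictionary matcher over-generates candidates (favouring recall), then a trained classifier filters them (restoring precision). The tiny dictionary, the features and the logistic-regression filter below are stand-ins for OGER and the Distiller models, not their actual implementations.

```python
# A toy two-stage entity recognizer: dictionary candidates + learned filter.
import re
from sklearn.linear_model import LogisticRegression

dictionary = {"p53": "protein", "apoptosis": "biological process", "hela cell": "cell type"}

def candidates(text):
    """Stage 1: high-recall, case-insensitive dictionary matching."""
    for term, etype in dictionary.items():
        for m in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
            yield m.group(), etype, m.start()

def features(mention, text, start):
    left = text[max(0, start - 20):start].lower()
    return [len(mention), int("induce" in left or "suppress" in left), int(mention.islower())]

# Stage 2: train the filter on tiny, hypothetical labelled mentions (1 = true entity).
labelled = [("p53 induces apoptosis", 1), ("the word apoptosis appears here", 0)]
X, y = [], []
for text, label in labelled:
    for mention, _, start in candidates(text):
        X.append(features(mention, text, start)); y.append(label)
clf = LogisticRegression().fit(X, y)

test = "Loss of p53 suppresses apoptosis in HeLa cell lines."
kept = [(m, t) for m, t, s in candidates(test) if clf.predict([features(m, test, s)])[0] == 1]
print(kept)
```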
A Machine Learning Ensemble Classifier for Early Prediction of Diabetic Retinopathy.
S K, Somasundaram; P, Alli
2017-11-09
The main complication of diabetes is diabetic retinopathy (DR), a retinal vascular disease that leads to blindness. Regular screening for early DR detection is a labor- and resource-intensive task; therefore, automatic detection of DR using computational techniques is a valuable solution. An automatic method is more reliable for determining the presence of an abnormality in fundus images (FI), but the classification step is often performed poorly. Recently, a few research works have been designed for analyzing the texture discrimination capacity in FI to distinguish healthy images. However, the feature extraction (FE) process was not performed well, due to the high dimensionality. Therefore, to identify retinal features for DR diagnosis and early detection using machine learning and ensemble classification, a method called the Machine Learning Bagging Ensemble Classifier (ML-BEC) is designed. The ML-BEC method comprises two stages. The first stage comprises extraction of candidate objects from retinal images (RI). The candidate objects, or features, for DR diagnosis include blood vessels, the optic nerve, neural tissue, the neuroretinal rim, and optic disc size, thickness and variance. These features are initially extracted by applying a machine learning technique called t-distributed Stochastic Neighbor Embedding (t-SNE). t-SNE constructs a probability distribution over pairs of high-dimensional images, separating them into similar and dissimilar pairs, and then defines a similar probability distribution over the points in a low-dimensional map, minimizing the Kullback-Leibler divergence between the two distributions with respect to the locations of the points on the map. The second stage comprises the application of ensemble classifiers to the extracted features to provide accurate analysis of digital FI using machine learning. In this stage, an automatic DR screening system using a Bagging Ensemble Classifier (BEC) is investigated. Through the voting process in ML-BEC, bagging minimizes the error due to the variance of the base classifier. With publicly available retinal image databases, our classifier is trained with 25% of the RI. Results show that the ensemble classifier can achieve better classification accuracy (CA) than single classification models. Empirical experiments suggest that the machine-learning-based ensemble classifier is efficient for further reducing DR classification time (CT).
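A minimal sketch of the bagging stage with scikit-learn follows; the synthetic feature matrix stands in for the extracted retinal features, and only the 25% training split follows the abstract.

```python
# Bagging ensemble sketch: each base learner (a decision tree by default) sees a
# bootstrap sample, and the ensemble vote averages away single-tree variance.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.25, random_state=0)

bec = BaggingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {bec.score(X_te, y_te):.2f}")
```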
Artificial Intelligence in Sports on the Example of Weight Training
Novatchkov, Hristo; Baca, Arnold
2013-01-01
The overall goal of the present study was to illustrate the potential of artificial intelligence (AI) techniques in sports on the example of weight training. The research focused in particular on the implementation of pattern recognition methods for the evaluation of performed exercises on training machines. The data acquisition was carried out using way and cable force sensors attached to various weight machines, thereby enabling the measurement of essential displacement and force determinants during training. On the basis of the gathered data, it was consequently possible to deduce other significant characteristics like time periods or movement velocities. These parameters were applied for the development of intelligent methods adapted from conventional machine learning concepts, allowing an automatic assessment of the exercise technique and providing individuals with appropriate feedback. In practice, the implementation of such techniques could be crucial for the investigation of the quality of the execution, the assistance of athletes but also coaches, the training optimization and for prevention purposes. For the current study, the data was based on measurements from 15 rather inexperienced participants, performing 3-5 sets of 10-12 repetitions on a leg press machine. The initially preprocessed data was used for the extraction of significant features, on which supervised modeling methods were applied. Professional trainers were involved in the assessment and classification processes by analyzing the video recorded executions. The so far obtained modeling results showed good performance and prediction outcomes, indicating the feasibility and potency of AI techniques in assessing performances on weight training equipment automatically and providing sportsmen with prompt advice. Key points Artificial intelligence is a promising field for sport-related analysis. Implementations integrating pattern recognition techniques enable the automatic evaluation of data measurements. Artificial neural networks applied for the analysis of weight training data show good performance and high classification rates. PMID:24149722
Hydraulic Fatigue-Testing Machine
NASA Technical Reports Server (NTRS)
Hodo, James D.; Moore, Dennis R.; Morris, Thomas F.; Tiller, Newton G.
1987-01-01
Fatigue-testing machine applies fluctuating tension to a number of specimens at the same time. When a sample breaks, the machine continues to test the remaining specimens. Series of tensile tests needed to determine fatigue properties of materials are performed more rapidly than in a conventional fatigue-testing machine.
Amaral, Jorge L M; Lopes, Agnaldo J; Jansen, José M; Faria, Alvaro C D; Melo, Pedro L
2013-12-01
The purpose of this study was to develop an automatic classifier to increase the accuracy of the forced oscillation technique (FOT) for diagnosing early respiratory abnormalities in smoking patients. The data consisted of FOT parameters obtained from 56 volunteers, 28 healthy and 28 smokers with low tobacco consumption. Many supervised learning techniques were investigated, including logistic linear classifiers, k nearest neighbor (KNN), neural networks and support vector machines (SVM). To evaluate performance, the ROC curve of the most accurate parameter was established as baseline. To determine the best input features and classifier parameters, we used genetic algorithms and a 10-fold cross-validation using the average area under the ROC curve (AUC). In the first experiment, the original FOT parameters were used as input. We observed a significant improvement in accuracy (KNN=0.89 and SVM=0.87) compared with the baseline (0.77). The second experiment performed a feature selection on the original FOT parameters. This selection did not cause any significant improvement in accuracy, but it was useful in identifying more adequate FOT parameters. In the third experiment, we performed a feature selection on the cross products of the FOT parameters. This selection resulted in a further increase in AUC (KNN=SVM=0.91), which allows for high diagnostic accuracy. In conclusion, machine learning classifiers can help identify early smoking-induced respiratory alterations. The use of FOT cross products and the search for the best features and classifier parameters can markedly improve the performance of machine learning classifiers. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
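A minimal sketch of the model-comparison loop described above, with synthetic stand-ins for the FOT parameters: KNN and SVM classifiers are scored by the mean area under the ROC curve over 10-fold cross-validation. The genetic-algorithm search over features and classifier parameters is omitted.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.rand(56, 6)                 # 56 volunteers x 6 hypothetical FOT parameters
y = np.array([0] * 28 + [1] * 28)   # healthy vs. smokers

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in [("KNN", KNeighborsClassifier(5)),
                    ("SVM", SVC())]:
    # Average AUC across folds, the selection criterion used in the study.
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.2f}")
```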
Parental attitudes towards soft drink vending machines in high schools.
Hendel-Paterson, Maia; French, Simone A; Story, Mary
2004-10-01
Soft drink vending machines are available in 98% of US high schools. However, few data are available about parents' opinions regarding the availability of soft drink vending machines in schools. Six focus groups with 33 parents at three suburban high schools were conducted to describe the perspectives of parents regarding soft drink vending machines in their children's high school. Parents viewed the issue of soft drink vending machines as a matter of their children's personal choice more than as an issue of a healthful school environment. However, parents were unaware of many important details about the soft drink vending machines in their children's school, such as the number and location of machines, hours of operation, types of beverages available, or whether the school had contracts with soft drink companies. Parents need more information about the number of soft drink vending machines at their children's school, the beverages available, the revenue generated by soft drink vending machine sales, and the terms of any contracts between the school and soft drink companies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mou, J.I.; King, C.
The focus of this study is to develop a sensor-fused process modeling and control methodology to model, assess, and then enhance the performance of a hexapod machine for precision product realization. Deterministic modeling techniques were used to derive models for machine performance assessment and enhancement. Sensor fusion methodology was adopted to identify the parameters of the derived models. Empirical models and computational algorithms were also derived and implemented to model, assess, and then enhance the machine performance. The developed sensor fusion algorithms can be implemented on a PC-based open architecture controller to receive information from various sensors, assess the status of the process, determine the proper action, and deliver the command to actuators for task execution. This will enhance a hexapod machine's capability to produce workpieces within the imposed dimensional tolerances.
Solving the Cauchy-Riemann equations on parallel computers
NASA Technical Reports Server (NTRS)
Fatoohi, Raad A.; Grosch, Chester E.
1987-01-01
Discussed is the implementation of a single algorithm on three parallel-vector computers. The algorithm is a relaxation scheme for the solution of the Cauchy-Riemann equations, a set of coupled first-order partial differential equations. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, an SIMD machine with 16K bit-serial processors; the FLEX/32, an MIMD machine with 20 processors; and the CRAY/2, an MIMD machine with four vector processors. The machine architectures are briefly described. The implementation of the algorithm is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Conclusions are presented.
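The paper's relaxation scheme is tied to its specific discretization, but since the Cauchy-Riemann equations imply that each unknown is harmonic, a Jacobi relaxation sweep for Laplace's equation is a hedged stand-in for the kind of grid kernel such machines parallelize. The grid size, boundary values, and sweep count below are arbitrary.

```python
import numpy as np

# Jacobi relaxation for Laplace's equation on a square grid. Each unknown
# of the Cauchy-Riemann system is harmonic, so this sweep is the core
# building block of a relaxation solver for those equations.
u = np.zeros((64, 64))
u[0, :] = 1.0                      # illustrative boundary condition
for _ in range(500):               # fixed number of relaxation sweeps
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
```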
Prakash, Rangasamy; Krishnaraj, Vijayan; Zitoune, Redouane; Sheikh-Ahmad, Jamal
2016-01-01
Carbon fiber reinforced polymers (CFRPs) have found wide-ranging applications in numerous industrial fields such as the aerospace, automotive, and shipping industries due to their excellent mechanical properties that lead to enhanced functional performance. In this paper, an experimental study on edge trimming of CFRP was done with various cutting conditions and different tool geometries such as helical-, fluted-, and burr-type tools. The investigation involves the measurement of cutting forces for the different machining conditions and their effect on the surface quality of the trimmed edges. The modern cutting tools (router tools or burr tools) selected for machining CFRPs have complex geometries in their cutting edges and surfaces, and therefore a traditional method of direct tool wear evaluation is not applicable. Acoustic emission (AE) sensing was employed for on-line monitoring of the performance of router tools to determine the relationship between the AE signal and the length of machining for different tool geometries. The investigation showed that the router tool with a flat cutting edge has better performance, generating lower cutting force and better surface finish with no delamination on trimmed edges. Mathematical modeling for the prediction of cutting forces was also done using Artificial Neural Network and Regression Analysis. PMID:28773919
Whole-machine calibration approach for phased array radar with self-test
NASA Astrophysics Data System (ADS)
Shen, Kai; Yao, Zhi-Cheng; Zhang, Jin-Chang; Yang, Jian
2017-06-01
The performance of a missile-borne phased array radar is greatly influenced by inter-channel amplitude and phase inconsistencies. In order to ensure its performance, the amplitude and phase characteristics of the radar should be calibrated. Commonly used methods mainly focus on antenna calibration, such as FFT, REV, etc. However, the radar channel also contains T/R components, channels, the ADC and the messenger. In order to obtain the amplitude and phase information needed for rapid whole-machine calibration and compensation of the phased array radar, we adopt a high-precision plane scanning test platform for amplitude and phase testing. A calibration approach for the whole channel system based on a radar frequency source test is proposed. Finally, the advantages and the application prospects of this approach are analysed.
Comprehensive Decision Tree Models in Bioinformatics
Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter
2012-01-01
Purpose: Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of reasoning behind the classification model are possible. Methods: This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. Results: The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase in accuracy for the less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. Conclusions: The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from the usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly redundant attributes, which are very common in bioinformatics. PMID:22479449
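A minimal sketch of the core idea, assuming scikit-learn as the environment: the decision tree is constrained up front by its dimensions (depth and leaf count, standing in for the paper's visual boundaries), and no performance measure drives the tuning. The constraint values and dataset are illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# The size caps play the role of the visual boundaries; accuracy is only
# measured afterwards, never used to tune the tree.
tree = DecisionTreeClassifier(max_depth=4, max_leaf_nodes=12, random_state=0)
print("accuracy:", cross_val_score(tree, X, y, cv=10).mean())
```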
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Kohlmeyer, Axel; Plimpton, Steven J
The use of accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. In this paper, we present a continuation of previous work implementing algorithms for using accelerators into the LAMMPS molecular dynamics software for distributed memory parallel hybrid machines. In our previous work, we focused on acceleration for short-range models with an approach intended to harness the processing power of both the accelerator and (multi-core) CPUs. To augment the existing implementations, we present an efficient implementation of long-range electrostatic force calculation for molecular dynamics. Specifically, we present an implementation of the particle-particle particle-mesh method based on the work by Harvey and De Fabritiis. We present benchmark results on the Keeneland InfiniBand GPU cluster. We provide a performance comparison of the same kernels compiled with both CUDA and OpenCL. We discuss limitations to parallel efficiency and future directions for improving performance on hybrid or heterogeneous computers.
NASA Astrophysics Data System (ADS)
Meyer, Hanna; Kühnlein, Meike; Appelhans, Tim; Nauss, Thomas
2016-03-01
Machine learning (ML) algorithms have successfully been demonstrated to be valuable tools in satellite-based rainfall retrievals which show the practicability of using ML algorithms when faced with high dimensional and complex data. Moreover, recent developments in parallel computing with ML present new possibilities for training and prediction speed and therefore make their usage in real-time systems feasible. This study compares four ML algorithms - random forests (RF), neural networks (NNET), averaged neural networks (AVNNET) and support vector machines (SVM) - for rainfall area detection and rainfall rate assignment using MSG SEVIRI data over Germany. Satellite-based proxies for cloud top height, cloud top temperature, cloud phase and cloud water path serve as predictor variables. The results indicate an overestimation of rainfall area delineation regardless of the ML algorithm (averaged bias = 1.8) but a high probability of detection ranging from 81% (SVM) to 85% (NNET). On a 24-hour basis, the performance of the rainfall rate assignment yielded R2 values between 0.39 (SVM) and 0.44 (AVNNET). Though the differences in the algorithms' performance were rather small, NNET and AVNNET were identified as the most suitable algorithms. On average, they demonstrated the best performance in rainfall area delineation as well as in rainfall rate assignment. NNET's computational speed is an additional advantage in work with large datasets such as in remote sensing based rainfall retrievals. However, since no single algorithm performed considerably better than the others we conclude that further research in providing suitable predictors for rainfall is of greater necessity than an optimization through the choice of the ML algorithm.
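A hedged sketch of such an algorithm comparison, with synthetic data standing in for the MSG SEVIRI cloud predictors: random forest, neural network, and SVM regressors are scored by cross-validated R^2, mirroring the metric reported above.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
# Stand-ins for cloud top height/temperature, phase, water path.
X = rng.rand(500, 4)
y = X @ np.array([2.0, -1.0, 0.5, 1.5]) + 0.1 * rng.randn(500)  # rain-rate proxy

for name, model in [("RF", RandomForestRegressor(200, random_state=0)),
                    ("NNET", MLPRegressor((16,), max_iter=2000, random_state=0)),
                    ("SVM", SVR())]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: R^2 = {r2:.2f}")
```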
A consideration of the operation of automatic production machines
HOSHI, Toshiro; SUGIMOTO, Noboru
2015-01-01
At worksites, various automatic production machines are in use to release workers from muscular labor or labor in detrimental environments. On the other hand, a large number of industrial accidents have been caused by automatic production machines. In view of this, this paper considers the operation of automatic production machines from the viewpoint of accident prevention, and points out two types of machine operation: operation for which quick performance is required (operation that is not permitted to be delayed), and operation for which composed performance is required (operation that is not permitted to be performed in haste). These operations are distinguished by operation buttons of suitable colors and shapes. This paper shows that these characteristics are evaluated as “asymmetric on the time-axis”. Here, in order for workers to accept the risk of automatic production machines, it is a precondition in general that harm should be sufficiently small or that avoidance of harm is easy. In this connection, this paper shows the possibility of facilitating the acceptance of the risk of automatic production machines by enhancing this asymmetry on the time-axis. PMID:25739898
Ringo: Interactive Graph Analytics on Big-Memory Machines
Perez, Yonathan; Sosič, Rok; Banerjee, Arijit; Puttagunta, Rohan; Raison, Martin; Shah, Pararth; Leskovec, Jure
2016-01-01
We present Ringo, a system for analysis of large graphs. Graphs provide a way to represent and analyze systems of interacting objects (people, proteins, webpages) with edges between the objects denoting interactions (friendships, physical interactions, links). Mining graphs provides valuable insights about individual objects as well as the relationships among them. In building Ringo, we take advantage of the fact that machines with large memory and many cores are widely available and also relatively affordable. This allows us to build an easy-to-use interactive high-performance graph analytics system. Graphs also need to be built from input data, which often resides in the form of relational tables. Thus, Ringo provides rich functionality for manipulating raw input data tables into various kinds of graphs. Furthermore, Ringo also provides over 200 graph analytics functions that can then be applied to constructed graphs. We show that a single big-memory machine provides a very attractive platform for performing analytics on all but the largest graphs as it offers excellent performance and ease of use as compared to alternative approaches. With Ringo, we also demonstrate how to integrate graph analytics with an iterative process of trial-and-error data exploration and rapid experimentation, common in data mining workloads. PMID:27081215
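Ringo itself is built on SNAP and its own table layer; as a loosely analogous sketch only, the fragment below shows the table-to-graph workflow with pandas and networkx: a graph is built from a relational edge table, then an analytics function is applied to it.

```python
import pandas as pd
import networkx as nx

# A relational table of interactions (hypothetical data).
edges = pd.DataFrame({"src": ["alice", "bob", "carol"],
                      "dst": ["bob", "carol", "alice"]})

# Manipulate the raw table into a graph, then run an analytics function.
G = nx.from_pandas_edgelist(edges, source="src", target="dst")
print(nx.pagerank(G))  # one of many analytics functions on the built graph
```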
An Overview and Evaluation of Recent Machine Learning Imputation Methods Using Cardiac Imaging Data.
Liu, Yuzhe; Gopalakrishnan, Vanathi
2017-03-01
Many clinical research datasets have a large percentage of missing values that directly impacts their usefulness in yielding high accuracy classifiers when used for training in supervised machine learning. While missing value imputation methods have been shown to work well with smaller percentages of missing values, their ability to impute sparse clinical research data can be problem specific. We previously attempted to learn quantitative guidelines for ordering cardiac magnetic resonance imaging during the evaluation for pediatric cardiomyopathy, but missing data significantly reduced our usable sample size. In this work, we sought to determine if increasing the usable sample size through imputation would allow us to learn better guidelines. We first review several machine learning methods for estimating missing data. Then, we apply four popular methods (mean imputation, decision tree, k-nearest neighbors, and self-organizing maps) to a clinical research dataset of pediatric patients undergoing evaluation for cardiomyopathy. Using Bayesian Rule Learning (BRL) to learn ruleset models, we compared the performance of imputation-augmented models versus unaugmented models. We found that all four imputation-augmented models performed similarly to unaugmented models. While imputation did not improve performance, it did provide evidence for the robustness of our learned models.
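As a sketch of two of the four methods compared above, using their scikit-learn equivalents on a toy matrix: mean imputation and k-nearest-neighbor imputation. The decision-tree and self-organizing-map imputers have no direct scikit-learn counterpart and are omitted.

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

# Toy matrix with missing entries standing in for the clinical dataset.
X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan],
              [4.0, 5.0]])

X_mean = SimpleImputer(strategy="mean").fit_transform(X)  # column means
X_knn = KNNImputer(n_neighbors=2).fit_transform(X)        # neighbor averages
```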
DOE Office of Scientific and Technical Information (OSTI.GOV)
Druinsky, Alex; Ghysels, Pieter; Li, Xiaoye S.
In this paper, we study the performance of a two-level algebraic-multigrid algorithm, with a focus on the impact of the coarse-grid solver on performance. We consider two algorithms for solving the coarse-space systems: the preconditioned conjugate gradient method and a new robust HSS-embedded low-rank sparse-factorization algorithm. Our test data comes from the SPE Comparative Solution Project for oil-reservoir simulations. We contrast the performance of our code on one 12-core socket of a Cray XC30 machine with performance on a 60-core Intel Xeon Phi coprocessor. To obtain top performance, we optimized the code to take full advantage of fine-grained parallelism and made it thread-friendly for high thread count. We also developed a bounds-and-bottlenecks performance model of the solver which we used to guide us through the optimization effort, and carried out performance tuning in the solver's large parameter space. As a result, significant speedups were obtained on both machines.
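The first of the two coarse-grid solvers named above, the preconditioned conjugate gradient method, can be sketched with SciPy; the small 1-D Poisson system and the Jacobi (diagonal) preconditioner below are illustrative, not the HSS-embedded factorization or the actual SPE systems.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

n = 100
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")  # SPD model problem
b = np.ones(n)

# Jacobi preconditioner: apply the inverse of the diagonal of A.
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: x / d)

x, info = cg(A, b, M=M)  # info == 0 signals convergence
```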
Evaluation of CFETR as a Fusion Nuclear Science Facility using multiple system codes
NASA Astrophysics Data System (ADS)
Chan, V. S.; Costley, A. E.; Wan, B. N.; Garofalo, A. M.; Leuer, J. A.
2015-02-01
This paper presents the results of a multi-system codes benchmarking study of the recently published China Fusion Engineering Test Reactor (CFETR) pre-conceptual design (Wan et al 2014 IEEE Trans. Plasma Sci. 42 495). Two system codes, the General Atomics System Code (GASC) and the Tokamak Energy System Code (TESC), using different methodologies to arrive at CFETR performance parameters under the same CFETR constraints, show that the correlation between the physics performance and the fusion performance is consistent, and the computed parameters are in good agreement. Optimization of the first wall surface for tritium breeding and the minimization of the machine size are highly compatible. Variations of the plasma currents and profiles lead to changes in the required normalized physics performance; however, they do not significantly affect the optimized size of the machine. GASC and TESC have also been used to explore a lower aspect ratio, larger volume plasma taking advantage of the engineering flexibility in the CFETR design. Assuming the ITER steady-state scenario physics, the larger plasma together with a moderately higher BT and Ip can result in a high gain Qfus ~ 12, Pfus ~ 1 GW machine approaching DEMO-like performance. It is concluded that the CFETR baseline mode can meet the minimum goal of the Fusion Nuclear Science Facility (FNSF) mission and advanced physics will enable it to address comprehensively the outstanding critical technology gaps on the path to a demonstration reactor (DEMO). Before proceeding with CFETR construction, steady-state operation has to be demonstrated, further development is needed to solve the divertor heat load issue, and blankets have to be designed with tritium breeding ratio (TBR) >1 as a target.
FTAPE: A fault injection tool to measure fault tolerance
NASA Technical Reports Server (NTRS)
Tsai, Timothy K.; Iyer, Ravishankar K.
1995-01-01
The paper introduces FTAPE (Fault Tolerance And Performance Evaluator), a tool that can be used to compare fault-tolerant computers. The tool combines system-wide fault injection with a controllable workload. A workload generator is used to create high stress conditions for the machine. Faults are injected based on this workload activity in order to ensure a high level of fault propagation. The errors/fault ratio and performance degradation are presented as measures of fault tolerance.
NASA Astrophysics Data System (ADS)
Taha, Zahari; Muazu Musa, Rabiu; Majeed, Anwar P. P. Abdul; Razali Abdullah, Mohamad; Amirul Abdullah, Muhammad; Hasnun Arif Hassan, Mohd; Khalil, Zubair
2018-04-01
The present study employs a machine learning algorithm, namely the support vector machine (SVM), to classify high and low potential archers from a collection of bio-physiological variables using differently configured SVMs. 50 youth archers with an average age and standard deviation of 17.0 ± .056, gathered from various archery programmes, completed a one-end shooting score test. The bio-physiological variables, namely resting heart rate, resting respiratory rate, resting diastolic blood pressure, resting systolic blood pressure, as well as calorie intake, were measured prior to their shooting tests. k-means cluster analysis was applied to cluster the archers based on their scores on the variables assessed. SVM models, i.e. with linear, quadratic and cubic kernel functions, were trained on the aforementioned variables. The k-means analysis clustered the archers into high potential archers (HPA) and low potential archers (LPA), respectively. It was demonstrated that the linear SVM exhibited good accuracy, with a classification accuracy of 94%, in comparison to the other tested models. The findings of this investigation can be valuable to coaches and sports managers in recognising high potential athletes from the selected bio-physiological variables examined.
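A hedged reconstruction of this pipeline in scikit-learn: k-means first splits archers into two clusters from their bio-physiological variables, then SVMs with linear, quadratic, and cubic kernels are trained to recover the cluster labels. The data values and the cross-validation protocol are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.rand(50, 5)  # 50 archers x 5 variables (heart rate, BP, ...; hypothetical)

# Unsupervised split into high- and low-potential groups (HPA vs LPA).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Linear, quadratic, and cubic SVMs (degree only matters for "poly").
for kernel, degree in [("linear", 3), ("poly", 2), ("poly", 3)]:
    acc = cross_val_score(SVC(kernel=kernel, degree=degree),
                          X, labels, cv=5).mean()
    print(kernel, degree, f"accuracy = {acc:.2f}")
```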
Applied physiology of cycling.
Faria, I E
1984-01-01
Historically, the bicycle has evolved through the stages of a machine for efficient human transportation, a toy for children, a finely-tuned racing machine, and a tool for physical fitness development, maintenance and testing. Recently, major strides have been made in the aerodynamic design of the bicycle. These innovations have resulted in new land speed records for human powered machines. Performance in cycling is affected by a variety of factors, including aerobic and anaerobic capacity, muscular strength and endurance, and body composition. Bicycle races range from a 200m sprint to approximately 5000km. This vast range of competitive racing requires special attention to the principle of specificity of training. The physiological demands of cycling have been examined through the use of bicycle ergometers, rollers, cycling trainers, treadmill cycling, high speed photography, computer graphics, strain gauges, electromyography, wind tunnels, muscle biopsy, and body composition analysis. These techniques have been useful in providing definitive data for the development of a work/performance profile of the cyclist. Research evidence strongly suggests that when measuring the cyclist's aerobic or anaerobic capacity, a cycling protocol employing a high pedalling rpm should be used. The research bicycle should be modified to resemble a racing bicycle and the cyclist should wear cycling shoes. Prolonged cycling requires special nutritional considerations. Ingestion of carbohydrates, in solid form and carefully timed, influences performance. Caffeine appears to enhance lipid metabolism. Injuries, particularly knee problems which are prevalent among cyclists, may be avoided through the use of proper gearing and orthotics. Air pollution has been shown to impair physical performance. When pollution levels are high, training should be altered or curtailed. Effective training programmes simulate competitive conditions. Short and long interval training, blended with long distance tempo cycling, will exploit both the anaerobic and aerobic systems. Strength training, to be effective, must be performed with the specific muscle groups used in cycling, and at specific angles of involvement.
Design, fabrication, and operation of a test rig for high-speed tapered-roller bearings
NASA Technical Reports Server (NTRS)
Signer, H. R.
1974-01-01
A tapered-roller bearing test machine was designed, fabricated and successfully operated at speeds to 20,000 rpm. Infinitely variable radial loads to 26,690 N (6,000 lbs.) and thrust loads to 53,380 N (12,000 lbs.) can be applied to test bearings. The machine instrumentation proved to have the accuracy and reliability required for parametric bearing performance testing and has the capability of monitoring all programmed test parameters at continuous operation during life testing. This system automatically shuts down a test if any important test parameter deviates from the programmed conditions, or if a bearing failure occurs. A lubrication system was developed as an integral part of the machine, capable of lubricating test bearings by external jets and by means of passages feeding through the spindle and bearing rings into the critical internal bearing surfaces. In addition, provisions were made for controlled oil cooling of inner and outer rings to effect the type of bearing thermal management that is required when testing at high speeds.
Micro-machined resonator oscillator
Koehler, Dale R.; Sniegowski, Jeffry J.; Bivens, Hugh M.; Wessendorf, Kurt O.
1994-01-01
A micro-miniature resonator-oscillator is disclosed. Due to the miniaturization of the resonator-oscillator, oscillation frequencies of one MHz and higher are utilized. A thickness-mode quartz resonator housed in a micro-machined silicon package and operated as a "telemetered sensor beacon" that is, a digital, self-powered, remote, parameter measuring-transmitter in the FM-band. The resonator design uses trapped energy principles and temperature dependence methodology through crystal orientation control, with operation in the 20-100 MHz range. High volume batch-processing manufacturing is utilized, with package and resonator assembly at the wafer level. Unique design features include squeeze-film damping for robust vibration and shock performance, capacitive coupling through micro-machined diaphragms allowing resonator excitation at the package exterior, circuit integration and extremely small (0.1 in. square) dimensioning. A family of micro-miniature sensor beacons is also disclosed with widespread applications as bio-medical sensors, vehicle status monitors and high-volume animal identification and health sensors. The sensor family allows measurement of temperatures, chemicals, acceleration and pressure. A microphone and clock realization is also available.
Implementation and performance of parallel Prolog interpreter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, S.; Kale, L.V.; Balkrishna, R.
1988-01-01
In this paper, the authors discuss the implementation of a parallel Prolog interpreter on different parallel machines. The implementation is based on the REDUCE-OR process model, which exploits both AND and OR parallelism in logic programs. It is machine independent as it runs on top of the chare kernel, a machine-independent parallel programming system. The authors also give the performance of the interpreter running a diverse set of benchmark programs on parallel machines including shared memory systems (an Alliant FX/8, a Sequent and a MultiMax) and a non-shared memory system (an Intel iPSC/32 hypercube), in addition to its performance on a multiprocessor simulation system.
Travnik, Jaden B; Pilarski, Patrick M
2017-07-01
Prosthetic devices have advanced in their capabilities and in the number and type of sensors included in their design. As the space of sensorimotor data available to a conventional or machine learning prosthetic control system increases in dimensionality and complexity, it becomes increasingly important that this data be represented in a useful and computationally efficient way. Well-structured sensory data allows prosthetic control systems to make informed, appropriate control decisions. In this study, we explore the impact that increased sensorimotor information has on current machine learning prosthetic control approaches. Specifically, we examine the effect that high-dimensional sensory data has on the computation time and prediction performance of a true-online temporal-difference learning prediction method as embedded within a resource-limited upper-limb prosthesis control system. We present results comparing tile coding, the dominant linear representation for real-time prosthetic machine learning, with a newly proposed modification to Kanerva coding that we call selective Kanerva coding. In addition to showing promising results for selective Kanerva coding, our results confirm potential limitations to tile coding as the number of sensory input dimensions increases. To our knowledge, this study is the first to explicitly examine representations for real-time machine learning prosthetic devices in general terms. This work therefore provides an important step towards forming an efficient prosthesis-eye view of the world, wherein prompt and accurate representations of high-dimensional data may be provided to machine learning control systems within artificial limbs and other assistive rehabilitation technologies.
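As background for the comparison above, a minimal sketch of plain tile coding, the baseline linear representation: several offset tilings each activate one binary feature per input, and a linear learner keeps one weight per tile. The tiling counts and input range are arbitrary, and the paper's selective Kanerva coding is not reproduced here.

```python
import numpy as np

def tile_code(x, n_tilings=8, tiles_per_dim=10, lo=0.0, hi=1.0):
    """Return active feature indices for a scalar input x in [lo, hi]."""
    active = []
    width = (hi - lo) / tiles_per_dim
    for t in range(n_tilings):
        offset = t * width / n_tilings        # each tiling is shifted slightly
        idx = int((x - lo + offset) / width)
        idx = min(idx, tiles_per_dim)         # clamp the upper edge
        active.append(t * (tiles_per_dim + 1) + idx)
    return active

# A linear learner holds one weight per tile; a prediction is the sum of
# the weights at the active indices, so cost scales with n_tilings only.
weights = np.zeros(8 * 11)
print(sum(weights[i] for i in tile_code(0.37)))
```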
NASA Astrophysics Data System (ADS)
Akkermans, J. A. G.; Di Mitri, S.; Douglas, D.; Setija, I. D.
2017-08-01
High gain free electron lasers (FELs) driven by high repetition rate recirculating accelerators have received considerable attention in the scientific and industrial communities in recent years. Cost-performance optimization of such facilities encourages limiting machine size and complexity, and a compact machine can be realized by combining bending and bunch length compression during the last stage of recirculation, just before lasing. The impact of coherent synchrotron radiation (CSR) on electron beam quality during compression can, however, limit FEL output power. When methods to counteract CSR are implemented, appropriate beam diagnostics become critical to ensure that the target beam parameters are met before lasing, as well as to guarantee reliable, predictable performance and rapid machine setup and recovery. This article describes a beam line for bunch compression and recirculation, and a beam switchyard accessing a diagnostic line for EUV lasing at 1 GeV beam energy. The footprint is modest, with a 12 m compressive arc diameter and ~20 m diagnostic line length. The design limits beam quality degradation due to CSR both in the compressor and in the switchyard. Advantages and drawbacks of two switchyard lines providing, respectively, off-line and on-line measurements are discussed. The entire design is scalable to different beam energies and charges.
NASA Technical Reports Server (NTRS)
Goldberg, Louis F.
1990-01-01
Investigations of one- and two-dimensional (1- or 2-D) simulations of Stirling machines centered around experimental data generated by the U. of Minnesota Mechanical Engineering Test Rig (METR) are covered. This rig was used to investigate oscillating flows about a zero mean with emphasis on laminar/turbulent flow transitions in tubes. The Space Power Demonstrator Engine (SPDE) and, in particular, its heater were the subjects of the simulations. The heater was treated as a 1- or 2-D entity in an otherwise 1-D system. The 2-D flow effects impacted the transient flow predictions in the heater itself but did not have a major impact on overall system performance. Information propagation effects may be a significant issue in the simulation (if not the performance) of high-frequency, high-pressure Stirling machines. This was investigated further by comparing a simulation against an experimentally validated analytic solution for the fluid dynamics of a transmission line. The applicability of the pressure-linking algorithm for compressible flows may be limited by characteristic number (defined as flow path information traverses per cycle); this warrants further study. Lastly, the METR was simulated in 1- and 2-D. A two-parameter k-ω foldback function turbulence model was developed and tested against a limited set of METR experimental data.
System-Level Virtualization for High Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallee, Geoffroy R; Naughton, III, Thomas J; Engelmann, Christian
2008-01-01
System-level virtualization has been a research topic since the 70's but regained popularity during the past few years because of the availability of efficient solutions such as Xen and the implementation of hardware support in commodity processors (e.g. Intel-VT, AMD-V). However, a majority of system-level virtualization projects is guided by the server consolidation market. As a result, current virtualization solutions appear to not be suitable for high performance computing (HPC), which is typically based on large-scale systems. On the other hand, there is significant interest in exploiting virtual machines (VMs) within HPC for a number of other reasons. By virtualizing the machine, one is able to run a variety of operating systems and environments as needed by the applications. Virtualization allows users to isolate workloads, improving security and reliability. It is also possible to support non-native environments and/or legacy operating environments through virtualization. In addition, it is possible to balance work loads, use migration techniques to relocate applications from failing machines, and isolate fault systems for repair. This document presents the challenges for the implementation of a system-level virtualization solution for HPC. It also presents a brief survey of the different approaches and techniques to address these challenges.
Weldability of Weldalite (tm) 049 with and without TiB2 reinforcement
NASA Technical Reports Server (NTRS)
1991-01-01
The effects of TiB2 reinforcement and parent alloy Li content on the weldability of Weldalite (tm) 049 type alloys are assessed. Welding trials were performed using either AC or DC polarity gas tungsten arc (GTA) welding according to described procedures. The welding was performed under conditions of high restraint on 5 cm (2 in) wide x 25.4 cm (10 in) long plates machined from the 0.952 cm (0.375 in) extruded bar parallel to the extrusion direction. A 37.5 deg bevel was machined on the center edge of the extruded bar. Cut rod filler wire was machined from three alloys, and one commercially available 2319 filler wire was also used. The preliminary assessment of weldability revealed no propensity for hot cracking under conditions of high restraint. This result is significant because hot cracking has been reported for all other leading aluminum-lithium alloys welded with certain conventional filler alloys. The strengths obtained for Weldalite parent welded with parent filler were higher than those for alloys used in launch systems, such as 2219 and 2014 welded with 2319 and 4043 fillers, respectively. Even higher values were obtained by variable polarity plasma arc welding (e.g., 54 ksi (372 MPa) mean tensile strength).
Design of Ultra-High-Power-Density Machine Optimized for Future Aircraft
NASA Technical Reports Server (NTRS)
Choi, Benjamin B.
2004-01-01
The NASA Glenn Research Center's Structural Mechanics and Dynamics Branch is developing a compact, nonpolluting, bearingless electric machine with electric power supplied by fuel cells for future "more-electric" aircraft, with specific power in the projected range of 50 hp/lb, whereas conventional electric machines usually generate 0.2 hp/lb. The use of such electric drives for propulsive fans or propellers depends on the successful development of ultra-high-power-density machines. One possible candidate for such ultra-high-power-density machines, a round-rotor synchronous machine with an engineering current density as high as 20,000 A/sq cm, was selected to investigate how much torque and power can be produced.
A study of metaheuristic algorithms for high dimensional feature selection on microarray data
NASA Astrophysics Data System (ADS)
Dankolo, Muhammad Nasiru; Radzi, Nor Haizan Mohamed; Sallehuddin, Roselina; Mustaffa, Noorfa Haszlinna
2017-11-01
Microarray systems enable experts to examine gene profiles at the molecular level using machine learning algorithms. They increase the potential for classification and diagnosis of many diseases at the gene expression level. However, numerous difficulties may affect the efficiency of machine learning algorithms, including the vast number of gene features in the original data, many of which may be unrelated to the intended analysis. Therefore, feature selection is necessary in the data pre-processing stage. Many feature selection algorithms have been developed and applied to microarray data, including metaheuristic optimization algorithms. This paper discusses the application of metaheuristic algorithms for feature selection in microarray datasets. This study reveals that these algorithms yield interesting results with limited resources, thereby saving the computational expense of machine learning algorithms.
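An illustrative sketch of the general metaheuristic feature-selection loop on a synthetic "microarray" matrix: a binary feature mask is perturbed stochastically and kept when cross-validated accuracy improves. This simple hill-climb stands in for GA- or swarm-style algorithms, and all data and parameters are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.RandomState(0)
X = rng.randn(60, 200)                   # 60 samples x 200 gene features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # only 2 features are informative

def fitness(mask):
    """Cross-validated accuracy of a classifier on the selected features."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(GaussianNB(), X[:, mask], y, cv=5).mean()

mask = rng.rand(200) < 0.1               # initial random feature subset
best = fitness(mask)
for _ in range(50):                      # metaheuristic search loop
    candidate = mask.copy()
    candidate[rng.randint(200)] ^= True  # flip one feature in or out
    score = fitness(candidate)
    if score >= best:
        mask, best = candidate, score
```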
Big Data: Next-Generation Machines for Big Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hack, James J.; Papka, Michael E.
Addressing the scientific grand challenges identified by the US Department of Energy's (DOE's) Office of Science's programs alone demands a total leadership-class computing capability of 150 to 400 Pflops by the end of this decade. The successors to three of the DOE's most powerful leadership-class machines are set to arrive in 2017 and 2018, the products of the Collaboration Oak Ridge Argonne Livermore (CORAL) initiative, a national laboratory-industry design/build approach to engineering next-generation petascale computers for grand challenge science. These mission-critical machines will enable discoveries in key scientific fields such as energy, biotechnology, nanotechnology, materials science, and high-performance computing, and serve as a milestone on the path to deploying exascale computing capabilities.
Machine learning for neuroimaging with scikit-learn
Abraham, Alexandre; Pedregosa, Fabian; Eickenberg, Michael; Gervais, Philippe; Mueller, Andreas; Kossaifi, Jean; Gramfort, Alexandre; Thirion, Bertrand; Varoquaux, Gaël
2014-01-01
Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g., multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g., resting state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain. PMID:24600388
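A small example of the supervised decoding setting described above, assuming synthetic data in place of real brain images: scikit-learn relates per-scan feature vectors to behavioral labels with a cross-validated linear model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = rng.randn(80, 1000)       # 80 scans x 1000 voxel features (synthetic)
y = rng.randint(0, 2, 80)     # behavioral condition per scan

# Decoding: predict the condition from the image features.
decoder = LogisticRegression(penalty="l2", max_iter=1000)
print(cross_val_score(decoder, X, y, cv=5).mean())
```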
Support Vector Machine-Based Endmember Extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filippi, Anthony M; Archibald, Richard K
Introduced in this paper is the utilization of Support Vector Machines (SVMs) to automatically perform endmember extraction from hyperspectral data. The strengths of SVM are exploited to provide a fast and accurate calculated representation of high-dimensional data sets that may consist of multiple distributions. Once this representation is computed, the number of distributions can be determined without prior knowledge. For each distribution, an optimal transform can be determined that preserves informational content while reducing the data dimensionality, and hence, the computational cost. Finally, endmember extraction for the whole data set is accomplished. Results indicate that this Support Vector Machine-Based Endmember Extraction (SVM-BEE) algorithm has the capability of autonomously determining endmembers from multiple clusters with computational speed and accuracy, while maintaining a robust tolerance to noise.
The portals 4.0.1 network programming interface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, Brian W.; Brightwell, Ronald Brian; Pedretti, Kevin
2013-04-01
This report presents a specification for the Portals 4.0 network programming interface. Portals 4.0 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4.0 is well suited to massively parallel processing and embedded systems. Portals 4.0 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4.0 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.
Re-designing a mechanism for higher speed: A case history from textile machinery
NASA Astrophysics Data System (ADS)
Douglas, S. S.; Rooney, G. T.
A central issue in the generation of general mechanism design software, the formulation of suitable objective functions, is discussed. There is a consistent drive towards higher speeds in the development of industrial sewing machines. This led to experimental analyses of dynamic performance and to a search for improved design methods. The experimental work highlighted the need for smoothness of motion at high speed, and the importance of component inertias and frame structural stiffness. Smoothness is associated with transmission properties and harmonic analysis. These are added to the other design requirements of synchronization, mechanism size, and function. Some of the mechanism trains in overedge sewing machines are shown. All these trains are designed by digital optimization. The design software combines analysis of the sewing machine mechanisms, formulation of objectives in numerical terms, and suitable mathematical optimization techniques.
Method of Optimizing the Construction of Machining, Assembly and Control Devices
NASA Astrophysics Data System (ADS)
Iordache, D. M.; Costea, A.; Niţu, E. L.; Rizea, A. D.; Babă, A.
2017-10-01
Industry dynamics, driven by economic and social requirements, must generate more interest in technological optimization, capable of ensuring steady development of the advanced technical means that equip machining processes. For these reasons, the development of tools, devices, work equipment and control, as well as the modernization of machine tools, is the certain solution for modernizing production systems, though it requires considerable time and effort. This type of approach also relates to our theoretical, experimental and industrial applications of recent years, presented in this paper, which have as main objectives the elaboration and use of mathematical models, new calculation methods, optimization algorithms, new processing and control methods, as well as structures for the construction and configuration of technological equipment with a high level of performance and substantially reduced costs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dayman, Ken J; Ade, Brian J; Weber, Charles F
High-dimensional, nonlinear function estimation using large datasets is a current area of interest in the machine learning community, and applications may be found throughout the analytical sciences, where ever-growing datasets are making more information available to the analyst. In this paper, we leverage the existing relevance vector machine, a sparse Bayesian version of the well-studied support vector machine, and expand the method to include integrated feature selection and automatic function shaping. These innovations produce an algorithm that is able to distinguish variables that are useful for making predictions of a response from variables that are unrelated or confusing. We test the technology using synthetic data, conduct initial performance studies, and develop a model capable of making position-independent predictions of the core-averaged burnup using a single specimen drawn randomly from a nuclear reactor core.
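The paper's extended relevance vector machine is not part of standard libraries; as a loosely related, hedged illustration of sparse Bayesian regression with automatic relevance determination, the sketch below uses scikit-learn's ARDRegression, which drives the weights of uninformative features toward zero. The data and informative-feature pattern are synthetic.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.RandomState(0)
X = rng.randn(200, 10)                                 # 10 candidate predictors
y = 3 * X[:, 0] - 2 * X[:, 3] + 0.1 * rng.randn(200)   # only 2 are relevant

model = ARDRegression().fit(X, y)
# Near-zero weights flag features that are unrelated to the response.
print(np.round(model.coef_, 2))
```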
Design and Performance Improvement of AC Machines Sharing a Common Stator
NASA Astrophysics Data System (ADS)
Guo, Lusu
With the increasing demand for electric motors in various industrial applications, especially electric-powered vehicles (electric cars, more-electric aircraft and future electric ships and submarines), both synchronous reluctance machines (SynRMs) and interior permanent magnet (IPM) machines are recognized as good candidates for high performance variable speed applications. Developing a single stator design which can be used for both SynRM and IPM motors is a good way to reduce manufacturing and maintenance cost. SynRM can be used as a low cost solution for many electric driving applications, and IPM machines can be used where power density is critical or work as generators to meet the increasing demand for electrical power on board. In this research, SynRM and IPM machines are designed sharing a common stator structure. The prototype motors are designed with the aid of finite element analysis (FEA). Machine performances with different stator slot and rotor pole numbers are compared by FEA. An 18-slot, 4-pole structure is selected based on the comparison for this prototype design. Torque pulsation is often the major drawback of permanent magnet synchronous machines. There are several sources of torque pulsation, such as back-EMF distortion, inductance variation and cogging torque due to the presence of permanent magnets. Efforts to reduce torque pulsation in permanent magnet machines fall into two categories. The first belongs to the design stage: the structure of the machine can be optimized with the aid of finite element analysis. The second applies after the machine has been manufactured, or when its structure cannot be changed for other reasons: the currents fed into the machine can be controlled to follow a profile that produces a smoother torque waveform. Torque pulsation reduction methods in both categories are discussed in this dissertation. In the design stage, an optimization method based on orthogonal experimental design is introduced. In addition, a universal current profiling technique is proposed to minimize the torque pulsation along with the stator copper losses in modular interior permanent magnet motors. Instead of sinusoidal current waveforms, this algorithm calculates the proper currents that minimize the torque pulsation. Finite element analysis and Matlab programming are used to develop this optimal current profiling algorithm. Permanent magnet machines are becoming more attractive in modern traction applications, such as traction motors and generators for electrified vehicles. The operating speed or the load condition in these applications may be changing all the time, so better control performance is required than for machines operating at constant speed and constant load. In this dissertation, a novel model reference adaptive control (MRAC) scheme for five-phase interior permanent magnet motor drives is presented. The primary controller is designed based on an artificial neural network (ANN) to simulate the nonlinear characteristics of the system without knowledge of an accurate motor model or parameters. The proposed motor drive decouples the torque and flux components of five-phase IPM motors by applying a multiple reference frame transformation.
Therefore, the motor can be easily driven below the rated speed with maximum torque per ampere (MTPA) operation or above the rated speed with flux weakening operation. The ANN-based primary controller consists of a radial basis function (RBF) network which is trained on-line to adapt to system uncertainties. The complete IPM motor drive is simulated in the Matlab/Simulink environment and implemented experimentally utilizing a dSPACE DS1104 DSP board on a five-phase prototype IPM motor. The proposed model reference adaptive control method has been applied to the common stator SynRM and IPM machine as well.
Design and application of electromechanical actuators for deep space missions
NASA Technical Reports Server (NTRS)
Haskew, Tim A.; Wander, John
1993-01-01
The annual report Design and Application of Electromechanical Actuators for Deep Space Missions is presented. The reporting period is 16 Aug. 1992 to 15 Aug. 1993; however, the primary focus is work performed since submission of our semi-annual progress report in Feb. 1993. Substantial progress was made. We currently feel confident in providing guidelines for motor and control strategy selection in electromechanical actuators to be used in thrust vector control (TVC) applications. A small portion was presented in the semi-annual report. At this point, we have implemented highly detailed simulations of various motor/drive systems. The primary motor candidates were the brushless dc machine, the permanent magnet synchronous machine, and the induction machine. The primary control implementations were pulse width modulation and hysteresis current control. Each of the two control strategies was applied to each of the three motor choices. With either pulse width modulation or hysteresis current control, the induction machine was always vector controlled. A standard test position command sequence for system performance evaluation is defined. Currently, we are gathering all of the necessary data for formal presentation of the results. Briefly stated, for the TVC application we feel that the brushless dc machine operating under PWM current control is the best option. Substantial details on the topic, with supporting simulation results, will be provided later, in the form of a technical paper prepared for submission and also in the next progress report with more detail than allowed for paper publication.
Gradient boosting machine for modeling the energy consumption of commercial buildings
Touzani, Samir; Granderson, Jessica; Fernandes, Samuel
2017-11-26
Accurate savings estimations are important to promote energy efficiency projects and demonstrate their cost-effectiveness. The increasing presence of advanced metering infrastructure (AMI) in commercial buildings has resulted in a rising availability of high frequency interval data. These data can be used for a variety of energy efficiency applications such as demand response, fault detection and diagnosis, and heating, ventilation, and air conditioning (HVAC) optimization. This large amount of data has also opened the door to the use of advanced statistical learning models, which hold promise for providing accurate building baseline energy consumption predictions, and thus accurate saving estimations. The gradient boosting machine is a powerful machine learning algorithm that is gaining considerable traction in a wide range of data driven applications, such as ecology, computer vision, and biology. In the present work an energy consumption baseline modeling method based on a gradient boosting machine was proposed. To assess the performance of this method, a recently published testing procedure was used on a large dataset of 410 commercial buildings. The model training periods were varied and several prediction accuracy metrics were used to evaluate the model's performance. The results show that using the gradient boosting machine model improved the R-squared prediction accuracy and the CV(RMSE) in more than 80 percent of the cases, when compared to an industry best practice model that is based on piecewise linear regression, and to a random forest algorithm.
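As a hedged illustration of the modeling approach (not the authors' released code), a gradient-boosting baseline for interval meter data can be sketched with scikit-learn; the time-of-week and temperature features and the synthetic load below are assumptions:

```python
# Sketch of a gradient-boosting baseline model for interval meter data.
# Feature choices (time-of-week, outdoor temperature) and the synthetic load
# are illustrative assumptions, not the paper's exact predictor set or data.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

idx = pd.date_range("2016-01-01", periods=8760, freq="h")
temp = 15 + 10 * np.sin(np.arange(8760) * 2 * np.pi / 8760)
load = (50 + 0.8 * temp
        + 5 * ((idx.dayofweek < 5) & (8 <= idx.hour) & (idx.hour < 18))
        + np.random.default_rng(1).normal(scale=2, size=8760))

X = pd.DataFrame({"hour_of_week": idx.dayofweek * 24 + idx.hour,
                  "temperature": temp}, index=idx)
train = idx < "2016-10-01"                  # train on the first nine months

model = GradientBoostingRegressor().fit(X[train], load[train])
resid = load[~train] - model.predict(X[~train])
cv_rmse = np.sqrt(np.mean(resid ** 2)) / load[~train].mean()  # CV(RMSE) metric
print(f"CV(RMSE): {cv_rmse:.3f}")
```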
Peng, Bo; Wang, Suhong; Zhou, Zhiyong; Liu, Yan; Tong, Baotong; Zhang, Tao; Dai, Yakang
2017-06-09
Machine learning methods have been widely used in recent years for detection of neuroimaging biomarkers in regions of interest (ROIs) and for assisting diagnosis of neurodegenerative diseases. The innovation of this study is to use a multilevel-ROI-features-based machine learning method to detect sensitive morphometric biomarkers in Parkinson's disease (PD). Specifically, the low-level ROI features (gray matter volume, cortical thickness, etc.) and high-level correlative features (connectivity between ROIs) are integrated to construct the multilevel ROI features. Filter- and wrapper-based feature selection methods and a multi-kernel support vector machine (SVM) are used in the classification algorithm. T1-weighted brain magnetic resonance (MR) images of 69 PD patients and 103 normal controls from the Parkinson's Progression Markers Initiative (PPMI) dataset are included in the study. The machine learning method performs well in classification between PD patients and normal controls with an accuracy of 85.78%, a specificity of 87.79%, and a sensitivity of 87.64%. The most sensitive biomarkers between PD patients and normal controls are mainly distributed in the frontal lobe, parietal lobe, limbic lobe, temporal lobe, and central region. The classification performance of our method with multilevel ROI features is significantly improved compared with other classification methods using single-level features. The proposed method shows promising identification ability for detecting morphometric biomarkers in PD, thus confirming its potential for assisting diagnosis of the disease.
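A minimal sketch of the multi-kernel idea, assuming one RBF kernel per feature level combined with a fixed mixing weight (the study's actual kernels, weights and data are not reproduced here):

```python
# Sketch of a multi-kernel SVM: one kernel per feature level (low-level ROI
# measures vs. high-level connectivity), combined and passed to an SVM as a
# precomputed kernel. Weights and data are illustrative assumptions.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_low = rng.normal(size=(120, 30))    # e.g. volumes / cortical thickness per ROI
X_high = rng.normal(size=(120, 50))   # e.g. pairwise ROI connectivity features
y = rng.integers(0, 2, size=120)      # PD patient vs. normal control (toy labels)

w = 0.5                               # kernel mixing weight (tunable)
K = w * rbf_kernel(X_low) + (1 - w) * rbf_kernel(X_high)

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))                # training accuracy on the toy data
```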
NASA Astrophysics Data System (ADS)
Mazlan, Mohamed Mubin Aizat; Sulaiman, Erwan; Husin, Zhafir Aizat; Othman, Syed Muhammad Naufal Syed; Khan, Faisal
2015-05-01
In hybrid excitation machines (HEMs), there are two main flux sources: the permanent magnet (PM) and the field excitation coil (FEC). These HEMs have better features when compared with the interior permanent magnet synchronous machines (IPMSMs) used in conventional hybrid electric vehicles (HEVs). Since all flux sources, including the PM, FEC and armature coils, are located on the stator core, the rotor becomes a single-piece structure similar to a switched reluctance machine (SRM). The combined flux generated by the PM and FEC provides the additional excitation flux required to produce much higher motor torque. In addition, the variable DC FEC can control the flux capabilities of the motor, so the machine can be applied to high-speed motor drive systems. In this paper, comparisons of single-phase 8S-4P outer- and inner-rotor hybrid excitation flux switching machines (HEFSMs) are presented. Initially, design procedures of the HEFSM, including parts drawing, materials and conditions setting, and properties setting, are explained. A flux comparison analysis is performed to investigate the flux capabilities at various current densities. Then the flux linkages of the PM with DC FEC at various DC FEC current densities are examined. Finally, torque performances are analyzed at various armature and FEC current densities for both designs. As a result, the outer-rotor HEFSM has higher flux linkage of PM with DC FEC and approximately 10% higher average torque than the inner-rotor HEFSM.
Zheng, Shuai; Ghasemzadeh, Nima; Hayek, Salim S; Quyyumi, Arshed A
2017-01-01
Background Extracting structured data from narrated medical reports is challenged by the complexity of heterogeneous structures and vocabularies and often requires significant manual effort. Traditional machine-based approaches lack the capability to take user feedback for improving the extraction algorithm in real time. Objective Our goal was to provide a generic information extraction framework that can support diverse clinical reports and enables a dynamic interaction between a human and a machine that produces highly accurate results. Methods A clinical information extraction system, IDEAL-X, has been built on top of online machine learning. It processes one document at a time, and user interactions are recorded as feedback to update the learning model in real time. The updated model is used to predict values for extraction in subsequent documents. Once prediction accuracy reaches a user-acceptable threshold, the remaining documents may be batch processed. A customizable controlled vocabulary may be used to support extraction. Results Three datasets were used for experiments based on report styles: 100 cardiac catheterization procedure reports, 100 coronary angiographic reports, and 100 integrated reports—each combining a history and physical report, discharge summary, outpatient clinic notes, outpatient clinic letter, and inpatient discharge medication report. Data extraction was performed by 3 methods: online machine learning, controlled vocabularies, and a combination of these. The system delivers results with F1 scores greater than 95%. Conclusions IDEAL-X adopts a unique online machine learning–based approach combined with controlled vocabularies to support data extraction for clinical reports. The system can quickly learn and improve, thus it is highly adaptable. PMID:28487265
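IDEAL-X itself is not published with this abstract; the online-learning loop it describes (predict, collect user feedback, update, repeat) can be sketched with scikit-learn's partial_fit interface, where the documents and labels below are hypothetical:

```python
# Sketch of the online-learning loop: each reviewed document becomes a feedback
# example that updates the model before the next document is predicted.
# IDEAL-X itself is not reproduced; texts and labels are invented examples.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vec = HashingVectorizer(n_features=2 ** 16)
clf = SGDClassifier(loss="log_loss")
classes = ["normal", "abnormal"]          # hypothetical extraction labels

documents = [
    ("coronary arteries are patent", "normal"),
    ("severe stenosis of the LAD", "abnormal"),
    ("no significant disease seen", "normal"),
]

seen = 0
for text, user_label in documents:
    x = vec.transform([text])
    if seen:                               # predict once the model has any data
        print(text, "->", clf.predict(x)[0])
    clf.partial_fit(x, [user_label], classes=classes)  # feedback updates model
    seen += 1
```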
Optimization of CVD Diamond Coating Type on Micro Drills in PCB Machining
NASA Astrophysics Data System (ADS)
Lei, X. L.; He, Y.; Sun, F. H.
2016-12-01
The demand for better tools for machining printed circuit boards (PCBs) is increasing due to the extensive usage of these boards in digital electronic products. This paper is aimed at optimizing the coating type on micro drills in order to extend their lifetime in PCB machining. First, tribotests involving microcrystalline diamond (MCD), nanocrystalline diamond (NCD) and bare tungsten carbide (WC-Co) against PCBs show that the NCD-PCB tribopair exhibits the lowest friction coefficient (0.35) due to the unique nano structure and low surface roughness of NCD films. Thereafter, the dry machining performance of the MCD- and NCD-coated micro drills on PCBs is systematically studied, using diamond-like coating (DLC) and TiAlN-coated micro drills as comparison. The experiments show that the working lives of these micro drills can be ranked as: NCD>TiAlN>DLC>MCD>bare WC-Co. The superior cutting performance of NCD-coated micro drills, in terms of the lowest flank wear growth rate, no appearance of tool degradation (e.g. chipping, tool tipping), the best hole quality and the lowest feed force, can be attributed to the excellent wear resistance, the lower friction coefficient against PCB and the high adhesive strength of the NCD films on the underlying substrate.
Qureshi, Muhammad Naveed Iqbal; Min, Beomjun; Jo, Hang Joon; Lee, Boreom
2016-01-01
The classification of neuroimaging data for the diagnosis of certain brain diseases is one of the main research goals of the neuroscience and clinical communities. In this study, we performed multiclass classification using a hierarchical extreme learning machine (H-ELM) classifier. We compared the performance of this classifier with that of a support vector machine (SVM) and basic extreme learning machine (ELM) for cortical MRI data from attention deficit/hyperactivity disorder (ADHD) patients. We used 159 structural MRI images of children from the publicly available ADHD-200 MRI dataset. The data consisted of three types, namely, typically developing (TDC), ADHD-inattentive (ADHD-I), and ADHD-combined (ADHD-C). We carried out feature selection by using standard SVM-based recursive feature elimination (RFE-SVM) that enabled us to achieve good classification accuracy (60.78%). In this study, we found the RFE-SVM feature selection approach in combination with H-ELM to effectively enable the acquisition of high multiclass classification accuracy rates for structural neuroimaging data. In addition, we found that the most important features for classification were the surface area of the superior frontal lobe, and the cortical thickness, volume, and mean surface area of the whole cortex. PMID:27500640
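A minimal sketch of the RFE-SVM feature selection step reported above, using synthetic stand-ins for the subjects' cortical features (shapes and the final feature count are assumptions):

```python
# Sketch of SVM-based recursive feature elimination (RFE-SVM): a linear SVM
# ranks features, the weakest are dropped, and the process repeats.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(159, 300))           # 159 subjects, 300 cortical features
y = rng.integers(0, 3, size=159)          # TDC / ADHD-I / ADHD-C (toy labels)

selector = RFE(LinearSVC(max_iter=5000), n_features_to_select=50, step=10)
X_sel = selector.fit_transform(X, y)
print(X_sel.shape, np.flatnonzero(selector.support_)[:10])  # surviving features
```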
Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Li, Li-Ping; Huang, De-Shuang; Yan, Gui-Ying; Nie, Ru; Huang, Yu-An
2017-04-04
Identification of protein-protein interactions (PPIs) is of critical importance for deciphering the underlying mechanisms of almost all cellular processes and provides great insight into the study of human disease. Although much effort has been devoted to identifying PPIs from various organisms, existing high-throughput biological techniques are time-consuming, expensive, and suffer from high false positive and false negative rates. Thus it is highly urgent to develop in silico methods to predict PPIs efficiently and accurately in this post-genomic era. In this article, we report a novel computational model combining our newly developed discriminative vector machine classifier (DVM) and an improved Weber local descriptor (IWLD) for the prediction of PPIs. Two components, differential excitation and orientation, are exploited to build evolutionary features for each protein sequence. The main characteristic of the proposed method lies in introducing an effective feature descriptor, IWLD, which can capture highly discriminative evolutionary information from position-specific scoring matrices (PSSMs) of protein data, and in employing the powerful and robust DVM classifier. When applying the proposed method to the Yeast and H. pylori data sets, we obtained excellent prediction accuracies as high as 96.52% and 91.80%, respectively, which are significantly better than previous methods. Extensive experiments were then performed for predicting cross-species PPIs, and the predictive results were also promising. To further validate the performance of the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier on a Human data set. The experimental results indicate that our method is highly effective for PPI prediction and can be taken as a supplementary tool for future proteomics research.
An investigation of chatter and tool wear when machining titanium
NASA Technical Reports Server (NTRS)
Sutherland, I. A.
1974-01-01
The low thermal conductivity of titanium, together with the low contact area between chip and tool and the unusually high chip velocities, gives rise to high tool tip temperatures and accelerated tool wear. Machining speeds have to be considerably reduced to avoid these high temperatures, with a consequential loss of productivity. Restoring this lost productivity involves increasing other machining variables, such as feed and depth-of-cut, and can lead to another machining problem commonly known as chatter. The purpose of this work is to acquaint users with these problems, to examine the variables that may be encountered when machining a material like titanium, and to advise the machine tool user on how to maximize the output from the machines and tooling available to him. Recommendations are made on ways of improving tolerances, reducing machine tool instability or chatter, and improving productivity. New tool materials, tool coatings, and coolants are reviewed and their relevance examined when machining titanium.
Machining approach of freeform optics on infrared materials via ultra-precision turning.
Li, Zexiao; Fang, Fengzhou; Chen, Jinjin; Zhang, Xiaodong
2017-02-06
Optical freeform surfaces offer excellent optical performance and integrated alignment features, with wide applications in illumination, imaging and non-imaging optics. Machining freeform surfaces on infrared (IR) materials to an ultra-precision finish is difficult due to their brittle nature. Fast tool servo (FTS) assisted diamond turning is a powerful technique for the realization of freeform optics on brittle materials due to its high spindle speed and high cutting speed. However, it has difficulties with large slope angles and large rise-and-falls in the sagittal direction. To overcome this defect, the machining quality of the freeform surface must be balanced against the brittle nature of IR materials. This paper presents the design of a near-rotational freeform surface (NRFS) with a low non-rotational degree (NRD) to constrain the variation of traditional freeform optics and resolve this issue. In an NRFS, the surface is separated into a rotational part and a residual part denoted as a non-rotational surface (NRS). Machining of the NRFS on germanium is performed by FTS diamond turning. Characterization of the surface indicates that an optical finish of the freeform surface has been achieved. The modulation transfer function (MTF) of the freeform optics shows good agreement with the design expectation. Images from the final optical system confirm that the fabricating strategy is of high efficiency and high quality. Challenges and prospects are discussed to provide guidance for future work.
ERIC Educational Resources Information Center
Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.
This document, which reflects Mississippi's statutory requirement that instructional programs be based on core curricula and performance-based assessment, contains outlines of the instructional units required in local instructional management plans and daily lesson plans for machine tool operation/machine shop I and II. Presented first are a…
Performance study of a data flow architecture
NASA Technical Reports Server (NTRS)
Adams, George
1985-01-01
Teams of scientists studied data flow concepts, static data flow machine architecture, and the VAL language. Each team mapped its application onto the machine and coded it in VAL. The principal findings of the study were: (1) Five of the seven applications used the full power of the target machine. The galactic simulation and multigrid fluid flow teams found that a significantly smaller version of the machine (16 processing elements) would suffice. (2) A number of machine design parameters including processing element (PE) function unit numbers, array memory size and bandwidth, and routing network capability were found to be crucial for optimal machine performance. (3) The study participants readily acquired VAL programming skills. (4) Participants learned that application-based performance evaluation is a sound method of evaluating new computer architectures, even those that are not fully specified. During the course of the study, participants developed models for using computers to solve numerical problems and for evaluating new architectures. These models form the bases for future evaluation studies.
Currency crisis indication by using ensembles of support vector machine classifiers
NASA Astrophysics Data System (ADS)
Ramli, Nor Azuana; Ismail, Mohd Tahir; Wooi, Hooy Chee
2014-07-01
There are many methods that have been tried in the analysis of currency crises. However, not all methods provide accurate indications. This paper introduces an ensemble of classifiers using the Support Vector Machine, which has not previously been applied in currency crisis analysis, with the aim of increasing the indication accuracy. The proposed ensemble classifiers' performances are measured using percentage of accuracy, root mean squared error (RMSE), area under the Receiver Operating Characteristics (ROC) curve and Type II error. The performance of an ensemble of Support Vector Machine classifiers is compared with that of a single Support Vector Machine classifier, and both classifiers are tested on a data set from 27 countries with 12 macroeconomic indicators for each country. From our analyses, the results show that the ensemble of Support Vector Machine classifiers outperforms the single Support Vector Machine classifier on the problem of indicating a currency crisis across a range of standard measures for comparing the performance of classifiers.
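A hedged sketch of the comparison, using bagging as one plausible way to ensemble SVMs (the paper's exact ensembling scheme, indicators and data are not reproduced):

```python
# Sketch: single SVM vs. an SVM ensemble on a binary crisis indicator task.
# Bagging is an assumed ensembling scheme; data are synthetic stand-ins for
# the 27 countries x 12 macroeconomic indicators.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(324, 12))            # toy indicator matrix
y = rng.integers(0, 2, size=324)          # 1 = crisis period, 0 = calm period

single = SVC()
ensemble = BaggingClassifier(estimator=SVC(), n_estimators=25, random_state=0)

for name, model in [("single SVM", single), ("SVM ensemble", ensemble)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```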
Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.
Gao, Wei; Kwong, Sam; Jia, Yuheng
2017-08-25
In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level Rate-Distortion (R-D) model. The learning-based R-D model is proposed to overcome the legacy "chicken-and-egg" dilemma in video coding. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted to give inter frames more bit resources, maintaining smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method can achieve much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limits of the FixedQP method.
Statistical Machine Learning for Structured and High Dimensional Data
2014-09-17
AFRL-OSR-VA-TR-2014-0234. Final report, Dec 2009 - Aug 2014. Larry Wasserman, Carnegie Mellon University. Research in the area of resource-constrained statistical estimation. Keywords: machine learning, high-dimensional statistics. Program contact: John Lafferty.
NASA Technical Reports Server (NTRS)
Waterman, A. W.; Huxford, R. L.; Nelson, W. G.
1976-01-01
Molded high temperature plastic first and second stage rod seal elements were evaluated in seal assemblies to determine performance characteristics. These characteristics were compared with the performance of machined seal elements. The 6.35 cm second stage Chevron seal assembly was tested using molded Chevrons fabricated from five molding materials. Impulse screening tests conducted over a range of 311 K to 478 K revealed thermal setting deficiencies in the aromatic polyimide molding materials. Seal elements fabricated from aromatic copolyester materials structurally failed during impulse cycle calibration. Endurance testing of 3.85 million cycles at 450 K using MIL-H-83283 fluid showed poorer seal performance with the unfilled aromatic polyimide material than had been attained with seals machined from Vespel SP-21 material. The 6.35 cm first stage step-cut compression loaded seal ring fabricated from copolyester injection molding material failed structurally during impulse cycle calibration. Molding of complex shape rod seals was shown to be a potentially controllable technique, but additional molding material property testing is recommended.
Banknotes and unattended cash transactions
NASA Astrophysics Data System (ADS)
Bernardini, Ronald R.
2000-04-01
There is a 64 billion dollar annual unattended cash transaction business in the US, with 10 to 20 million daily transactions. Even small problems with the machine readability of banknotes can quickly become a major problem for the machine manufacturer and consumer. Traditional note designs incorporate overt security features for visual validation by the public. Many of these features, such as fine line engraving, microprinting and watermarks, are unsuitable as machine readable features in low cost note acceptors. Current machine readable features, mostly covert, were designed and implemented with the central banks in mind. These features are only usable by the banks' large, high speed currency sorting and validation equipment. New note designs should consider and provide for low cost note acceptors, implementing features developed for inexpensive sensing technologies. Machine readable features are only as good as their consistency. The quality of security features, as well as that of the overall printing process, must be maintained to ensure reliable and secure operation of note readers. Variations in printing and in the components used to make the note are one of the major causes of poor performance in low cost note acceptors. The involvement of machine manufacturers in new currency designs will aid note producers in the design of a note that is machine friendly, helping to secure the acceptance of the note by the public as well as acting as a deterrent to fraud.
NASA Astrophysics Data System (ADS)
Jyothi, P. N.; Susmitha, M.; Sharan, P.
2017-04-01
Cutting fluids are used in machining industries for improving tool life, reducing work piece and thermal deformation, improving surface finish and flushing away chips from the cutting zone. Although the application of cutting fluids increases tool life and machining efficiency, it has major problems related to environmental impacts and health hazards, along with recycling and disposal. These problems opened the way for the introduction of mineral, vegetable and animal oils. These oils play an important role in improving various machining properties, including corrosion protection, lubricity, antibacterial protection, emulsibility and chemical stability. Compared to mineral oils, vegetable oils in general possess a high viscosity index, high flash point, high lubricity and low evaporative losses. Vegetable oils can be edible or non-edible, and various researchers have proved that edible vegetable oils, viz. palm oil, coconut oil, canola oil and soya bean oil, can be effectively used as eco-friendly cutting fluids in machining operations. However, the increasing demand of a growing worldwide population restricts the harnessing of edible oils for lubricant formulation. In the present work, non-edible vegetable oils such as neem and honge are used as cutting fluids for drilling of mild steel, and their effects on cutting temperature, hardness and surface roughness are investigated. The results obtained are compared with SAE 20W40 (a petroleum based cutting fluid) and the dry cutting condition.
Identification of Tool Wear when Machining Austenitic Steels and Titanium by Miniature Machining
NASA Astrophysics Data System (ADS)
Pilc, Jozef; Kameník, Roman; Varga, Daniel; Martinček, Juraj; Sadilek, Marek
2016-12-01
The application of miniature machining is currently increasing rapidly, mainly in the biomedical industry and in the machining of hard-to-machine materials. The machinability of materials with an increased level of toughness depends on factors that determine the final state of surface integrity. Because of this, it is necessary to achieve high precision (varying in microns) in miniature machining. To guarantee high machining precision, it is necessary to analyse tool wear intensity in direct interaction with the given machined materials. During a long-term cutting process, different cutting wedge deformations occur, leading in most cases to rapid wear and destruction of the cutting wedge. This article deals with experimental monitoring of tool wear intensity during miniature machining.
Reactor operations informal monthly report, May 1, 1995--May 31, 1995
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hauptman, H.M.; Petro, J.N.; Jacobi, O.
1995-05-01
This document is an informal progress report on the operational performance of the Brookhaven Medical Research Reactor and the Brookhaven High Flux Beam Reactor for the month of May 1995. Both machines ran well during this period, with no reportable instrumentation problems, all scheduled maintenance performed, and only one reportable occurrence, involving a particle on a vest button (personnel radioactive contamination).
High-speed machining of Space Shuttle External Tank (ET) panels
NASA Technical Reports Server (NTRS)
Miller, J. A.
1983-01-01
Potential production rates and project cost savings achieved by converting the conventional machining process in manufacturing shuttle external tank panels to high speed machining (HSM) techniques were studied. Savings were projected from the comparison of current production rates with HSM rates and with rates attainable on new conventional machines. The HSM estimates were also based on rates attainable by retrofitting existing conventional equipment with high speed spindle motors and rates attainable using new state of the art machines designed and built for HSM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angers, Crystal Plume; Bottema, Ryan; Buckley, Les
Purpose: Treatment unit uptime statistics are typically used to monitor radiation equipment performance. The Ottawa Hospital Cancer Centre has introduced the use of Quality Control (QC) test success as a quality indicator for equipment performance and overall health of the equipment QC program. Methods: Implemented in 2012, QATrack+ is used to record and monitor over 1100 routine machine QC tests each month for 20 treatment and imaging units ( http://qatrackplus.com/ ). Using an SQL (structured query language) script, automated queries of the QATrack+ database are used to generate program metrics such as the number of QC tests executed and the percentage of tests passing, at tolerance or at action. These metrics are compared against machine uptime statistics already reported within the program. Results: Program metrics for 2015 show good correlation between the pass rate of QC tests and uptime for a given machine. For the nine conventional linacs, the QC test success rate was consistently greater than 97%. The corresponding uptimes for these units are better than 98%. Machines that consistently show higher failure or tolerance rates in the QC tests have lower uptimes. This points either to poor machine performance requiring corrective action or to problems with the QC program. Conclusions: QATrack+ significantly improves the organization of QC data but can also aid in overall equipment management. Complementing machine uptime statistics with QC test metrics provides a more complete picture of overall machine performance and can be used to identify areas of improvement in the machine service and QC programs.
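A sketch of the kind of automated query described, against an assumed simplified table rather than the actual QATrack+ schema (table and column names below are illustrative assumptions):

```python
# Sketch: per-unit QC pass/tolerance/action rates from a QATrack+-style
# database. The table `qc_test_instances` and its columns are simplified
# assumptions, not the real QATrack+ schema.
import sqlite3

QUERY = """
SELECT unit,
       100.0 * SUM(status = 'ok')        / COUNT(*) AS pct_pass,
       100.0 * SUM(status = 'tolerance') / COUNT(*) AS pct_tolerance,
       100.0 * SUM(status = 'action')    / COUNT(*) AS pct_action
FROM qc_test_instances
WHERE work_completed >= date('now', '-1 month')
GROUP BY unit;
"""

with sqlite3.connect("qc.db") as conn:       # hypothetical database file
    for row in conn.execute(QUERY):
        print(row)
```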
NASA Astrophysics Data System (ADS)
Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.
2017-03-01
Nowadays quality plays a vital role in all products. Hence, developments in manufacturing focus on fabricating composites with high dimensional accuracy while incurring low manufacturing cost. In this work, an investigation of machining parameters has been performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized by employing three machining input parameters: drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing the individual tool parameters. Analysis of variance is used to find the significance of the individual parameters. The simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the greatest effect on material removal rate and surface roughness, followed by feed rate.
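The grey relational step can be sketched as follows; the toy response matrix and the customary zeta = 0.5 distinguishing coefficient are assumptions, not the paper's data:

```python
# Sketch of grey relational analysis: normalize each response, compute grey
# relational coefficients, and average them into one grade per run, which is
# then used to rank parameter settings.
import numpy as np

def grey_relational_grade(responses, larger_better, zeta=0.5):
    """responses: (runs x criteria); larger_better: one bool per criterion."""
    R = np.asarray(responses, dtype=float)
    span = R.max(axis=0) - R.min(axis=0)
    norm = np.where(larger_better,
                    (R - R.min(axis=0)) / span,        # larger-the-better
                    (R.max(axis=0) - R) / span)        # smaller-the-better
    delta = 1.0 - norm                                 # deviation from ideal
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)                          # one grade per run

# Toy data: columns = (material removal rate, larger better; Ra, smaller better)
data = [[12.0, 3.2], [15.0, 2.9], [10.0, 2.1], [14.0, 3.5]]
grades = grey_relational_grade(data, larger_better=[True, False])
print(grades, "best run:", int(grades.argmax()))
```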
Accurate Micro-Tool Manufacturing by Iterative Pulsed-Laser Ablation
NASA Astrophysics Data System (ADS)
Warhanek, Maximilian; Mayr, Josef; Dörig, Christian; Wegener, Konrad
2017-12-01
Iterative processing solutions, including multiple cycles of material removal and measurement, are capable of achieving higher geometric accuracy by compensating for most deviations manifesting directly on the workpiece. The remaining error sources are the measurement uncertainty and the repeatability of the material-removal process, including clamping errors. Due to the absence of processing forces, process fluids and wear, pulsed-laser ablation has proven highly repeatable and can be realized directly on a measuring machine. This work takes advantage of this possibility by implementing an iterative, laser-based correction process for profile deviations registered directly on an optical measurement machine. In this way, efficient iterative processing is enabled that is precise, applicable to all tool materials including diamond, and free of clamping errors. The concept is proven by a prototypical implementation on an industrial tool measurement machine with a nanosecond fibre laser. A number of measurements are performed on both the machine and the processed workpieces. Results show production deviations within a 2 μm diameter tolerance.
Operator-coached machine vision for space telerobotics
NASA Technical Reports Server (NTRS)
Bon, Bruce; Wilcox, Brian; Litwin, Todd; Gennery, Donald B.
1991-01-01
A prototype system for interactive object modeling has been developed and tested. The goal of this effort has been to create a system which demonstrates the feasibility of highly interactive, operator-coached machine vision in a realistic task environment, and to provide a testbed for experimentation with various modes of operator interaction. The purpose of such a system is to use human perception where machine vision is difficult, i.e., to segment the scene into objects and to designate their features, and to use machine vision to overcome limitations of human perception, i.e., for accurate measurement of object geometry. The system captures and displays video images from a number of cameras, allows the operator to designate a polyhedral object one edge at a time by moving a 3-D cursor within these images, performs a least-squares fit of the designated edges to edge data detected with a modified Sobel operator, and combines the edges thus detected to form a wire-frame object model that matches the Sobel data.
Topologies for three-phase wound-field salient rotor switched-flux machines for HEV applications
NASA Astrophysics Data System (ADS)
Khan, Faisal; Sulaiman, Erwan; Ahmad, Md Zarafi; Husin, Zhafir Aizat; Mazlan, Mohamed Mubin Aizat
2015-05-01
Wound-field switched-flux machines (WFSFMs) have an intrinsic simplicity and high speed that make them well suited to many hybrid electric vehicle (HEV) applications. However, overlapping armature and field windings raise the copper losses in these machines. Furthermore, previous designs used a segmented rotor, which made the rotor less robust. To overcome these problems, this paper presents novel topologies for three-phase wound-field switched-flux machines. Both the armature and field windings are located on the stator, and the rotor is composed of only a stack of iron. Non-overlapping armature and field windings and a toothed rotor are the clear advantages of these topologies, as the copper losses are reduced and the rotor becomes more robust. The design feasibility and performance of 12-slot machines with different rotor pole numbers are examined on the basis of a coil arrangement test, peak armature flux linkage, back EMF, cogging torque and average torque using Finite Element Analysis (FEA).
Practical Framework: Implementing OEE Method in Manufacturing Process Environment
NASA Astrophysics Data System (ADS)
Maideen, N. C.; Sahudin, S.; Mohd Yahya, N. H.; Norliawati, A. O.
2016-02-01
A manufacturing process environment requires reliable machinery in order to satisfy market demand. Ideally, a reliable machine is expected to operate and produce a quality product at its maximum designed capability. However, for various reasons, a machine is usually unable to achieve the desired performance. Since this performance shortfall affects the productivity of the system, a measurement technique should be applied. Overall Equipment Effectiveness (OEE) is a good method to measure the performance of the machine. The reliable result produced from OEE can then be used to propose a suitable corrective action. Many published papers discuss the purpose and benefits of OEE, covering the what and why factors. However, the how factor has rarely been revealed, especially the implementation of OEE in a manufacturing process environment. Thus, this paper presents a practical framework for implementing OEE, and a case study is discussed to explain each proposed step in detail. The proposed framework is beneficial to engineers, especially beginners, in starting to measure machine performance and later improving it.
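The metric at the core of the framework is OEE = availability x performance x quality; a minimal sketch with hypothetical shift figures:

```python
# Sketch of the OEE computation: availability x performance x quality.
# The shift figures in the example call are hypothetical.
def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    run_time = planned_time - downtime
    availability = run_time / planned_time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality

# 480 min shift, 47 min down, 1.0 min ideal cycle, 400 made, 380 good
print(f"OEE = {oee(480, 47, 1.0, 400, 380):.1%}")
```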
Method and apparatus for monitoring machine performance
Smith, Stephen F.; Castleberry, Kimberly N.
1996-01-01
Machine operating conditions can be monitored by analyzing, in either the time or frequency domain, the spectral components of the motor current. Changes in the electric background noise, induced by mechanical variations in the machine, are correlated to changes in the operating parameters of the machine.
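A minimal sketch of the spectral idea in this abstract, with a synthetic 60 Hz supply component and a small mechanically induced sideband standing in for real motor current (the frequencies and threshold are assumptions):

```python
# Sketch: transform motor current into the frequency domain and detect
# sideband components that shift with mechanical condition changes.
import numpy as np

fs = 5000.0                                   # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
current = np.sin(2 * np.pi * 60 * t) + 0.02 * np.sin(2 * np.pi * 53 * t)

spectrum = np.abs(np.fft.rfft(current)) / len(t)   # single-sided magnitude
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

peaks = freqs[spectrum > 0.005]               # crude threshold detector
print("spectral components near:", peaks)     # expect ~53 Hz and ~60 Hz
```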
High-Performance Medium- & Heavy-Duty Vehicles | Transportation Research
NASA Astrophysics Data System (ADS)
Adeyeri, Michael Kanisuru; Mpofu, Khumbulani
2017-06-01
The article centres on software system development for a manufacturing company that produces polyethylene bags using mostly conventional machines, in a competitive world where each business enterprise desires to stand tall. The software is meant to assist in gaining market share and in making maintenance and production decisions, through the dynamism and flexibility embedded in the package, as customer demand varies under the pressure of meeting set goals. The production and machine condition monitoring software (PMCMS) is programmed in C# and designed to support hardware integration, real-time machine condition monitoring based on a condition-based maintenance approach, maintenance decision suggestions, and suitable production strategies as product demand keeps changing in a highly competitive environment. PMCMS works with an embedded device which feeds it with data from the various machines being monitored at the workstation; the data are read at the base station via a wireless transceiver and stored in a database. A case study was used in the implementation of the developed system, and the results show that it can monitor machine health effectively, displaying machine health status, giving repair suggestions for probable faults, and deciding strategies for both production and maintenance, thus clearly enhancing maintenance performance.
Using machine learning algorithms to guide rehabilitation planning for home care clients.
Zhu, Mu; Zhang, Zhanyang; Hirdes, John P; Stolee, Paul
2007-12-20
Targeting older clients for rehabilitation is a clinical challenge and a research priority. We investigate the potential of machine learning algorithms - Support Vector Machine (SVM) and K-Nearest Neighbors (KNN) - to guide rehabilitation planning for home care clients. This study is a secondary analysis of data on 24,724 longer-term clients from eight home care programs in Ontario. Data were collected with the RAI-HC assessment system, in which the Activities of Daily Living Clinical Assessment Protocol (ADLCAP) is used to identify clients with rehabilitation potential. For study purposes, a client is defined as having rehabilitation potential if there was: i) improvement in ADL functioning, or ii) discharge home. SVM and KNN results are compared with those obtained using the ADLCAP. For comparison, the machine learning algorithms use the same functional and health status indicators as the ADLCAP. The KNN and SVM algorithms achieved similar, substantially improved performance over the ADLCAP, although false positive and false negative rates were still fairly high (FP > .18, FN > .34, versus FP > .29, FN > .58 for the ADLCAP). The results are used to suggest potential revisions to the ADLCAP. The machine learning algorithms achieved superior predictions compared with the current protocol. Machine learning results are less readily interpretable, but can also be used to guide development of improved clinical protocols.
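A hedged sketch of the SVM/KNN comparison on synthetic stand-ins for the RAI-HC indicators, reporting the same false positive and false negative rates discussed above:

```python
# Sketch: fit SVM and KNN on the same indicators and contrast FP/FN rates.
# Data are synthetic stand-ins, not the RAI-HC assessments.
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))               # functional/health indicators
y = (X[:, :3].sum(axis=1) + rng.normal(size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, clf in [("SVM", SVC()), ("KNN", KNeighborsClassifier())]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print(f"{name}: FP rate {fp / (fp + tn):.2f}, FN rate {fn / (fn + tp):.2f}")
```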
Implementing Molecular Dynamics on Hybrid High Performance Computers - Three-Body Potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Yamada, Masako
The use of coprocessors or accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, defined as machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. Although there has been extensive research into methods to efficiently use accelerators to improve the performance of molecular dynamics (MD) employing pairwise potential energy models, little is reported in the literature for models that include many-body effects. 3-body terms are required for many popular potentials such as MEAM, Tersoff, REBO, AIREBO, Stillinger-Weber, Bond-Order Potentials, and others. Because the per-atom simulation times are much higher for models incorporating 3-body terms, there is a clear need for efficient algorithms usable on hybrid high performance computers. Here, we report a shared-memory force-decomposition for 3-body potentials that avoids memory conflicts to allow for a deterministic code with substantial performance improvements on hybrid machines. We describe modifications necessary for use in distributed memory MD codes and show results for the simulation of water with Stillinger-Weber on the hybrid Titan supercomputer. We compare performance of the 3-body model to the SPC/E water model when using accelerators. Finally, we demonstrate that our approach can attain a speedup of 5.1 with acceleration on Titan for production simulations to study water droplet freezing on a surface.
Identifying Wrist Fracture Patients with High Accuracy by Automatic Categorization of X-ray Reports
de Bruijn, Berry; Cranney, Ann; O’Donnell, Siobhan; Martin, Joel D.; Forster, Alan J.
2006-01-01
The authors performed this study to determine the accuracy of several text classification methods to categorize wrist x-ray reports. We randomly sampled 751 textual wrist x-ray reports. Two expert reviewers rated the presence (n = 301) or absence (n = 450) of an acute fracture of wrist. We developed two information retrieval (IR) text classification methods and a machine learning method using a support vector machine (TC-1). In cross-validation on the derivation set (n = 493), TC-1 outperformed the two IR based methods and six benchmark classifiers, including Naive Bayes and a Neural Network. In the validation set (n = 258), TC-1 demonstrated consistent performance with 93.8% accuracy; 95.5% sensitivity; 92.9% specificity; and 87.5% positive predictive value. TC-1 was easy to implement and superior in performance to the other classification methods. PMID:16929046
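TC-1's implementation is not given in the abstract; a support-vector text classifier in its spirit can be sketched with a TF-IDF pipeline (the reports and labels below are invented examples, not the study's data):

```python
# Sketch of a support-vector text classifier for report triage, in the spirit
# of TC-1 (whose exact implementation is not published with the abstract).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reports = [
    "acute transverse fracture of the distal radius",
    "no acute fracture or dislocation identified",
    "comminuted fracture involving the ulnar styloid",
    "normal wrist series, soft tissues unremarkable",
]
labels = [1, 0, 1, 0]                          # 1 = acute wrist fracture present

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(reports, labels)
print(clf.predict(["nondisplaced fracture of the radius"]))
```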
NASA Technical Reports Server (NTRS)
Fatoohi, Rod; Saini, Subbash; Ciotti, Robert
2006-01-01
We study the performance of inter-process communication on four high-speed multiprocessor systems using a set of communication benchmarks. The goal is to identify certain limiting factors and bottlenecks with the interconnects of these systems, as well as to compare these interconnects. We measured network bandwidth using different numbers of communicating processors and communication patterns, such as point-to-point communication, collective communication, and dense communication patterns. The four platforms are: a 512-processor SGI Altix 3700 BX2 shared-memory machine with 3.2 GB/s links; a 64-processor (single-streaming) Cray X1 shared-memory machine with 32 1.6 GB/s links; a 128-processor Cray Opteron cluster using a Myrinet network; and a 1280-node Dell PowerEdge cluster with an InfiniBand network. Our results show the impact of the network bandwidth and topology on the overall performance of each interconnect.
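A minimal sketch of the kind of point-to-point benchmark used, a two-rank ping-pong that estimates link bandwidth (assuming mpi4py is available; this is not the authors' benchmark suite):

```python
# Sketch of a two-rank ping-pong bandwidth test.
# Run with: mpiexec -n 2 python pingpong.py
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nbytes, reps = 8 * 1024 * 1024, 20
buf = np.zeros(nbytes, dtype=np.uint8)

comm.Barrier()
start = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
elapsed = time.perf_counter() - start

if rank == 0:
    # 2 transfers per repetition; report bandwidth in GB/s
    print(f"bandwidth: {2 * reps * nbytes / elapsed / 1e9:.2f} GB/s")
```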
Using GPS to evaluate productivity and performance of forest machine systems
Steven E. Taylor; Timothy P. McDonald; Matthew W. Veal; Ton E. Grift
2001-01-01
This paper reviews recent research and operational applications of using GPS as a tool to help monitor the locations, travel patterns, performance, and productivity of forest machines. The accuracy of dynamic GPS data collected on forest machines under different levels of forest canopy is reviewed first. Then, the paper focuses on the use of GPS for monitoring forest...
NASA Astrophysics Data System (ADS)
Bhaumik, Munmun; Maity, Kalipada
Powder mixed electro discharge machining (PMEDM) is a further advancement of conventional electro discharge machining (EDM) in which powder particles are suspended in the dielectric medium to enhance the machining rate as well as the surface finish. Cryogenic treatment is introduced in this process to improve tool life and cutting tool properties. In the present investigation, characterization of the cryotreated tempered electrode was performed. An attempt has been made to study the effect of a cryotreated double tempered electrode on the radial overcut (ROC) when SiC powder is mixed in the kerosene dielectric during electro discharge machining of AISI 304. The process performance has been evaluated by means of ROC, with peak current, pulse on time, gap voltage, duty cycle and powder concentration considered as process parameters; machining is performed using tungsten carbide electrodes (untreated and double tempered). A regression analysis was performed to correlate the response with the process parameters. Microstructural analysis was carried out on the machined surfaces. The least radial overcut was observed for conventional EDM as compared to powder mixed EDM. The cryotreated double tempered electrode significantly reduced the radial overcut compared with the untreated electrode.
Influence of the Cutting Conditions in the Surface Finishing of Turned Pieces of Titanium Alloys
NASA Astrophysics Data System (ADS)
Huerta, M.; Arroyo, P.; Sánchez Carrilero, M.; Álvarez, M.; Salguero, J.; Marcos, M.
2009-11-01
Titanium is a material that, despite its high cost, is increasingly being introduced in the aerospace industry due to its weight, its mechanical properties and its corrosion potential, which is very close to that of carbon fiber based composite materials. This fact allows using Ti to form Fiber Metal Laminates. Machining operations are usually used in the manufacturing processes of Ti based aerospace structural elements. These elements must be machined under high surface finish requirements. Previous works have shown the relationship between the surface roughness and the tool changes in the first instants of turning processes. Building on these results, new tests have been performed in an aeronautical factory in order to analyse roughness in final pieces.
Articulated, Performance-Based Instruction Objectives Guide for Machine Shop Technology.
ERIC Educational Resources Information Center
Henderson, William Edward, Jr., Ed.
This articulation guide contains 21 units of instruction for two years of machine shop. The objectives of the program are to provide the student with the basic terminology and fundamental knowledge and skills in machining (year 1) and to teach him/her to set up and operate machine tools and make or repair metal parts, tools, and machines (year 2).…
Marks, Michał; Glinicki, Michał A.; Gibas, Karolina
2015-01-01
The aim of the study was to generate rules for the prediction of the chloride resistance of concrete modified with high calcium fly ash using machine learning methods. The rapid chloride permeability test, according to the Nordtest Method Build 492, was used for determining the chloride ions’ penetration in concrete containing high calcium fly ash (HCFA) for partial replacement of Portland cement. The results of the performed tests were used as the training set to generate rules describing the relation between material composition and the chloride resistance. Multiple methods for rule generation were applied and compared. The rules generated by algorithm J48 from the Weka workbench provided the means for adequate classification of plain concretes and concretes modified with high calcium fly ash as materials of good, acceptable or unacceptable resistance to chloride penetration. PMID:28793740
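A hedged sketch of tree-based rule generation in the spirit of Weka's J48, using scikit-learn's CART as a stand-in and synthetic mix-proportion features in place of the study's measurements:

```python
# Sketch of rule generation with a decision tree (CART here as a stand-in for
# Weka's J48/C4.5). Feature names mimic mix proportions; data are synthetic,
# not the paper's test results.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.uniform([300, 0.0, 0.35], [450, 0.5, 0.60], size=(200, 3))
# Toy rule: more HCFA and a lower water/binder ratio improve resistance.
score = 0.5 * X[:, 1] - (X[:, 2] - 0.45) + rng.normal(scale=0.05, size=200)
y = np.digitize(score, [0.0, 0.15])        # 0=unacceptable, 1=acceptable, 2=good

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["binder_kg_m3", "hcfa_fraction",
                                       "w_b_ratio"]))   # readable rules
```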
Performance of solar refrigerant ejector refrigerating machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Khalidy, N.A.H.
1997-12-31
In this work a detailed analysis of the ideal, theoretical, and experimental performance of a solar refrigerant ejector refrigerating machine is presented. A comparison of five refrigerants is made to select the most suitable one for the system. The theoretical analysis showed that refrigerant R-113 is the most suitable for use in the system. The influence of the boiler, condenser, and evaporator temperatures on system performance is investigated experimentally in a refrigerant ejector refrigerating machine using R-113 as the working refrigerant.
Liu, Ying-Pei; Liang, Hai-Ping; Gao, Zhong-Ke
2015-01-01
In order to improve the performance of voltage source converter-high voltage direct current (VSC-HVDC) system, we propose an improved auto-disturbance rejection control (ADRC) method based on least squares support vector machines (LSSVM) in the rectifier side. Firstly, we deduce the high frequency transient mathematical model of VSC-HVDC system. Then we investigate the ADRC and LSSVM principles. We ignore the tracking differentiator in the ADRC controller aiming to improve the system dynamic response speed. On this basis, we derive the mathematical model of ADRC controller optimized by LSSVM for direct current voltage loop. Finally we carry out simulations to verify the feasibility and effectiveness of our proposed control method. In addition, we employ the time-frequency representation methods, i.e., Wigner-Ville distribution (WVD) and adaptive optimal kernel (AOK) time-frequency representation, to demonstrate our proposed method performs better than the traditional method from the perspective of energy distribution in time and frequency plane.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank, R.N.
1990-02-28
The Inspection Shop at Lawrence Livermore Lab recently purchased a Sheffield Apollo RS50 Direct Computer Control Coordinate Measuring Machine. The performance of the machine was specified to conform to the B89 standard, which relies heavily upon using the measuring machine in its intended manner to verify its accuracy (rather than parametric tests). Although it would be possible to use the interactive measurement system to perform these tasks, a more thorough and efficient job can be done by creating Function Library programs for certain tasks which integrate the Hewlett-Packard Basic 5.0 language and calls to proprietary analysis and machine control routines. This combination provides efficient use of the measuring machine with a minimum of keyboard input, plus an analysis of the data with respect to the B89 standard rather than a CMM analysis which would require subsequent interpretation. This paper discusses some characteristics of the Sheffield machine control and analysis software and my use of the H-P Basic language to create automated measurement programs to support the B89 performance evaluation of the CMM.
Machine learning for outcome prediction of acute ischemic stroke post intra-arterial therapy.
Asadi, Hamed; Dowling, Richard; Yan, Bernard; Mitchell, Peter
2014-01-01
Stroke is a major cause of death and disability. Accurately predicting stroke outcome from a set of predictive variables may identify high-risk patients and guide treatment approaches, leading to decreased morbidity. Logistic regression models allow for the identification and validation of predictive variables. However, advanced machine learning algorithms offer an alternative, in particular for large-scale multi-institutional data, with the advantage of easily incorporating newly available data to improve prediction performance. Our aim was to design and compare different machine learning methods capable of predicting the outcome of endovascular intervention in acute anterior circulation ischaemic stroke. We conducted a retrospective study of a prospectively collected database of acute ischaemic stroke treated by endovascular intervention. Using SPSS®, MATLAB®, and Rapidminer®, classical statistics as well as artificial neural network and support vector algorithms were applied to design a supervised machine capable of classifying these predictors into potential good and poor outcomes. These algorithms were trained, validated and tested using randomly divided data. We included 107 consecutive acute anterior circulation ischaemic stroke patients treated by endovascular technique. Sixty-six were male, and the mean age was 65.3 years. All available demographic, procedural and clinical factors were included in the models. The final confusion matrix of the neural network demonstrated an overall congruency of ∼80% between the target and output classes, with favourable receiver operating characteristics. However, after optimisation, the support vector machine had relatively better performance, with a root mean squared error of 2.064 (SD: ±0.408). We showed promising accuracy of outcome prediction using supervised machine learning algorithms, with potential for incorporation of larger multicenter datasets, likely further improving prediction. Finally, we propose that a robust machine learning system can potentially optimise the selection process for endovascular versus medical treatment in the management of acute stroke.
NASA Astrophysics Data System (ADS)
Hiremath, Vijaykumar; Badiger, Pradeep; Auradi, V.; Dundur, S. T.; Kori, S. A.
2016-02-01
Amongst advanced materials, metal matrix composites (MMC) are gaining importance as materials for structural applications. In particular, particulate-reinforced aluminium MMCs have received considerable attention due to their superior properties, such as a high strength-to-weight ratio, excellent low-temperature performance, high wear resistance and high thermal conductivity. The present study compares the machinability of 6061Al alloy metal matrix composites reinforced with 37 μm and 88 μm B4C particulates, produced by the stir casting method. The microstructural characterization of the prepared composites is done using Scanning Electron Microscopy equipped with EDX analysis (Hitachi SU-1500 model) to identify the morphology and distribution of B4C particles in the 6061Al matrix. The specimens are turned on a conventional lathe using a Polycrystalline Diamond (PCD) tool to study the effect of particle size on the cutting forces and the surface roughness under varying machining parameters, viz., cutting speed (29-45 m/min), feed rate (0.11-0.33 mm/rev) and depth of cut (0.5-1 mm). The microstructural characterization revealed a fairly uniform distribution of B4C particles (for both the 37 μm and 88 μm cases) in the 6061Al matrix. The surface roughness of the composite is influenced by cutting speed, while the feed rate and depth of cut have an adverse influence on surface roughness. The cutting forces decreased with increasing cutting speed, whereas they increased with increasing feed and depth of cut. Higher cutting forces are noticed while machining the Al6061 base alloy compared to the reinforced composites. Surface finish is best when turning the 6061Al base alloy, and surface roughness is highest with the 88 μm particle-reinforced composite; as the particle size increases, surface roughness also increases.
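The reported trends (roughness falling with cutting speed, rising with feed, depth of cut and particle size) come from a factorial turning experiment. As a hedged sketch of how such main effects can be tabulated, the snippet below averages a response over all runs at each level of one factor; the Ra values are an entirely hypothetical toy model standing in for the profilometer data, and the level values are taken from the study's stated ranges.

```python
import itertools
import numpy as np

speeds = [29, 37, 45]        # cutting speed, m/min
feeds  = [0.11, 0.22, 0.33]  # feed rate, mm/rev
depths = [0.5, 0.75, 1.0]    # depth of cut, mm

# Full-factorial run list; Ra here is a toy model mimicking the
# reported trends, not measured data.
runs = list(itertools.product(speeds, feeds, depths))
ra = np.array([0.9 - 0.01 * v + 6.0 * f + 0.4 * d for v, f, d in runs])

def main_effect(factor_index, levels):
    """Mean Ra at each level of one factor, averaged over the others."""
    return {lv: ra[[i for i, r in enumerate(runs)
                    if r[factor_index] == lv]].mean()
            for lv in levels}

print("speed effect:", main_effect(0, speeds))
print("feed  effect:", main_effect(1, feeds))
print("depth effect:", main_effect(2, depths))
```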
Machine learning in heart failure: ready for prime time.
Awan, Saqib Ejaz; Sohel, Ferdous; Sanfilippo, Frank Mario; Bennamoun, Mohammed; Dwivedi, Girish
2018-03-01
The aim of this review is to present an up-to-date overview of the application of machine learning methods in heart failure, including diagnosis, classification, readmissions and medication adherence. Recent studies have shown that the application of machine learning techniques may have the potential to improve heart failure outcomes and management, including cost savings by improving existing diagnostic and treatment support systems. Recently developed deep learning methods are expected to yield even better performance than traditional machine learning techniques on complex tasks by learning the intricate patterns hidden in big medical data. The review summarizes the recent developments in the application of machine and deep learning methods in heart failure management.
Development of a small-scale computer cluster
NASA Astrophysics Data System (ADS)
Wilhelm, Jay; Smith, Justin T.; Smith, James E.
2008-04-01
An increase in demand for computing power in academia has created a need for high performance machines. The computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits to its performance, a cluster of computers, with the proper software, can multiply the performance of a single machine. Cluster computing has therefore become a much sought-after technology. Typical desktop computers could be used for cluster computing, but they are not intended for constant full-speed operation and take up more space than rack mount servers. Specialty computers that are designed to be used in clusters meet high availability and space requirements, but can be costly. A market segment exists where custom-built desktop computers can be arranged in a rack mount configuration, gaining the space savings of traditional rack mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components, multiplying the performance of a single desktop machine while minimizing occupied space and remaining cost effective.
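As a hedged illustration of how such a cluster multiplies single-machine throughput, the sketch below splits an embarrassingly parallel Monte Carlo estimate across MPI ranks, so the work per process shrinks as nodes are added. The mpi4py library and the launch command are assumptions about the software stack; the report does not specify its cluster middleware.

```python
# Minimal MPI sketch; run with e.g.  mpirun -np 8 python pi_cluster.py
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 1_000_000        # samples per process
random.seed(rank)    # independent stream on each rank
hits = sum(random.random()**2 + random.random()**2 <= 1.0
           for _ in range(N))

# Sum the partial counts on rank 0 and report the estimate
total = comm.reduce(hits, op=MPI.SUM, root=0)
if rank == 0:
    print("pi ~", 4.0 * total / (N * size))
```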
FINAL REPORT. DOE Grant Award Number DE-SC0004062
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiesa, Luisa
With the support of the DOE-OFES Early Career Award and Tufts startup support, the PI has developed experimental and analytical expertise in the electromechanical characterization of Low Temperature Superconductor (LTS) and High Temperature Superconductor (HTS) materials for high magnetic field applications. These superconducting wires and cables are used in fusion and high-energy physics magnet applications. In a short period of time, the PI has built a laboratory and research group with unique capabilities that include both experimental and numerical modeling efforts to improve the design and performance of superconducting cables and magnets. All the projects in the PI's laboratory explore the fundamental electromechanical behavior of superconductors, but the types of materials, geometries and operating conditions are chosen to be directly relevant to real machines, in particular fusion machines like ITER.