Sample records for custom computing machine

  1. Using Machine Learning and Data Analysis to Improve Customer Acquisition and Marketing in Residential Solar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sigrin, Benjamin O

    High customer acquisition costs remain a persistent challenge in the U.S. residential solar industry. Effective customer acquisition in the residential solar market is increasingly achieved with the help of data analysis and machine learning, whether that means more targeted advertising, understanding customer motivations, or responding to competitors. New research by the National Renewable Energy Laboratory, Sandia National Laboratories, Vanderbilt University, University of Pennsylvania, and the California Center for Sustainable Energy and funded through the U.S. Department of Energy's Solar Energy Evolution and Diffusion (SEEDS) program demonstrates novel computational methods that can help drive down costs in the residential solar industry.

  2. Custom hip prostheses by integrating CAD and casting technology

    NASA Astrophysics Data System (ADS)

    Silva, Pedro F.; Leal, Nuno; Neto, Rui J.; Lino, F. Jorge; Reis, Ana

    2012-09-01

Total Hip Arthroplasty (THA) is a surgical intervention that has been achieving high rates of success, leaving room for research on long-run durability, patient comfort and cost reduction. Even so, up to the present, little research has been done to improve the method of manufacturing customized prostheses. Common customized prostheses are made by full machining. This document presents a different methodology which combines the study of medical images through CAD (Computer Aided Design) software, SL (stereolithography) additive manufacturing, ceramic shell manufacture, precision foundry with titanium alloys and Computer Aided Manufacturing (CAM). The goal is to achieve the best comfort for the patient, the best stress distribution and the maximum lifetime of the prosthesis produced by this integrated methodology. The way to achieve this is to make custom hip prostheses adapted to each patient's needs and natural physiognomy. Not only is the process reliable, it also represents a cost reduction compared to the conventional fully machined custom hip prosthesis.

  3. Metric Use in the Tool Industry. A Status Report and a Test of Assessment Methodology.

    DTIC Science & Technology

    1982-04-20

Weights and Measures) CIM - Computer-Integrated Manufacturing CNC - Computer Numerical Control DOD - Department of Defense DODISS - DOD Index of...numerically-controlled (CNC) machines that have an inch-millimeter selection switch and a corresponding dual readout scale. The use of both metric...satisfactorily met the demands of both domestic and foreign customers for metric machine tools by providing either metric-capable machines or NC and CNC

  4. 25 CFR 542.13 - What are the minimum internal control standards for gaming machines?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    .... (j) Player tracking system. (1) The following standards apply if a player tracking system is utilized... image on the computer screen; (B) Comparing the customer to image on customer's picture ID; or (C...

  5. 25 CFR 542.13 - What are the minimum internal control standards for gaming machines?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    .... (j) Player tracking system. (1) The following standards apply if a player tracking system is utilized... image on the computer screen; (B) Comparing the customer to image on customer's picture ID; or (C...

  6. 25 CFR 542.13 - What are the minimum internal control standards for gaming machines?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    .... (j) Player tracking system. (1) The following standards apply if a player tracking system is utilized... image on the computer screen; (B) Comparing the customer to image on customer's picture ID; or (C...

  7. 25 CFR 542.13 - What are the minimum internal control standards for gaming machines?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    .... (j) Player tracking system. (1) The following standards apply if a player tracking system is utilized... image on the computer screen; (B) Comparing the customer to image on customer's picture ID; or (C...

  8. 25 CFR 542.13 - What are the minimum internal control standards for gaming machines?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    .... (j) Player tracking system. (1) The following standards apply if a player tracking system is utilized... image on the computer screen; (B) Comparing the customer to image on customer's picture ID; or (C...

  9. Diamond turning machine controller implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrard, K.P.; Taylor, L.W.; Knight, B.F.

The standard controller for a Pneumo ASG 2500 Diamond Turning Machine, an Allen Bradley 8200, has been replaced with a custom high-performance design. This controller consists of four major components. Axis position feedback information is provided by a Zygo Axiom 2/20 laser interferometer with 0.1 micro-inch resolution. Hardware interface logic couples the computer's digital and analog I/O channels to the diamond turning machine's analog motor controllers, the laser interferometer, and other machine status and control information. It also provides front panel switches for operator override of the computer controller and implements the emergency stop sequence. The remaining two components, the control computer hardware and software, are discussed in detail below.

  10. Employment Opportunities for the Handicapped in Programmable Automation.

    ERIC Educational Resources Information Center

    Swift, Richard; Leneway, Robert

    A Computer Integrated Manufacturing System may make it possible for severely disabled people to custom design, machine, and manufacture either wood or metal parts. Programmable automation merges computer aided design, computer aided manufacturing, computer aided engineering, and computer integrated manufacturing systems with automated production…

  11. Volumetric visualization algorithm development for an FPGA-based custom computing machine

    NASA Astrophysics Data System (ADS)

    Sallinen, Sami J.; Alakuijala, Jyrki; Helminen, Hannu; Laitinen, Joakim

    1998-05-01

    Rendering volumetric medical images is a burdensome computational task for contemporary computers due to the large size of the data sets. Custom designed reconfigurable hardware could considerably speed up volume visualization if an algorithm suitable for the platform is used. We present an algorithm and speedup techniques for visualizing volumetric medical CT and MR images with a custom-computing machine based on a Field Programmable Gate Array (FPGA). We also present simulated performance results of the proposed algorithm calculated with a software implementation running on a desktop PC. Our algorithm is capable of generating perspective projection renderings of single and multiple isosurfaces with transparency, simulated X-ray images, and Maximum Intensity Projections (MIP). Although more speedup techniques exist for parallel projection than for perspective projection, we have constrained ourselves to perspective viewing, because of its importance in the field of radiotherapy. The algorithm we have developed is based on ray casting, and the rendering is sped up by three different methods: shading speedup by gradient precalculation, a new generalized version of Ray-Acceleration by Distance Coding (RADC), and background ray elimination by speculative ray selection.
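
    Below is a minimal NumPy sketch of the baseline algorithm the abstract names: perspective-projection ray casting that produces a Maximum Intensity Projection (MIP) of a volume. The paper's gradient precalculation, RADC ray acceleration and speculative ray selection are not modeled; all array names and sampling parameters here are illustrative assumptions.

    ```python
    import numpy as np

    def mip_render(volume, eye, look, up, size=64, fov_deg=40.0, n_samples=256):
        """Cast one perspective ray per pixel from `eye` toward `look`; keep the max sample."""
        fwd = (look - eye) / np.linalg.norm(look - eye)
        right = np.cross(fwd, up); right /= np.linalg.norm(right)
        upv = np.cross(right, fwd)
        half = np.tan(np.radians(fov_deg) / 2.0)
        t = np.linspace(0.0, np.linalg.norm(volume.shape), n_samples)
        img = np.zeros((size, size))
        for py in range(size):
            for px in range(size):
                sx = (2.0 * (px + 0.5) / size - 1.0) * half
                sy = (2.0 * (py + 0.5) / size - 1.0) * half
                ray = fwd + sx * right + sy * upv
                ray /= np.linalg.norm(ray)
                pts = np.round(eye + np.outer(t, ray)).astype(int)   # nearest-neighbour samples
                ok = np.all((pts >= 0) & (pts < volume.shape), axis=1)
                if ok.any():
                    img[py, px] = volume[pts[ok, 0], pts[ok, 1], pts[ok, 2]].max()
        return img

    # toy volume: a bright sphere, viewed from outside the volume
    zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
    vol = (((zz - 32)**2 + (yy - 32)**2 + (xx - 32)**2) < 15**2).astype(float)
    image = mip_render(vol, eye=np.array([32.0, 32.0, -80.0]),
                       look=np.array([32.0, 32.0, 32.0]), up=np.array([0.0, 1.0, 0.0]))
    ```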

  12. Motorcycle Diaries

    ERIC Educational Resources Information Center

    Gibbs, Hope J.

    2005-01-01

    This article relates the experiences of Jeff Fischer, an instructor in the Computer Integrated Machining department at South Central College (SCC) in North Mankato, Minnesota. Facing dwindling student enrollment and possible departmental budget costs, Fischer was able to turn his passion for custom-built cycles and the intricate machining that…

  13. Navigating the Challenges of the Cloud

    ERIC Educational Resources Information Center

    Ovadia, Steven

    2010-01-01

    Cloud computing is increasingly popular in education. Cloud computing is "the delivery of computer services from vast warehouses of shared machines that enables companies and individuals to cut costs by handing over the running of their email, customer databases or accounting software to someone else, and then accessing it over the internet."…

  14. A Talking Computers System for Persons with Vision and Speech Handicaps. Final Report.

    ERIC Educational Resources Information Center

    Visek & Maggs, Urbana, IL.

    This final report contains a detailed description of six software systems designed to assist individuals with blindness and/or speech disorders in using inexpensive, off-the-shelf computers rather than expensive custom-made devices. The developed software is not written in the native machine language of any particular brand of computer, but in the…

  15. New layer-based imaging and rapid prototyping techniques for computer-aided design and manufacture of custom dental restoration.

    PubMed

    Lee, M-Y; Chang, C-C; Ku, Y C

    2008-01-01

    Fixed dental restoration by conventional methods greatly relies on the skill and experience of the dental technician. The quality and accuracy of the final product depends mostly on the technician's subjective judgment. In addition, the traditional manual operation involves many complex procedures, and is a time-consuming and labour-intensive job. Most importantly, no quantitative design and manufacturing information is preserved for future retrieval. In this paper, a new device for scanning the dental profile and reconstructing 3D digital information of a dental model based on a layer-based imaging technique, called abrasive computer tomography (ACT) was designed in-house and proposed for the design of custom dental restoration. The fixed partial dental restoration was then produced by rapid prototyping (RP) and computer numerical control (CNC) machining methods based on the ACT scanned digital information. A force feedback sculptor (FreeForm system, Sensible Technologies, Inc., Cambridge MA, USA), which comprises 3D Touch technology, was applied to modify the morphology and design of the fixed dental restoration. In addition, a comparison of conventional manual operation and digital manufacture using both RP and CNC machining technologies for fixed dental restoration production is presented. Finally, a digital custom fixed restoration manufacturing protocol integrating proposed layer-based dental profile scanning, computer-aided design, 3D force feedback feature modification and advanced fixed restoration manufacturing techniques is illustrated. The proposed method provides solid evidence that computer-aided design and manufacturing technologies may become a new avenue for custom-made fixed restoration design, analysis, and production in the 21st century.

  16. A semi-automated process for the production of custom-made shoes

    NASA Technical Reports Server (NTRS)

    Farmer, Franklin H.

    1991-01-01

A more efficient, cost-effective and timely way of designing and manufacturing custom footwear is needed. A potential solution to this problem lies in the use of computer-aided design and manufacturing (CAD/CAM) techniques in the production of custom shoes. A prototype computer-based system was developed; it is primarily a software entity which directs and controls a 3-D scanner, a lathe or milling machine, and a pattern-cutting machine to produce the shoe last and the components to be assembled into a shoe. The steps in this process are: (1) scan the surface of the foot to obtain a 3-D image; (2) thin the foot surface data and create a tiled wire model of the foot; (3) interactively modify the wire model of the foot to produce a model of the shoe last; (4) machine the last; (5) scan the surface of the last and verify that it correctly represents the last model; (6) design cutting patterns for shoe uppers; (7) cut uppers; (8) machine an inverse mold for the shoe innersole/sole combination; (9) mold the innersole/sole; and (10) assemble the shoe. For all its capabilities, this system still requires the direction and assistance of skilled operators, and shoemakers to assemble the shoes. Currently, the system is running on a SUN3/260 workstation with a TAAC application accelerator. The software elements of the system are written in either Fortran or C and run under a UNIX operating system.

  17. Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)

    NASA Technical Reports Server (NTRS)

    Dalton, Shelly D.; Daley, Philip C.

    1988-01-01

As the hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether or not, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.

  18. Distributed communications and control network for robotic mining

    NASA Technical Reports Server (NTRS)

    Schiffbauer, William H.

    1989-01-01

The application of robotics to coal mining machines is one approach pursued to increase productivity while providing enhanced safety for the coal miner. Toward that end, a network composed of microcontrollers, computers, expert systems, real-time operating systems, and a variety of programming languages is being integrated to act as the backbone for intelligent machine operation. Actual mining machines, including a few customized ones, have been given telerobotic semiautonomous capabilities by applying the described network. Control devices, intelligent sensors and computers onboard these machines are showing promise of achieving improved mining productivity and safety benefits. Current research using these machines involves navigation, multiple machine interaction, machine diagnostics, mineral detection, and graphical machine representation. Guidance sensors and systems employed include sonar, laser rangers, gyroscopes, magnetometers, clinometers, and accelerometers. Information on the network of hardware/software and its implementation on mining machines is presented. Anticipated coal production operations using the network are discussed. A parallel is also drawn between the direction of present-day underground coal mining research and how the lunar soil (regolith) may be mined. A conceptual lunar mining operation that employs a distributed communication and control network is detailed.

  19. Engineering specification and system design for CAD/CAM of custom shoes: UMC project effort

    NASA Technical Reports Server (NTRS)

    Bao, Han P.

    1991-01-01

    The goal of this project is to supplement the footwear design system of North Carolina State University (NCSU) with a software module to design and manufacture a combination sole. The four areas of concentration were: customization of NASCAD (NASA Computer Aided Design) to the footwear project; use of CENCIT data; computer aided manufacturing activities; and beginning work for the bottom elements of shoes. The task of generating a software module for producing a sole was completed with a demonstrated product realization. The software written in C was delivered to NCSU for inclusion in their design system for custom footwear known as LASTMOD. The machining process of the shoe last was improved using a spiral tool path approach.

  20. Scanning Electron Microscopy Analysis of the Adaptation of Single-Unit Screw-Retained Computer-Aided Design/Computer-Aided Manufacture Abutments After Mechanical Cycling.

    PubMed

    Markarian, Roberto Adrian; Galles, Deborah Pedroso; Gomes França, Fabiana Mantovani

To measure the microgap between dental implants and custom abutments fabricated using different computer-aided design/computer-aided manufacture (CAD/CAM) methods before and after mechanical cycling. CAD software (Dental System, 3Shape) was used to design a custom abutment for a single-unit, screw-retained crown compatible with a 4.1-mm external hexagon dental implant. The resulting stereolithography file was sent for manufacturing using four CAD/CAM methods (n = 40): milling and sintering of zirconium dioxide (ZO group), cobalt-chromium (Co-Cr) sintered via selective laser melting (SLM group), fully sintered machined Co-Cr alloy (MM group), and machined and sintered agglutinated Co-Cr alloy powder (AM group). Prefabricated titanium abutments (TI group) were used as controls. Each abutment was placed on a dental implant measuring 4.1 × 11 mm (SA411, SIN) inserted into an aluminum block. Measurements were taken using scanning electron microscopy (SEM) (×4,000) on four regions of the implant-abutment interface (IAI) and at a relative distance of 90 degrees from each other. The specimens were mechanically aged (1 million cycles, 2 Hz, 100 N, 37°C) and the IAI width was measured again using the same approach. Data were analyzed using two-way analysis of variance, followed by the Tukey test. After mechanical cycling, the best adaptation results were obtained from the TI (2.29 ± 1.13 μm), AM (3.58 ± 1.80 μm), and MM (1.89 ± 0.98 μm) groups. A significantly worse adaptation outcome was observed for the SLM (18.40 ± 20.78 μm) and ZO (10.42 ± 0.80 μm) groups. Mechanical cycling had a marked effect only on the AM specimens, which significantly increased the microgap at the IAI. Custom abutments fabricated using fully sintered machined Co-Cr alloy and machined and sintered agglutinated Co-Cr alloy powder demonstrated the best adaptation results at the IAI, similar to those obtained with commercial prefabricated titanium abutments after mechanical cycling. The adaptation of custom abutments made by means of SLM or milling and sintering of zirconium dioxide was worse both before and after mechanical cycling.

  1. Cybernetic Serendipity: Behind the Paradox of Machine Assisted Art Lies a Boundless World of Creativity.

    ERIC Educational Resources Information Center

    Peterson, Dale

    1984-01-01

    Discusses the works of Darcy Gerbarg, Ruth Leavitt, David Em, Duane Palyka, and Harold Cohen, visual artists who work with computers to create art works by relying on standard hardware/software tools, using custom tools created for nonartistic tasks, manipulating images at the programing level, and programing creativity into computers themselves.…

  2. Programmable Pulse-Position-Modulation Encoder

    NASA Technical Reports Server (NTRS)

    Zhu, David; Farr, William

    2006-01-01

    A programmable pulse-position-modulation (PPM) encoder has been designed for use in testing an optical communication link. The encoder includes a programmable state machine and an electronic code book that can be updated to accommodate different PPM coding schemes. The encoder includes a field-programmable gate array (FPGA) that is programmed to step through the stored state machine and code book and that drives a custom high-speed serializer circuit board that is capable of generating subnanosecond pulses. The stored state machine and code book can be updated by means of a simple text interface through the serial port of a personal computer.
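
    A minimal Python sketch of the PPM scheme described above: each M-bit symbol selects one pulse slot per frame via a code-book lookup. The slot count, guard slots and identity code book are illustrative assumptions, not details of the NASA encoder.

    ```python
    # Pulse-position modulation: one pulse per frame, its slot index chosen
    # by looking the symbol up in a (here trivial) code book.
    def make_codebook(bits_per_symbol):
        """Map each symbol value to the index of its pulse slot."""
        return {s: s for s in range(2 ** bits_per_symbol)}   # identity mapping

    def ppm_encode(data_bits, bits_per_symbol=4, guard_slots=2):
        codebook = make_codebook(bits_per_symbol)
        n_slots = 2 ** bits_per_symbol
        frames = []
        for i in range(0, len(data_bits), bits_per_symbol):
            symbol = int("".join(map(str, data_bits[i:i + bits_per_symbol])), 2)
            frame = [0] * (n_slots + guard_slots)
            frame[codebook[symbol]] = 1                      # one pulse per frame
            frames.extend(frame)
        return frames

    # 8 data bits -> two 16-slot PPM frames (plus guard slots)
    print(ppm_encode([1, 0, 1, 1, 0, 0, 1, 0]))
    ```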

  3. Development of a QFD-based expert system for CNC turning centre selection

    NASA Astrophysics Data System (ADS)

    Prasad, Kanika; Chakraborty, Shankar

    2015-12-01

Computer numerical control (CNC) machine tools are automated devices capable of generating complicated and intricate product shapes in a shorter time. Selection of the best CNC machine tool is a critical, complex and time-consuming task due to the availability of a wide range of alternatives and the conflicting nature of several evaluation criteria. Although past researchers have attempted to select appropriate machining centres using different knowledge-based systems, mathematical models and multi-criteria decision-making methods, none of those approaches has given due importance to the voice of customers. The aforesaid limitation can be overcome using the quality function deployment (QFD) technique, which is a systematic approach for integrating customers' needs and designing the product to meet those needs first time and every time. In this paper, the adopted QFD-based methodology helps in selecting CNC turning centres for a manufacturing organization, giving due importance to the voice of customers to meet their requirements. An expert system based on the QFD technique is developed in Visual BASIC 6.0 to automate the CNC turning centre selection procedure for different production plans. Three illustrative examples are demonstrated to explain the real-time applicability of the developed expert system.
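
    The core QFD computation such a system automates can be sketched in a few lines: customer-requirement weights are propagated through a requirements-to-characteristics relationship matrix to rank the technical characteristics of candidate turning centres. All weights, characteristic names and relationship strengths below are invented for illustration.

    ```python
    import numpy as np

    customer_weights = np.array([0.5, 0.3, 0.2])    # e.g. accuracy, speed, price
    # relationship matrix: rows = customer needs, cols = technical characteristics
    # (spindle power, positioning accuracy, tool capacity); strengths on the 0/1/3/9 scale
    R = np.array([[9, 3, 1],
                  [3, 9, 1],
                  [1, 3, 9]])

    importance = customer_weights @ R               # absolute importance ratings
    relative = importance / importance.sum()        # normalised for comparison
    for name, w in zip(["spindle power", "positioning accuracy", "tool capacity"], relative):
        print(f"{name}: {w:.2f}")
    ```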

  4. On the impact of approximate computation in an analog DeSTIN architecture.

    PubMed

    Young, Steven; Lu, Junjie; Holleman, Jeremy; Arel, Itamar

    2014-05-01

    Deep machine learning (DML) holds the potential to revolutionize machine learning by automating rich feature extraction, which has become the primary bottleneck of human engineering in pattern recognition systems. However, the heavy computational burden renders DML systems implemented on conventional digital processors impractical for large-scale problems. The highly parallel computations required to implement large-scale deep learning systems are well suited to custom hardware. Analog computation has demonstrated power efficiency advantages of multiple orders of magnitude relative to digital systems while performing nonideal computations. In this paper, we investigate typical error sources introduced by analog computational elements and their impact on system-level performance in DeSTIN--a compositional deep learning architecture. These inaccuracies are evaluated on a pattern classification benchmark, clearly demonstrating the robustness of the underlying algorithm to the errors introduced by analog computational elements. A clear understanding of the impacts of nonideal computations is necessary to fully exploit the efficiency of analog circuits.
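
    A minimal sketch of the style of experiment described: inject typical analog error sources (gain error, offset, additive noise) into a computation's multiply-accumulate operations and compare against the exact result. The error model and magnitudes are illustrative assumptions, not the measured characteristics of the DeSTIN circuits.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_dot(w, x, gain_sigma=0.05, offset_sigma=0.01, noise_sigma=0.01):
        """Dot product with per-multiply gain/offset errors plus output noise."""
        gains = 1.0 + rng.normal(scale=gain_sigma, size=w.shape)
        offsets = rng.normal(scale=offset_sigma, size=w.shape)
        products = (w * gains) * x + offsets        # each multiply is nonideal
        return products.sum() + rng.normal(scale=noise_sigma)

    w = rng.normal(size=100)
    x = rng.normal(size=100)
    print(f"exact={w @ x:.3f}  approximate={noisy_dot(w, x):.3f}")
    ```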

  5. Personal customizing exercise with a wearable measurement and control unit.

    PubMed

    Wang, Zhihui; Kiryu, Tohru; Tamura, Naoki

    2005-06-28

    Recently, wearable technology has been used in various health-related fields to develop advanced monitoring solutions. However, the monitoring function alone cannot meet all the requirements of customizing machine-based exercise on an individual basis by relying on biosignal-based controls. We propose a new wearable unit design equipped with measurement and control functions to support the customization process. The wearable unit can measure the heart rate and electromyogram signals during exercise performance and output workload control commands to the exercise machines. The workload is continuously tracked with exercise programs set according to personally customized workload patterns and estimation results from the measured biosignals by a fuzzy control method. Exercise programs are adapted by relying on a computer workstation, which communicates with the wearable unit via wireless connections. A prototype of the wearable unit was tested together with an Internet-based cycle ergometer system to demonstrate that it is possible to customize exercise on an individual basis. We tested the wearable unit in nine people to assess its suitability to control cycle ergometer exercise. The results confirmed that the unit could successfully control the ergometer workload and continuously support gradual changes in physical activities. The design of wearable units equipped with measurement and control functions is an important step towards establishing a convenient and continuously supported wellness environment.
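
    A minimal sketch of biosignal-driven workload control in the spirit of the unit described: a rule-based (fuzzy-style) update that steers ergometer workload toward a target heart rate. The thresholds, step sizes and function names are illustrative assumptions, not the authors' control law.

    ```python
    def workload_command(heart_rate, target_hr, workload, step=5.0):
        """Return an updated workload (watts) from the measured heart rate."""
        error = heart_rate - target_hr
        if error > 10:          # much too high -> reduce load strongly
            return max(workload - 2 * step, 0.0)
        elif error > 3:         # slightly high -> reduce gently
            return max(workload - step, 0.0)
        elif error < -10:       # much too low -> increase strongly
            return workload + 2 * step
        elif error < -3:        # slightly low -> increase gently
            return workload + step
        return workload         # within band: hold

    w = 100.0
    for hr in [110, 118, 127, 134, 141, 138]:       # simulated HR samples
        w = workload_command(hr, target_hr=130, workload=w)
        print(f"HR {hr} -> workload {w:.0f} W")
    ```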

  6. CloVR: a virtual machine for automated and portable sequence analysis from the desktop using cloud computing.

    PubMed

    Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian

    2011-08-30

Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines that are distributed pre-packaged with pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high throughput data processing.

  7. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment

    PubMed Central

    2011-01-01

    Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements. PMID:21798025

  8. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment.

    PubMed

    Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott

    2011-07-28

    Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements.
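
    A minimal sketch of the data-set-specific method selection that such a workflow automates, with scikit-learn models standing in for AZOrange's interfaced learners; the descriptors, activities and candidate set are synthetic assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))                  # molecular descriptors (synthetic)
    y = X[:, 0] * 2 - X[:, 1] + rng.normal(scale=0.1, size=200)   # activity endpoint

    candidates = {"RF": RandomForestRegressor(n_estimators=100, random_state=0),
                  "SVM": SVR(C=10.0),
                  "Ridge": Ridge(alpha=1.0)}
    # cross-validated accuracy drives the data-set-specific choice of method
    scores = {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
              for name, m in candidates.items()}
    best = max(scores, key=scores.get)
    print(scores, "-> selected:", best)
    ```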

  9. Applying CBR to machine tool product configuration design oriented to customer requirements

    NASA Astrophysics Data System (ADS)

    Wang, Pengjia; Gong, Yadong; Xie, Hualong; Liu, Yongxian; Nee, Andrew Yehching

    2017-01-01

    Product customization is a trend in the current market-oriented manufacturing environment. However, deduction from customer requirements to design results and evaluation of design alternatives are still heavily reliant on the designer's experience and knowledge. To solve the problem of fuzziness and uncertainty of customer requirements in product configuration, an analysis method based on the grey rough model is presented. The customer requirements can be converted into technical characteristics effectively. In addition, an optimization decision model for product planning is established to help the enterprises select the key technical characteristics under the constraints of cost and time to serve the customer to maximal satisfaction. A new case retrieval approach that combines the self-organizing map and fuzzy similarity priority ratio method is proposed in case-based design. The self-organizing map can reduce the retrieval range and increase the retrieval efficiency, and the fuzzy similarity priority ratio method can evaluate the similarity of cases comprehensively. To ensure that the final case has the best overall performance, an evaluation method of similar cases based on grey correlation analysis is proposed to evaluate similar cases to select the most suitable case. Furthermore, a computer-aided system is developed using MATLAB GUI to assist the product configuration design. The actual example and result on an ETC series machine tool product show that the proposed method is effective, rapid and accurate in the process of product configuration. The proposed methodology provides a detailed instruction for the product configuration design oriented to customer requirements.

  10. Hybrid Cloud Computing Environment for EarthCube and Geoscience Community

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Qin, H.

    2016-12-01

The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resource synchronizing and bursting between private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience community can deploy and manage applications using base virtual machine images or customized virtual machines, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating their projects to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and some other EarthCube building blocks. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs (e.g. images, port numbers, usable cloud capacity, etc.) of each project in advance based on communications between ECITE and participant projects; the scientists or IT technicians in those projects then launch one or multiple virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents or data without having to deal with the heterogeneity in structure and operations among different cloud platforms.

  11. When Machines Think: Radiology's Next Frontier.

    PubMed

    Dreyer, Keith J; Geis, J Raymond

    2017-12-01

    Artificial intelligence (AI), machine learning, and deep learning are terms now seen frequently, all of which refer to computer algorithms that change as they are exposed to more data. Many of these algorithms are surprisingly good at recognizing objects in images. The combination of large amounts of machine-consumable digital data, increased and cheaper computing power, and increasingly sophisticated statistical models combine to enable machines to find patterns in data in ways that are not only cost-effective but also potentially beyond humans' abilities. Building an AI algorithm can be surprisingly easy. Understanding the associated data structures and statistics, on the other hand, is often difficult and obscure. Converting the algorithm into a sophisticated product that works consistently in broad, general clinical use is complex and incompletely understood. To show how these AI products reduce costs and improve outcomes will require clinical translation and industrial-grade integration into routine workflow. Radiology has the chance to leverage AI to become a center of intelligently aggregated, quantitative, diagnostic information. Centaur radiologists, formed as a synergy of human plus computer, will provide interpretations using data extracted from images by humans and image-analysis computer algorithms, as well as the electronic health record, genomics, and other disparate sources. These interpretations will form the foundation of precision health care, or care customized to an individual patient. © RSNA, 2017.

  12. Design and milling manufacture of polyurethane custom contoured cushions for wheelchair users.

    PubMed

    da Silva, Fabio Pinto; Beretta, Elisa Marangon; Prestes, Rafael Cavalli; Kindlein Junior, Wilson

    2011-01-01

The design of custom contoured cushions manufactured in flexible polyurethane foams is an option to improve positioning and comfort for people with disabilities who spend most of the day seated in the same position. These surfaces increase the contact area between the seat and the user, which contributes to minimising the local pressures that can generate problems like decubitus ulcers. The present research aims at establishing development routes for custom cushion production for wheelchair users. This study also contributes to the investigation of Computer Numerical Control (CNC) machining of flexible polyurethane foams. The proposed route to obtain the customised seat began with acquiring the user's contour in an adequate posture through a plaster cast. To collect the surface geometry, the cast was three-dimensionally scanned and manipulated in CAD/CAM software. CNC milling parameters such as tools, spindle speeds and feed rates to machine flexible polyurethane foams were tested and analysed with regard to surface quality. The best parameters were then tested in a customised seat. Possible dimensional changes generated during foam cutting were analysed through 3D scanning, and the customised seat's pressure and temperature distribution was tested. The best parameters found for foams with a density of 50 kg/m(3) were high spindle speeds (24000 rpm) and feed rates between 2400 and 4000 mm/min. Those parameters did not generate significant deformities in the machined cushions. The custom contoured cushion satisfactorily increased the contact area between wheelchair and user, as it distributed pressure and heat evenly. Through this study it was possible to define routes for the development and manufacture of customised seats using direct CNC milling in flexible polyurethane foams. It also showed that custom contoured cushions efficiently distribute pressure and temperature, which is believed to minimise tissue lesions such as pressure ulcers.

  13. Machine learning action parameters in lattice quantum chromodynamics

    NASA Astrophysics Data System (ADS)

    Shanahan, Phiala E.; Trewartha, Daniel; Detmold, William

    2018-05-01

    Numerical lattice quantum chromodynamics studies of the strong interaction are important in many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. The high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.
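
    A minimal PyTorch sketch of the general idea of a custom, symmetry-aware layer: here the regression output is made invariant under a sign flip of the input by construction. The specific symmetry and architecture are stand-ins chosen for illustration, not the layers introduced in the paper.

    ```python
    import torch
    import torch.nn as nn

    class SignSymmetrize(nn.Module):
        """Average the wrapped module over x and -x, enforcing f(x) == f(-x)."""
        def __init__(self, inner):
            super().__init__()
            self.inner = inner

        def forward(self, x):
            return 0.5 * (self.inner(x) + self.inner(-x))

    net = SignSymmetrize(nn.Sequential(
        nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1)))

    x = torch.randn(8, 64)                      # batch of toy "configurations"
    assert torch.allclose(net(x), net(-x))      # symmetry holds by construction
    ```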

  14. CloVR: A virtual machine for automated and portable sequence analysis from the desktop using cloud computing

    PubMed Central

    2011-01-01

Background Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines that are distributed pre-packaged with pre-configured software. Results We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high throughput data processing. PMID:21878105

  15. Personal customizing exercise with a wearable measurement and control unit

    PubMed Central

    Wang, Zhihui; Kiryu, Tohru; Tamura, Naoki

    2005-01-01

    Background Recently, wearable technology has been used in various health-related fields to develop advanced monitoring solutions. However, the monitoring function alone cannot meet all the requirements of customizing machine-based exercise on an individual basis by relying on biosignal-based controls. We propose a new wearable unit design equipped with measurement and control functions to support the customization process. Methods The wearable unit can measure the heart rate and electromyogram signals during exercise performance and output workload control commands to the exercise machines. The workload is continuously tracked with exercise programs set according to personally customized workload patterns and estimation results from the measured biosignals by a fuzzy control method. Exercise programs are adapted by relying on a computer workstation, which communicates with the wearable unit via wireless connections. A prototype of the wearable unit was tested together with an Internet-based cycle ergometer system to demonstrate that it is possible to customize exercise on an individual basis. Results We tested the wearable unit in nine people to assess its suitability to control cycle ergometer exercise. The results confirmed that the unit could successfully control the ergometer workload and continuously support gradual changes in physical activities. Conclusion The design of wearable units equipped with measurement and control functions is an important step towards establishing a convenient and continuously supported wellness environment. PMID:15982425

  16. Comparative study of manufacturing condyle implant using rapid prototyping and CNC machining

    NASA Astrophysics Data System (ADS)

    Bojanampati, S.; Karthikeyan, R.; Islam, MD; Venugopal, S.

    2018-04-01

Injuries to the cranio-maxillofacial area caused by road traffic accidents (RTAs), falls from heights, birth defects, metabolic disorders and tumors affect a rising number of patients in the United Arab Emirates (UAE), and require maxillofacial surgery. Mandibular reconstruction poses a specific challenge in both functionality and aesthetics, and involves replacement of the damaged bone by a custom made implant. Due to material, design cycle time and manufacturing process time, such implants are in many instances not affordable to patients. In this paper, the feasibility of designing and manufacturing a low-cost, custom made condyle implant is assessed using two different approaches, consisting of rapid prototyping and three-axis computer numerically controlled (CNC) machining. Two candidate rapid prototyping techniques are considered, namely fused deposition modeling (FDM) and three-dimensional printing followed by sand casting. The feasibility of the proposed manufacturing processes is evaluated based on manufacturing time, cost, quality, and reliability.

  17. GRAPE project

    NASA Astrophysics Data System (ADS)

    Makino, Junichiro

    2002-12-01

We overview our GRAvity PipE (GRAPE) project to develop special-purpose computers for astrophysical N-body simulations. The basic idea of GRAPE is to attach a custom-built computer dedicated to the calculation of gravitational interactions between particles to a general-purpose programmable computer. With this hybrid architecture, we can achieve both a wide range of applications and very high peak performance. Our newest machine, GRAPE-6, achieved a peak speed of 32 Tflops, and sustained performance of 11.55 Tflops, for a total budget of about 4 million USD. We also discuss the relative advantages of special-purpose and general-purpose computers and the future of high-performance computing for science and technology.
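
    The kernel GRAPE hardwires is the O(N²) direct sum of pairwise softened gravitational accelerations; a minimal NumPy sketch follows. Particle counts, units and the softening length are illustrative.

    ```python
    import numpy as np

    def accelerations(pos, mass, eps=1e-2, G=1.0):
        """Direct-sum gravitational acceleration on every particle."""
        diff = pos[None, :, :] - pos[:, None, :]           # r_j - r_i for all pairs
        dist2 = (diff ** 2).sum(-1) + eps ** 2             # softened |r|^2
        inv_d3 = dist2 ** -1.5
        np.fill_diagonal(inv_d3, 0.0)                      # no self-interaction
        return G * (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)

    rng = np.random.default_rng(1)
    pos = rng.normal(size=(1000, 3))
    mass = np.full(1000, 1.0 / 1000)
    acc = accelerations(pos, mass)
    print(acc.shape)        # (1000, 3); this pairwise kernel is what GRAPE accelerates
    ```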

  18. An acceleration framework for synthetic aperture radar algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youngsoo; Gloster, Clay S.; Alexander, Winser E.

    2017-04-01

    Algorithms for radar signal processing, such as Synthetic Aperture Radar (SAR) are computationally intensive and require considerable execution time on a general purpose processor. Reconfigurable logic can be used to off-load the primary computational kernel onto a custom computing machine in order to reduce execution time by an order of magnitude as compared to kernel execution on a general purpose processor. Specifically, Field Programmable Gate Arrays (FPGAs) can be used to accelerate these kernels using hardware-based custom logic implementations. In this paper, we demonstrate a framework for algorithm acceleration. We used SAR as a case study to illustrate the potential for algorithm acceleration offered by FPGAs. Initially, we profiled the SAR algorithm and implemented a homomorphic filter using a hardware implementation of the natural logarithm. Experimental results show a linear speedup by adding reasonably small processing elements in Field Programmable Gate Array (FPGA) as opposed to using a software implementation running on a typical general purpose processor.
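
    A minimal NumPy sketch of a homomorphic filter like the one profiled in the case study: take the logarithm of the image (the operation the FPGA implements in custom logic), apply a high-emphasis filter in the frequency domain, then exponentiate. The cutoff and gain values are illustrative assumptions.

    ```python
    import numpy as np

    def homomorphic_filter(image, cutoff=0.1, low_gain=0.5, high_gain=1.5):
        log_img = np.log1p(image)                   # multiplicative model -> additive
        spec = np.fft.fft2(log_img)
        fy = np.fft.fftfreq(image.shape[0])[:, None]
        fx = np.fft.fftfreq(image.shape[1])[None, :]
        radius = np.sqrt(fx ** 2 + fy ** 2)
        # Gaussian high-emphasis filter: damp illumination, boost reflectance
        H = low_gain + (high_gain - low_gain) * (1 - np.exp(-(radius / cutoff) ** 2))
        filtered = np.real(np.fft.ifft2(spec * H))
        return np.expm1(filtered)                   # undo the log

    img = np.abs(np.random.default_rng(0).normal(size=(128, 128))) + 1.0
    out = homomorphic_filter(img)
    ```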

  19. Effects of virtualization on a scientific application - Running a hyperspectral radiative transfer code on virtual machines.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tikotekar, Anand A; Vallee, Geoffroy R; Naughton III, Thomas J

    2008-01-01

The topic of system-level virtualization has recently begun to receive interest for high performance computing (HPC). This is in part due to the isolation and encapsulation offered by the virtual machine. These traits enable applications to customize their environments and maintain consistent software configurations in their virtual domains. Additionally, there are mechanisms that can be used for fault tolerance, like live virtual machine migration. Given these attractive benefits of virtualization, a fundamental question arises: how does this affect my scientific application? We use this as the premise for our paper and observe a real-world scientific code running on a Xen virtual machine. We studied the effects of running a radiative transfer simulation, Hydrolight, on a virtual machine. We discuss our methodology and report observations regarding the usage of virtualization with this application.

  20. A smarter way to search, share and utilize open-spatial online data for energy R&D - Custom machine learning and GIS tools in U.S. DOE's virtual data library & laboratory, EDX

    NASA Astrophysics Data System (ADS)

    Rose, K.; Bauer, J.; Baker, D.; Barkhurst, A.; Bean, A.; DiGiulio, J.; Jones, K.; Jones, T.; Justman, D.; Miller, R., III; Romeo, L.; Sabbatino, M.; Tong, A.

    2017-12-01

As spatial datasets are increasingly accessible through open, online systems, the opportunity to use these resources to address a range of Earth system questions grows. Simultaneously, there is a need for better infrastructure and tools to find and utilize these resources. We will present examples of advanced online computing capabilities, hosted in the U.S. DOE's Energy Data eXchange (EDX), that address these needs for earth-energy research and development. In one study, the computing team developed a custom, machine learning, big data computing tool designed to parse the web and return priority datasets to appropriate servers to develop an open-source global oil and gas infrastructure database. The results of this spatial smart search approach were validated against expert-driven, manual search results, which required a team of seven spatial scientists three months to produce. The custom machine learning tool parsed online, open systems, including zip files, ftp sites and other web-hosted resources, in a matter of days. The resulting resources were integrated into a geodatabase now hosted for open access via EDX. Beyond identifying and accessing authoritative, open spatial data resources, there is also a need for more efficient tools to ingest, perform, and visualize multi-variate, spatial data analyses. Within the EDX framework, there is a growing suite of processing, analytical and visualization capabilities that allow multi-user teams to work more efficiently in private, virtual workspaces. An example of these capabilities is a set of five custom spatio-temporal models and data tools that form NETL's Offshore Risk Modeling suite, which can be used to quantify oil spill risks and impacts. Coupling the data and advanced functions from EDX with these advanced spatio-temporal models has culminated in an integrated web-based decision-support tool. This platform has capabilities to identify and combine data across scales and disciplines, evaluate potential environmental, social, and economic impacts, highlight knowledge or technology gaps, and reduce uncertainty for a range of 'what if' scenarios relevant to oil spill prevention efforts. These examples illustrate EDX's growing capabilities for advanced spatial data search and analysis to support geo-data science needs.

  1. The Classification and Evaluation of Computer-Aided Software Engineering Tools

    DTIC Science & Technology

    1990-09-01

International Business Machines Corporation Customizer is a Registered Trademark of Index Technology Corporation Data Analyst is a Registered Trademark of...years, a rapid series of new approaches have been adopted including: information engineering, entity-relationship modeling, automatic code generation...support true information sharing among tools and automated consistency checking. Moreover, the repository must record and manage the relationships and

  2. Machine learning action parameters in lattice quantum chromodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Shanahan, Phiala; Trewartha, Daniel; Detmold, William

Numerical lattice quantum chromodynamics studies of the strong interaction underpin theoretical understanding of many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. Finally, the high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.

  3. Machine learning action parameters in lattice quantum chromodynamics

    DOE PAGES

Shanahan, Phiala; Trewartha, Daniel; Detmold, William

    2018-05-16

Numerical lattice quantum chromodynamics studies of the strong interaction underpin theoretical understanding of many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. Finally, the high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.

  4. Best face forward.

    PubMed

    Rayport, Jeffrey F; Jaworski, Bernard J

    2004-12-01

    Most companies serve customers through a broad array of interfaces, from retail sales clerks to Web sites to voice-response telephone systems. But while the typical company has an impressive interface collection, it doesn't have an interface system. That is, the whole set does not add up to the sum of its parts in its ability to provide service and build customer relationships. Too many people and too many machines operating with insufficient coordination (and often at cross-purposes) mean rising complexity, costs, and customer dissatisfaction. In a world where companies compete not on what they sell but on how they sell it, turning that liability into an asset is what separates winners from losers. In this adaptation of their forthcoming book by the same title, Jeffrey Rayport and Bernard Jaworski explain how companies must reengineer their customer interface systems for optimal efficiency and effectiveness. Part of that transformation, they observe, will involve a steady encroachment by machine interfaces into areas that have long been the sacred province of humans. Managers now have opportunities unprecedented in the history of business to use machines, not just people, to credibly manage their interactions with customers. Because people and machines each have their strengths and weaknesses, company executives must identify what people do best, what machines do best, and how to deploy them separately and together. Front-office reengineering subjects every current and potential service interface to an analysis of opportunities for substitution (using machines instead of people), complementarity (using a mix of machines and people), and displacement (using networks to shift physical locations of people and machines), with the twin objectives of compressing costs and driving top-line growth through increased customer value.

  5. Body-Machine Interface Enables People With Cervical Spinal Cord Injury to Control Devices With Available Body Movements: Proof of Concept.

    PubMed

    Abdollahi, Farnaz; Farshchiansadegh, Ali; Pierella, Camilla; Seáñez-González, Ismael; Thorp, Elias; Lee, Mei-Hua; Ranganathan, Rajiv; Pedersen, Jessica; Chen, David; Roth, Elliot; Casadio, Maura; Mussa-Ivaldi, Ferdinando

    2017-05-01

    This study tested the use of a customized body-machine interface (BoMI) for enhancing functional capabilities in persons with cervical spinal cord injury (cSCI). The interface allows people with cSCI to operate external devices by reorganizing their residual movements. This was a proof-of-concept phase 0 interventional nonrandomized clinical trial. Eight cSCI participants wore a custom-made garment with motion sensors placed on the shoulders. Signals derived from the sensors controlled a computer cursor. A standard algorithm extracted the combinations of sensor signals that best captured each participant's capacity for controlling a computer cursor. Participants practiced with the BoMI for 24 sessions over 12 weeks performing 3 tasks: reaching, typing, and game playing. Learning and performance were evaluated by the evolution of movement time, errors, smoothness, and performance metrics specific to each task. Through practice, participants were able to reduce the movement time and the distance from the target at the 1-second mark in the reaching task. They also made straighter and smoother movements while reaching to different targets. All participants became faster in the typing task and more skilled in game playing, as the pong hit rate increased significantly with practice. The results provide proof-of-concept for the customized BoMI as a means for people with absent or severely impaired hand movements to control assistive devices that otherwise would be manually operated.
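
    One standard way to extract such sensor combinations is principal component analysis of calibration-phase recordings, with the two leading components driving the 2-D cursor. The sketch below assumes this approach and an arbitrary sensor count; it is not a description of the trial's exact algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    calibration = rng.normal(size=(500, 8))     # 500 samples x 8 shoulder-sensor channels

    # PCA via SVD: the top two principal components become the cursor map
    mean = calibration.mean(axis=0)
    _, _, Vt = np.linalg.svd(calibration - mean, full_matrices=False)
    W = Vt[:2].T                                # 8-channel -> 2-D projection

    def cursor_position(sensors, gain=1.0):
        """Map one frame of sensor readings to an (x, y) cursor position."""
        return gain * (sensors - mean) @ W

    print(cursor_position(rng.normal(size=8)))
    ```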

  6. Customization of user interfaces to reduce errors and enhance user acceptance.

    PubMed

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  7. Development of a small-scale computer cluster

    NASA Astrophysics Data System (ADS)

    Wilhelm, Jay; Smith, Justin T.; Smith, James E.

    2008-04-01

    Increasing demand for computing power in academia has created a need for high-performance machines. The computing power of a single processor has been steadily increasing but lags behind the demand for fast simulations. Since a single processor has hard performance limits, a cluster of computers running the proper software can multiply the performance of a single machine. Cluster computing has therefore become a much sought-after technology. Typical desktop computers could be used for cluster computing, but they are not intended for constant full-speed operation and take up more space than rack-mount servers. Specialty computers designed for cluster use meet high-availability and space requirements but can be costly. A market segment exists where custom-built desktop computers can be arranged in rack mounts, gaining the space savings of traditional rack-mount computers while remaining cost-effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster built from off-the-shelf components can multiply the performance of a single desktop machine while minimizing occupied space and remaining cost-effective.
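    The record does not name the cluster's software stack; MPI is the usual choice for this class of machine. A minimal sketch, assuming Python with mpi4py, of splitting one simulation step across the nodes:

        # Run with e.g.: mpiexec -n 4 python sim.py   (hypothetical filename)
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        N = 1_000_000                    # total work items in the simulation
        chunk = range(rank * N // size, (rank + 1) * N // size)

        # Each node computes its share; the sum is a stand-in for one real step.
        local = sum(np.sin(i * 1e-6) for i in chunk)

        # Combine partial results on rank 0.
        total = comm.reduce(local, op=MPI.SUM, root=0)
        if rank == 0:
            print("result:", total)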

  8. Implementing Scientific Simulation Codes Highly Tailored for Vector Architectures Using Custom Configurable Computing Machines

    NASA Technical Reports Server (NTRS)

    Rutishauser, David

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters that attempts to minimize execution time, while staying within resource constraints. The flexibility of using a custom reconfigurable implementation is exploited in a unique manner to leverage the lessons learned in vector supercomputer development. The vector processing framework is tailored to the application, with variable parameters that are fixed in traditional vector processing. Benchmark data that demonstrates the functionality and utility of the approach is presented. The benchmark data includes an identified bottleneck in a real case study example vector code, the NASA Langley Terminal Area Simulation System (TASS) application.
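    The abstract frames the mapping task as a constrained optimization. As an illustrative formulation (not the author's notation), with T the predicted execution time of the application under architecture parameters p, and R_j the usage of FPGA resources (logic, block RAM, multipliers) with budgets B_j:

        \begin{align*}
        \min_{p \in P}\; & T(\mathrm{app},\, p) \\
        \text{s.t.}\;    & R_j(p) \le B_j, \qquad j = 1, \dots, m
        \end{align*}

    Here P is the discrete space of framework parameters (e.g., number of vector lanes, pipeline depths) that are fixed in a traditional vector supercomputer but left variable in the reconfigurable framework.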

  9. Novel fully integrated computer system for custom footwear: from 3D digitization to manufacturing

    NASA Astrophysics Data System (ADS)

    Houle, Pascal-Simon; Beaulieu, Eric; Liu, Zhaoheng

    1998-03-01

    This paper presents a recently developed custom footwear system, which integrates 3D digitization technology, range image fusion techniques, a 3D graphical environment for corrective actions, parametric curved surface representation and computer numerical control (CNC) machining. In this system, a support designed with the help of biomechanics experts can stabilize the foot in a correct and neutral position. The foot surface is then captured by a 3D camera using active ranging techniques. Software using a library of documented foot pathologies suggests corrective actions for the orthosis. Three kinds of deformations can be achieved. The first method uses pad surfaces previously scanned by our 3D scanner, which can be mapped onto the foot surface to modify its shape locally. The second is the construction of B-spline surfaces, manipulating control points and modifying knot vectors in a 3D graphical environment to build the desired deformation. The last is a manual electronic 3D pen, available in different shapes and sizes, with adjustable 'pressure'. All applied deformations must respect G1 surface continuity, which ensures that the surface can accommodate a foot. Once the surface modification process is completed, the resulting data is sent to manufacturing software for CNC machining.
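    As a rough illustration of the control-point mechanism (not the paper's own code), the following SciPy snippet evaluates a single bicubic B-spline patch and deforms it by raising one control point; the grid sizes and offset are arbitrary:

        import numpy as np
        from scipy.interpolate import bisplev

        # One bicubic patch: 4x4 control points, clamped knot vectors.
        kx = ky = 3
        tx = ty = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
        heights = np.zeros((4, 4))     # flat patch: all control heights zero
        heights[1, 2] = 5.0            # raise one control point -> local bump
        tck = [tx, ty, heights.ravel(), kx, ky]

        u = v = np.linspace(0, 1, 5)
        print(np.round(bisplev(u, v, tck), 2))   # deformed heights on a grid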

  10. ORBIT: an integrated environment for user-customized bioinformatics tools.

    PubMed

    Bellgard, M I; Hiew, H L; Hunter, A; Wiebrands, M

    1999-10-01

    There are a large number of computational programs freely available to bioinformaticians via a client/server, web-based environment. However, the client interface to these tools (typically an html form page) cannot be customized from the client side as it is created by the service provider. The form page is usually generic enough to cater for a wide range of users. However, this implies that a user cannot set as 'default' advanced program parameters on the form or even customize the interface to his/her specific requirements or preferences. Currently, there is a lack of end-user interface environments that can be modified by the user when accessing computer programs available on a remote server running on an intranet or over the Internet. We have implemented a client/server system called ORBIT (Online Researcher's Bioinformatics Interface Tools) where individual clients can have interfaces created and customized to command-line-driven, server-side programs. Thus, Internet-based interfaces can be tailored to a user's specific bioinformatic needs. As interfaces are created on the client machine independent of the server, there can be different interfaces to the same server-side program to cater for different parameter settings. The interface customization is relatively quick (between 10 and 60 min) and all client interfaces are integrated into a single modular environment which will run on any computer platform supporting Java. The system has been developed to allow for a number of future enhancements and features. ORBIT represents an important advance in the way researchers gain access to bioinformatics tools on the Internet.

  11. An Effective Mechanism for Virtual Machine Placement using Aco in IAAS Cloud

    NASA Astrophysics Data System (ADS)

    Shenbaga Moorthy, Rajalakshmi; Fareentaj, U.; Divya, T. K.

    2017-08-01

    Cloud computing provides an effective way to dynamically provision numerous resources to meet customer demands. A major challenge for cloud providers is designing efficient mechanisms for optimal virtual machine placement (OVMP). Such mechanisms enable the cloud providers to effectively utilize their available resources and obtain higher profits. To provide appropriate resources to clients, an optimal virtual machine placement algorithm is proposed. Virtual machine placement is an NP-hard problem, and such problems can be tackled with heuristic algorithms. In this paper, Ant Colony Optimization (ACO)-based virtual machine placement is proposed. The proposed system focuses on minimizing the cost of each plan for hosting virtual machines in a multiple-cloud-provider environment; the response time of each cloud provider is also monitored periodically so as to minimize delay in providing resources to users. The performance of the proposed algorithm is compared with a greedy mechanism. The proposed algorithm is simulated in the Eclipse IDE. The results clearly show that the proposed algorithm minimizes cost, response time, and the number of migrations.
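    The record gives no pseudocode; the following is a generic ACO placement sketch in Python (pheromone over VM-host pairs; cost combines hosting price with a response-time penalty), with all sizes, weights, and parameters invented for illustration:

        import numpy as np

        rng = np.random.default_rng(1)
        n_vms, n_hosts, n_ants, n_iters = 6, 3, 20, 50
        price = rng.uniform(1, 5, (n_vms, n_hosts))   # hosting cost per VM/host
        latency = rng.uniform(0, 1, n_hosts)          # monitored response time
        cost_of = lambda a: price[np.arange(n_vms), a].sum() + latency[a].sum()

        tau = np.ones((n_vms, n_hosts))               # pheromone trails
        best, best_cost = None, np.inf
        for _ in range(n_iters):
            for _ in range(n_ants):
                # Each ant assigns every VM, biased by pheromone and cheapness.
                desirability = tau * (1.0 / (price + latency))
                probs = desirability / desirability.sum(axis=1, keepdims=True)
                assign = np.array([rng.choice(n_hosts, p=probs[v])
                                   for v in range(n_vms)])
                c = cost_of(assign)
                if c < best_cost:
                    best, best_cost = assign, c
            tau *= 0.9                                        # evaporation
            tau[np.arange(n_vms), best] += 1.0 / best_cost    # reinforce best

        print("placement:", best, "cost:", round(best_cost, 2))

    Evaporation keeps old trails from dominating, while reinforcing the best placement found so far biases later ants toward low-cost assignments.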

  12. DeepX: Deep Learning Accelerator for Restricted Boltzmann Machine Artificial Neural Networks.

    PubMed

    Kim, Lok-Won

    2018-05-01

    Although there have been many decades of research and commercial presence on high-performance general-purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully applied in a wide variety of applications, but its heavy computation demand has considerably limited practical deployment. This paper proposes a fully pipelined acceleration architecture to alleviate the high computational demand of one class of artificial neural network (ANN), the restricted Boltzmann machine (RBM). The implemented RBM ANN accelerator (integrating network size, using 128 input cases per batch, and running at a 303-MHz clock frequency) integrated in a state-of-the-art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T) provides a computational performance of 301 billion connection-updates-per-second and about 193 times higher performance than a software solution running on general-purpose processors. Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with previous work when both are implemented in an FPGA device (XC2VP70).
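    For context on what a "connection update" is, here is a minimal CPU-side numpy sketch of the standard contrastive-divergence (CD-1) weight update for an RBM (the generic algorithm, with biases omitted for brevity, not the paper's hardware design):

        import numpy as np

        rng = np.random.default_rng(0)
        sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

        n_visible, n_hidden, batch, lr = 16, 8, 128, 0.1
        W = rng.normal(0, 0.01, (n_visible, n_hidden))  # one weight per connection

        v0 = (rng.random((batch, n_visible)) > 0.5).astype(float)  # input batch

        # Positive phase: sample hidden units given the data.
        h0_prob = sigmoid(v0 @ W)
        h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
        # Negative phase: one step of Gibbs sampling (the "1" in CD-1).
        v1_prob = sigmoid(h0 @ W.T)
        h1_prob = sigmoid(v1_prob @ W)

        # Each entry of this gradient is one connection update; an accelerator's
        # connection-updates-per-second measures how fast it streams these.
        W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / batch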

  13. Implementation of Multispectral Image Classification on a Remote Adaptive Computer

    NASA Technical Reports Server (NTRS)

    Figueiredo, Marco A.; Gloster, Clay S.; Stephens, Mark; Graves, Corey A.; Nakkar, Mouna

    1999-01-01

    As the demand for higher performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays enable the implementation of algorithms at the hardware gate level, leading to orders of magnitude performance increases over microprocessor-based systems. The automatic classification of spaceborne multispectral images is an example of a computation-intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm implemented on a typical general-purpose computer.
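    A probabilistic neural network classifies a pixel by per-class kernel density estimation over training samples. A minimal numpy sketch (band counts, class structure, and the smoothing width sigma are invented):

        import numpy as np

        def pnn_classify(x, train_X, train_y, sigma=0.5):
            """Classify feature vector x with a probabilistic neural network.

            Pattern layer: one Gaussian kernel per training sample.
            Summation layer: average kernel activation per class.
            Output layer: argmax over the class densities.
            """
            d2 = ((train_X - x) ** 2).sum(axis=1)
            k = np.exp(-d2 / (2 * sigma ** 2))
            classes = np.unique(train_y)
            density = [k[train_y == c].mean() for c in classes]
            return classes[int(np.argmax(density))]

        # Toy multispectral pixels: 4 bands, 2 land-cover classes.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0.2, 0.05, (50, 4)),
                       rng.normal(0.7, 0.05, (50, 4))])
        y = np.array([0] * 50 + [1] * 50)
        print(pnn_classify(rng.normal(0.7, 0.05, 4), X, y))   # expect class 1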

  14. Individual titanium zygomatic implant

    NASA Astrophysics Data System (ADS)

    Nekhoroshev, M. V.; Ryabov, K. N.; Avdeev, E. V.

    2018-03-01

    Custom individual implants for the reconstruction of craniofacial defects have gained importance due to their better qualitative characteristics compared with generic plates, which must be bent to fit the patient. Additive manufacturing of individual implants reduces cost and improves implant quality. In this paper, the authors describe the design of zygomatic implant models based on computed tomography (CT) data. The implants will be fabricated by 3D printing on an SLM 280HL selective laser melting machine.

  15. The Walter Reed performance assessment battery.

    PubMed

    Thorne, D R; Genser, S G; Sing, H C; Hegge, F W

    1985-01-01

    This paper describes technical details of a computerized psychological test battery designed for examining the effects of various state-variables on a representative sample of normal psychomotor, perceptual and cognitive tasks. The duration, number and type of tasks can be customized to different experimental needs, and then administered and analyzed automatically, at intervals as short as one hour. The battery can be run on either the Apple-II family of computers or on machines compatible with the IBM-PC.

  16. Custom-Machined Miniplates and Bone-Supported Guides for Orthognathic Surgery: A New Surgical Procedure.

    PubMed

    Brunso, Joan; Franco, Maria; Constantinescu, Thomas; Barbier, Luis; Santamaría, Joseba Andoni; Alvarez, Julio

    2016-05-01

    Several surgical strategies exist to improve accuracy in orthognathic surgery, but ideal planning and treatment have yet to be described. The purpose of this study was to present and assess the accuracy of a virtual orthognathic positioning system (OPS), based on the use of bone-supported guides for placement of custom, highly rigid, machined titanium miniplates produced using computer-aided design and computer-aided manufacturing technology. An institutional review board-approved prospective observational study was designed to evaluate our early experience with the OPS. The inclusion criteria were as follows: adult patients classified as skeletal Class II or III who were candidates for orthognathic surgery, or candidates for maxillomandibular advancement as a treatment for obstructive sleep apnea. Reverse planning with computed tomography and modeling software was performed. Our OPS was designed to avoid the use of intermaxillary fixation and occlusal splints. The minimum follow-up period was 1 year. Six patients were enrolled in the study. The custom OPS miniplates fit perfectly with the anterior buttress of the maxilla and the mandible body surface intraoperatively. To evaluate accuracy, the postoperative 3-dimensional reconstructed computed tomography image and the presurgical plan were compared. In the maxillary fragments that underwent less than 6 mm of advancement, the OPS enabled an SD of 0.14 mm (92% within 1 mm) at the upper maxilla and 0.34 mm (86% within 1 mm) at the mandible. For large advancements of more than 10 mm, the SD was 1.33 mm (66% within 1 mm) at the upper maxilla and 0.67 mm (73% within 1 mm) at the mandibular level. Our novel OPS was safe and well tolerated, providing positional control with considerable surgical accuracy. The OPS simplified surgery by being independent of support from the opposite maxilla and obviating the need for classic intermaxillary occlusal splints. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  17. Framework for architecture-independent run-time reconfigurable applications

    NASA Astrophysics Data System (ADS)

    Lehn, David I.; Hudson, Rhett D.; Athanas, Peter M.

    2000-10-01

    Configurable Computing Machines (CCMs) have emerged as a technology with the computational benefits of custom ASICs as well as the flexibility and reconfigurability of general-purpose microprocessors. Significant effort from the research community has focused on techniques to move this reconfigurability from a rapid application development tool to a run-time tool. This requires the ability to change the hardware design while the application is executing, known as Run-Time Reconfiguration (RTR). Widespread acceptance of run-time reconfigurable custom computing depends upon the existence of high-level automated design tools. Such tools must reduce the designer's effort to port applications between different platforms as the architecture, hardware, and software evolve. A Java implementation of a high-level application framework, called Janus, is presented here. In this environment, developers create Java classes that describe the structural behavior of an application. The framework allows hardware and software modules to be freely mixed and interchanged. A compilation phase of the development process analyzes the structure of the application and adapts it to the target platform. Janus is capable of structuring the run-time behavior of an application to take advantage of the memory and computational resources available.

  18. VIEW OF MICROMACHINING, HIGH PRECISION EQUIPMENT USED TO CUSTOM MAKE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF MICRO-MACHINING, HIGH PRECISION EQUIPMENT USED TO CUSTOM MAKE SMALL PARTS. LUMPS OF CLAY, SHOWN IN THE PHOTOGRAPH, WERE USED TO STABILIZE PARTS BEING MACHINED. (11/1/87) - Rocky Flats Plant, Stainless Steel & Non-Nuclear Components Manufacturing, Southeast corner of intersection of Cottonwood & Third Avenues, Golden, Jefferson County, CO

  19. An expert fitness diagnosis system based on elastic cloud computing.

    PubMed

    Tseng, Kevin C; Wu, Chia-Chuan

    2014-01-01

    This paper presents an expert diagnosis system based on cloud computing. It classifies a user's fitness level using supervised machine learning techniques. The system is able to learn and make customized diagnoses according to the user's physiological data, such as age, gender, and body mass index (BMI). In addition, an elastic algorithm based on the Poisson distribution is presented to allocate computation resources dynamically. It predicts the resources required in the future according to the exponential moving average of past observations. The experimental results show that Naïve Bayes is the best classifier, with the highest accuracy (90.8%), and that the elastic algorithm is able to closely capture the trend of requests generated from the Internet and thus assign corresponding computation resources to ensure quality of service.
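    The prediction step the record describes is an exponential moving average of past request observations; a minimal Python sketch of that step (the smoothing factor and per-VM capacity are illustrative):

        import math

        def ema_forecast(observations, alpha=0.3):
            """Exponential moving average of past request counts.

            The next-period demand is predicted as the EMA of the observed
            series; alpha weights recent observations more heavily.
            """
            s = observations[0]
            for x in observations[1:]:
                s = alpha * x + (1 - alpha) * s
            return s

        requests_per_min = [120, 135, 150, 170, 160, 180]   # toy request counts
        predicted = ema_forecast(requests_per_min)
        vms_needed = math.ceil(predicted / 50)   # assume 50 requests per VM
        print(round(predicted, 1), vms_needed)   # 157.9 4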

  20. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making an AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general-purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocation of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster, and mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.

  1. Neural-Network-Development Program

    NASA Technical Reports Server (NTRS)

    Phillips, Todd A.

    1993-01-01

    NETS, software tool for development and evaluation of neural networks, provides simulation of neural-network algorithms plus computing environment for development of such algorithms. Uses back-propagation learning method for all of networks it creates. Enables user to customize patterns of connections between layers of network. Also provides features for saving, during learning process, values of weights, providing more-precise control over learning process. Written in ANSI standard C language. Machine-independent version (MSC-21588) includes only code for command-line-interface version of NETS 3.0.

  2. Alumina-zirconia machinable abutments for implant-supported single-tooth anterior crowns.

    PubMed

    Sadoun, M; Perelmuter, S

    1997-01-01

    Innovative materials and application techniques are constantly being developed in the ongoing search for improved restorations. This article describes a new material and the fabrication process of aesthetic machinable ceramic anterior implant abutments. The ceramic material utilized is a mixture of alumina (aluminum oxide) and ceria (cerium oxide) with partially stabilized zirconia (zirconium oxide). The initial core material is a cylinder with a 9-mm diameter and a 15-mm height, obtained by ceramic injection and presintering processes. The resultant alumina-zirconia core is porous and readily machinable. It is secured to the analog, and its design is customized by machining the abutment to suit the particular clinical circumstances. The machining is followed by glass infiltration, and the crown is finalized. The learning objective of this article is to gain a basic knowledge of the fabrication and clinical application of the custom machinable abutments.

  3. Spending at mobile fruit and vegetable carts and using SNAP benefits to pay, Bronx, New York, 2013 and 2014.

    PubMed

    Breck, Andrew; Kiszko, Kamila M; Abrams, Courtney; Elbel, Brian

    2015-06-04

    This study examines purchases at fruit and vegetable carts and evaluates the potential benefits of expanding the availability of electronic benefit transfer machines at Green Carts. Customers at 4 Green Carts in the Bronx, New York, were surveyed in 3 waves from June 2013 through July 2014. Customers who used Supplemental Nutrition Assistance Program benefits spent on average $3.86 more than customers who paid with cash. This finding suggests that there may be benefits to increasing the availability of electronic benefit transfer machines at Green Carts.

  4. Development of a wearable measurement and control unit for personal customizing machine-supported exercise.

    PubMed

    Wang, Zhihui; Tamura, Naoki; Kiryu, Tohru

    2005-01-01

    Wearable technology has been used in various health-related fields to develop advanced monitoring solutions. However, the monitoring function alone cannot meet all the requirements of personally customized machine-supported exercise with biosignal-based control. In this paper, we propose a new wearable unit design equipped with measurement and control functions to support the personal customization process. The wearable unit can measure heart rate and electromyogram signals during exercise and output workload control commands to the exercise machines. We then applied a prototype of the wearable unit to an Internet-based cycle ergometer system. The wearable unit was tested with twelve young participants to check its feasibility. The results verified that the unit could successfully adapt the workload control and was effective for continuously supporting gradual changes in physical activity.
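    The record does not specify the control law; a common scheme for biosignal-based workload control is a proportional adjustment toward a target heart rate. A sketch under that assumption (target, gain, and limits are invented):

        def workload_command(heart_rate, workload_w, target_hr=120, gain=0.5,
                             w_min=20, w_max=200):
            """Proportional workload adjustment toward a target heart rate.

            If the measured heart rate is above target, ease off the ergometer
            resistance; if below, increase it. Returns the new workload in watts.
            """
            error = target_hr - heart_rate
            new_w = workload_w + gain * error
            return max(w_min, min(w_max, new_w))

        # One control tick: HR of 132 bpm at 100 W -> command lowers resistance.
        print(workload_command(132, 100))   # 94.0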

  5. The laser micro-machining system for diamond anvil cell experiments and general precision machining applications at the High Pressure Collaborative Access Team

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hrubiak, Rostislav; Sinogeikin, Stanislav; Rod, Eric

    We have designed and constructed a new system for micro-machining parts and sample assemblies used for diamond anvil cells and general user operations at the High Pressure Collaborative Access Team, sector 16 of the Advanced Photon Source. The new micro-machining system uses a pulsed laser of 400 ps pulse duration, ablating various materials without thermal melting and thus leaving a clean edge. With optics designed for a tight focus, the system can machine holes of any size larger than 3 μm in diameter. Unlike a standard electrical discharge machining drill, the new laser system allows micro-machining of non-conductive materials such as amorphous boron and silicon carbide gaskets, diamond, oxides, and other materials, including organic materials such as polyimide films (i.e., Kapton). An important feature of the new system is the use of gas-tight or gas-flow environmental chambers, which allow the laser micro-machining to be done in a controlled (e.g., inert gas) atmosphere to prevent oxidation and other chemical reactions in air-sensitive materials. The gas-tight workpiece enclosure is also useful for machining materials with known health risks (e.g., beryllium). Specialized control software with a graphical interface enables micro-machining of custom 2D and 3D shapes. The laser-machining system was designed in a Class 1 laser enclosure, i.e., it includes laser safety interlocks and computer controls, and allows for routine operation. Though initially designed mainly for machining diamond anvil cell gaskets, the laser-machining system has since found many other micro-machining applications, several of which are presented here.

  6. Neural networks and applications tutorial

    NASA Astrophysics Data System (ADS)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating their potential use, and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots that go far back in history. Nowadays, technological advances in computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics, and parallel computation has spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors, and optimizers) and parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real-world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures, which were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.
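    Lecture 2 covers the classic learning rules. For reference, the perceptron rule updates the weights only on misclassified samples; a minimal sketch on toy data:

        import numpy as np

        def train_perceptron(X, y, lr=0.1, epochs=20):
            """Classic perceptron learning rule: w += lr * y * x on mistakes.

            X: (n, d) inputs; y: labels in {-1, +1}. Returns (weights, bias).
            """
            w, b = np.zeros(X.shape[1]), 0.0
            for _ in range(epochs):
                for xi, yi in zip(X, y):
                    if yi * (w @ xi + b) <= 0:   # misclassified (or on boundary)
                        w += lr * yi * xi
                        b += lr * yi
            return w, b

        # Linearly separable toy problem: class given by the sign of x1.
        X = np.array([[2.0, 1.0], [1.5, -0.5], [-1.0, 0.5], [-2.0, -1.0]])
        y = np.array([1, 1, -1, -1])
        w, b = train_perceptron(X, y)
        print(np.sign(X @ w + b))   # [ 1.  1. -1. -1.]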

  7. Customer demand prediction of service-oriented manufacturing using the least square support vector machine optimized by particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Jin; Jiang, Zhibin; Wang, Kangzhou

    2017-07-01

    Many nonlinear customer satisfaction-related factors significantly influence future customer demand for service-oriented manufacturing (SOM). To address this issue and enhance prediction accuracy, this article develops a novel customer demand prediction approach for SOM. The approach combines the phase space reconstruction (PSR) technique with an optimized least square support vector machine (LSSVM). First, the prediction sample space is reconstructed by the PSR to enrich the time-series dynamics of the limited data sample. Then, the generalization and learning ability of the LSSVM are improved by a hybrid polynomial and radial basis function kernel. Finally, the key parameters of the LSSVM are optimized by the particle swarm optimization algorithm. In a real case study, customer demand prediction for an air conditioner compressor is implemented. Furthermore, the effectiveness and validity of the proposed approach are demonstrated by comparison with other classical prediction approaches.
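    Phase space reconstruction turns the scalar demand series into delay vectors (a Takens-style embedding) so the regressor sees the time-series dynamics; a minimal sketch of that preprocessing step (embedding dimension and delay are illustrative):

        import numpy as np

        def phase_space_reconstruct(series, dim=3, tau=1):
            """Build delay-embedding samples from a scalar time series.

            Each row is [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]; the regression
            target is the next value x(t + dim*tau).
            """
            series = np.asarray(series, dtype=float)
            n = len(series) - dim * tau
            X = np.column_stack([series[i * tau : i * tau + n]
                                 for i in range(dim)])
            y = series[dim * tau : dim * tau + n]
            return X, y

        demand = [10, 12, 15, 14, 17, 20, 19, 23, 25, 24]   # toy monthly demand
        X, y = phase_space_reconstruct(demand, dim=3, tau=1)
        print(X[0], "->", y[0])   # [10. 12. 15.] -> 14.0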

  8. Design Control Systems of Human Machine Interface in the NTVS-2894 Seat Grinder Machine to Increase the Productivity

    NASA Astrophysics Data System (ADS)

    Ardi, S.; Ardyansyah, D.

    2018-02-01

    In the manufacturing of automotive spare parts, increased vehicle sales have resulted in increased customer demand for engine-valve production. To meet this demand, we carried out an improvement and overhaul of the NTVS-2894 seat grinder machine on a machining line. The machine had been suffering from decreased productivity and growing amounts of trouble and downtime. The overhaul covered both mechanical components and programs, including the design and manufacture of an HMI (Human Machine Interface) program for the GP-4501T. Before the overhaul, the NTVS-2894 seat grinder machine had no backup of its HMI program. The goals of designing and building this program were to improve production output and to allow operators to run the machine and troubleshoot it more easily, thereby reducing downtime on the NTVS-2894 seat grinder machine. After the redesign, the HMI program was successfully rebuilt, machine productivity increased by 34.8%, and the amount of trouble and downtime decreased by about 40%, with downtime falling from 3,160 minutes to 1,700 minutes. In practice, our design makes the machine easier for operators to run and for technicians to maintain and troubleshoot.

  9. Electronic Nose and Electronic Tongue

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Nabarun; Bandhopadhyay, Rajib

    Human beings have five senses, namely, vision, hearing, touch, smell and taste. Sensors for vision, hearing, and touch have been under development for many years. The need for sensors capable of mimicking the senses of smell and taste has been felt only recently in the food industry, environmental monitoring, and several industrial applications. In the ever-widening horizon of frontier research in the field of electronics and advanced computing, the emergence of the electronic nose (E-Nose) and electronic tongue (E-Tongue) has been drawing the attention of scientists and technologists for more than a decade. By intelligent integration of multiple technologies like chemometrics, microelectronics, and advanced soft computing, human olfaction has been successfully mimicked by new techniques collectively called machine olfaction (Pearce et al. 2002). The very essence of such research and development efforts has centered on the development of customized electronic nose and electronic tongue solutions specific to individual applications. In fact, current research trends clearly indicate that a machine olfaction system as versatile, universal, and broadband as the human nose and tongue may not be feasible in the decades to come. Application-specific solutions, however, can be demonstrated and commercialized by adapting sensor design and fine-tuning the soft-computing methods. This chapter deals with the theory and development of E-Nose and E-Tongue technology and their applications. A succinct account of future R&D trends in this field, aimed at establishing a correlation between machine olfaction and human perception, is also included.

  10. Carbon Nanotube Growth Rate Regression using Support Vector Machines and Artificial Neural Networks

    DTIC Science & Technology

    2014-03-27

    …intensity D peak. Reprinted with permission from [38]. The SVM classifier is trained using custom-written Java code leveraging the Sequential Minimal Optimization… Encog is a machine learning framework for Java, C++ and .Net applications that supports Bayesian Networks, Hidden Markov Models, SVMs and ANNs [13]… SVM classifiers are trained using Weka libraries and leveraging custom-written Java code. The data set is created as an Attribute Relationship File…

  11. Improving Energy Efficiency in CNC Machining

    NASA Astrophysics Data System (ADS)

    Pavanaskar, Sushrut S.

    We present our work on analyzing and improving the energy efficiency of multi-axis CNC milling process. Due to the differences in energy consumption behavior, we treat 3- and 5-axis CNC machines separately in our work. For 3-axis CNC machines, we first propose an energy model that estimates the energy requirement for machining a component on a specified 3-axis CNC milling machine. Our model makes machine-specific predictions of energy requirements while also considering the geometric aspects of the machining toolpath. Our model - and the associated software tool - facilitate direct comparison of various alternative toolpath strategies based on their energy-consumption performance. Further, we identify key factors in toolpath planning that affect energy consumption in CNC machining. We then use this knowledge to propose and demonstrate a novel toolpath planning strategy that may be used to generate new toolpaths that are inherently energy-efficient, inspired by research on digital micrography -- a form of computational art. For 5-axis CNC machines, the process planning problem consists of several sub-problems that researchers have traditionally solved separately to obtain an approximate solution. After illustrating the need to solve all sub-problems simultaneously for a truly optimal solution, we propose a unified formulation based on configuration space theory. We apply our formulation to solve a problem variant that retains key characteristics of the full problem but has lower dimensionality, allowing visualization in 2D. Given the complexity of the full 5-axis toolpath planning problem, our unified formulation represents an important step towards obtaining a truly optimal solution. With this work on the two types of CNC machines, we demonstrate that without changing the current infrastructure or business practices, machine-specific, geometry-based, customized toolpath planning can save energy in CNC machining.
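    The record does not give the energy model's form; machine-specific models of this kind typically integrate constant (idle) and cutting power over the toolpath segments, plus a penalty for direction changes. An illustrative Python sketch with all coefficients invented:

        import math

        def toolpath_energy(segments, feed_mm_s, p_idle_w=800.0, p_cut_w=350.0,
                            p_corner_j=40.0):
            """Rough energy estimate (joules) for a 3-axis milling toolpath.

            segments: list of ((x0, y0), (x1, y1)) cut moves in mm.
            Idle and cutting power are drawn for each segment's duration, plus
            a fixed energy penalty per direction change (corners force the
            axes to decelerate and re-accelerate).
            """
            energy = 0.0
            for (x0, y0), (x1, y1) in segments:
                length = math.hypot(x1 - x0, y1 - y0)
                energy += (p_idle_w + p_cut_w) * (length / feed_mm_s)
            energy += p_corner_j * max(0, len(segments) - 1)
            return energy

        # Compare alternative strategies covering the same area by total energy.
        zigzag = [((0, y), (100, y)) for y in range(0, 50, 5)]
        print(round(toolpath_energy(zigzag, feed_mm_s=20.0), 1), "J")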

  12. Performance of the fusion code GYRO on four generations of Cray computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahey, Mark R

    2014-01-01

    GYRO is a code used for the direct numerical simulation of plasma microturbulence. It has been ported to a variety of modern MPP platforms, including several modern commodity clusters, IBM SPs, and Cray XC, XT, and XE series machines. We briefly describe the mathematical structure of the equations, the data layout, and the redistribution scheme. Also, while the performance and scaling of GYRO on many of these systems have been shown before, here we show the comparative performance and scaling on four generations of Cray supercomputers, including the newest addition, the Cray XC30. The more recently added hybrid OpenMP/MPI implementation also shows a great deal of promise on custom HPC systems that utilize fast CPUs and proprietary interconnects. Four machines of varying sizes were used in the experiment, all of which are located at the National Institute for Computational Sciences at the University of Tennessee at Knoxville and Oak Ridge National Laboratory. The advantages, limitations, and performance of using each system are discussed.

  13. 76 FR 81518 - Notice of Issuance of Final Determination Concerning Laser-Based Multi-Function Office Machines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-28

    ... Determination Concerning Laser-Based Multi-Function Office Machines AGENCY: U.S. Customs and Border Protection... country of origin of laser-based multi-function office machines. Based upon the facts presented, CBP has... essential character of the laser-based multi-function office machine, and it is at their assembly and...

  14. A method for using solid modeling CAD software to create an implant library for the fabrication of a custom abutment.

    PubMed

    Zhang, Jing; Zhang, Rimei; Ren, Guanghui; Zhang, Xiaojie

    2017-02-01

    This article describes a method that incorporates the solid modeling CAD software Solidworks with a dental milling machine to fabricate individual abutments in house. The process involves creating an implant library with 3-dimensional (3D) models and manufacturing a base, scan element, abutment, and crown anatomy. The 3D models can be imported into any dental computer-aided design and computer-aided manufacturing (CAD-CAM) system. This platform increases abutment design flexibility, as the base and scan elements can be designed to fit several shapes as needed to meet clinical requirements. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  15. The Tera Multithreaded Architecture and Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.; Mavriplis, Dimitri J.

    1998-01-01

    The Tera Multithreaded Architecture (MTA) is a new parallel supercomputer currently being installed at San Diego Supercomputing Center (SDSC). This machine has an architecture quite different from contemporary parallel machines. The computational processor is a custom design and the machine uses hardware to support very fine grained multithreading. The main memory is shared, hardware randomized and flat. These features make the machine highly suited to the execution of unstructured mesh problems, which are difficult to parallelize on other architectures. We report the results of a study carried out during July-August 1998 to evaluate the execution of EUL3D, a code that solves the Euler equations on an unstructured mesh, on the 2 processor Tera MTA at SDSC. Our investigation shows that parallelization of an unstructured code is extremely easy on the Tera. We were able to get an existing parallel code (designed for a shared memory machine), running on the Tera by changing only the compiler directives. Furthermore, a serial version of this code was compiled to run in parallel on the Tera by judicious use of directives to invoke the "full/empty" tag bits of the machine to obtain synchronization. This version achieves 212 and 406 Mflop/s on one and two processors respectively, and requires no attention to partitioning or placement of data issues that would be of paramount importance in other parallel architectures.

  16. Embedded systems for supporting computer accessibility.

    PubMed

    Mulfari, Davide; Celesti, Antonio; Fazio, Maria; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Nowadays, customized assistive technology (AT) software solutions allow their users to interact with various kinds of computer systems. Such tools are generally available on personal devices (e.g., smartphones, laptops, and so on) commonly used by a person with a disability. In this paper, we investigate a way of using such AT equipment to access many different devices without configuring assistive preferences on each of them. The solution takes advantage of open-source hardware, and its core component is an affordable embedded Linux system: it grabs data coming from the assistive software, which runs on the user's personal device, and then, after processing, generates native keyboard and mouse HID commands for the target computing device controlled by the end user. This process supports any operating system available on the target machine and requires no specialized software installation; therefore, the user with a disability can rely on a single assistive tool to control a wide range of computing platforms, including conventional computers and many kinds of mobile devices, which receive input commands through the USB HID protocol.

  17. Computing ordinary least-squares parameter estimates for the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2013-01-01

    A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time and with less computer memory than general methods require. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory, and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires O(8p² + 16p) bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
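    A minimal numpy sketch of the single-pass idea for generic weighted least squares (not the NDMMF-specific layout): accumulate X'X and X'y one observation at a time, so X itself is never materialized.

        import numpy as np

        def streaming_wls(observations, p):
            """Weighted OLS via one pass over (x, y, w) observations.

            Only the p x p matrix X'WX and p-vector X'Wy are kept in memory;
            the full design matrix X is never built.
            """
            XtWX = np.zeros((p, p))
            XtWy = np.zeros(p)
            for x, y, w in observations:
                XtWX += w * np.outer(x, x)
                XtWy += w * y * x
            return np.linalg.solve(XtWX, XtWy)   # solve the normal equations

        # Toy stream: y = 2*x1 - 1*x2 + noise, unit weights.
        rng = np.random.default_rng(0)
        obs = []
        for _ in range(1000):
            x = rng.normal(size=2)
            obs.append((x, 2 * x[0] - x[1] + rng.normal(scale=0.01), 1.0))
        print(streaming_wls(obs, p=2))   # approximately [ 2. -1.]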

  18. 12 CFR 12.102 - National bank use of electronic communications as customer notifications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 1 2010-01-01 2010-01-01 false National bank use of electronic communications... Interpretations § 12.102 National bank use of electronic communications as customer notifications. (a) In... 12.5 through electronic communications. Where a customer has a facsimile machine, a national bank may...

  19. Liquid lens: advances in adaptive optics

    NASA Astrophysics Data System (ADS)

    Casey, Shawn Patrick

    2010-12-01

    'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost-effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.

  20. ATLAS computing on CSCS HPC

    NASA Astrophysics Data System (ADS)

    Filipcic, A.; Haug, S.; Hostettler, M.; Walker, R.; Weber, M.

    2015-12-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was the highest ranked European system on TOP500 in 2014, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular a custom made integration to the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, a partial GPU acceleration of the Geant4 detector simulations has been implemented.

  1. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application.

    PubMed

    Hanwell, Marcus D; de Jong, Wibe A; Harris, Christopher J

    2017-10-30

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop, and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources and offer command-line tools to automate interaction, connecting distributed teams to this software platform on their own terms. The platform was developed openly, with all source code hosted on the GitHub platform and automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data are stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages and send data to a JavaScript-based web client.

  2. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application

    DOE PAGES

    Hanwell, Marcus D.; de Jong, Wibe A.; Harris, Christopher J.

    2017-10-30

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop, and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources and offer command-line tools to automate interaction, connecting distributed teams to this software platform on their own terms. The platform was developed openly, with all source code hosted on the GitHub platform and automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data are stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages and send data to a JavaScript-based web client.

  3. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanwell, Marcus D.; de Jong, Wibe A.; Harris, Christopher J.

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop, and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources and offer command-line tools to automate interaction, connecting distributed teams to this software platform on their own terms. The platform was developed openly, with all source code hosted on the GitHub platform and automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data are stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages and send data to a JavaScript-based web client.

  4. VLSI processors for signal detection in SETI

    NASA Technical Reports Server (NTRS)

    Duluk, J. F.; Linscott, I. R.; Peterson, A. M.; Burr, J.; Ekroot, B.; Twicken, J.

    1989-01-01

    The objective of the Search for Extraterrestrial Intelligence (SETI) is to locate an artificially created signal coming from a distant star. This is done in two steps: (1) spectral analysis of an incoming radio frequency band, and (2) pattern detection for narrow-band signals. Both steps are computationally expensive and require the development of specially designed computer architectures. To reduce the size and cost of the SETI signal detection machine, two custom VLSI chips are under development. The first chip, the SETI DSP Engine, is used in the spectrum analyzer and is specially designed to compute Discrete Fourier Transforms (DFTs). It is a high-speed arithmetic processor that has two adders, one multiplier-accumulator, and three four-port memories. The second chip is a new type of Content-Addressable Memory. It is the heart of an associative processor that is used for pattern detection. Both chips incorporate many innovative circuits and architectural features.

  5. VLSI processors for signal detection in SETI.

    PubMed

    Duluk, J F; Linscott, I R; Peterson, A M; Burr, J; Ekroot, B; Twicken, J

    1989-01-01

    The objective of the Search for Extraterrestrial Intelligence (SETI) is to locate an artificially created signal coming from a distant star. This is done in two steps: (1) spectral analysis of an incoming radio frequency band, and (2) pattern detection for narrow-band signals. Both steps are computationally expensive and require the development of specially designed computer architectures. To reduce the size and cost of the SETI signal detection machine, two custom VLSI chips are under development. The first chip, the SETI DSP Engine, is used in the spectrum analyzer and is specially designed to compute Discrete Fourier Transforms (DFTs). It is a high-speed arithmetic processor that has two adders, one multiplier-accumulator, and three four-port memories. The second chip is a new type of Content-Addressable Memory. It is the heart of an associative processor that is used for pattern detection. Both chips incorporate many innovative circuits and architectural features.

  6. Application of the rapid prototyping technique to design a customized temporomandibular joint used to treat temporomandibular ankylosis

    PubMed Central

    Chaware, Suresh M.; Bagaria, Vaibhav; Kuthe, Abhay

    2009-01-01

    Anthropometric variations in humans make it difficult to replace a temporomandibular joint (TMJ) successfully using a standard "one-size-fits-all" prosthesis. This case report presents a unique concept of total TMJ replacement with a customized and modified TMJ prosthesis, which is cost-effective and provides the best fit for the patient. The process of designing the prosthesis and the modifications over existing designs are also described. A 12-year-old female who presented for treatment of left unilateral TMJ ankylosis underwent surgery for total TMJ replacement. A three-dimensional computed tomography (CT) scan suggested features of bony ankylosis of the left TMJ. CT images were converted to a stereolithographic model using CAD software and a rapid prototyping machine. A process of rapid manufacturing was then used to manufacture the customized prosthesis. Postoperative recovery was uneventful, with an improvement in mouth opening of 3.5 cm and painless jaw movements. Three years post-surgery, the patient is pain-free, has a mouth opening of about 4.0 cm, and enjoys a normal diet. The postoperative radiographs concur with the excellent clinical results. The use of the CAD/CAM technique to design the custom-made prosthesis, using orthopaedically proven structural materials, significantly improves the predictability and success rates of TMJ replacement surgery. PMID:19881026

  7. 26 CFR 1.864-4 - U.S. source income effectively connected with U.S. business.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... customers in the United States for the machine tools manufactured by that corporation. All negotiations with... the conduct of a business in the United States by M. Occasionally, during 1968 the customers in the... customers without routing the transactions through its branch office in the United States. The income or...

  8. AIRE-Linux

    NASA Astrophysics Data System (ADS)

    Zhou, Jianfeng; Xu, Benda; Peng, Chuan; Yang, Yang; Huo, Zhuoxi

    2015-08-01

    AIRE-Linux is a dedicated Linux system for astronomers. Modern astronomy faces two big challenges: massive volumes of raw observational data covering the whole electromagnetic spectrum, and data-processing demands that exceed the professional skills of an individual or even a small team. AIRE-Linux, a specially designed Linux distribution delivered to users as Virtual Machine (VM) images in Open Virtualization Format (OVF), is intended to help astronomers confront these challenges. Most astronomical software packages, such as IRAF, MIDAS, CASA, Heasoft etc., will be integrated into AIRE-Linux. It is easy for astronomers to configure and customize the system and use just what they need. When incorporated into cloud computing platforms, AIRE-Linux will be able to handle data-intensive and compute-intensive tasks for astronomers. Currently, a Beta version of AIRE-Linux is ready for download and testing.

  9. Computer-assisted generation of individual training concepts for advanced education in manufacturing metrology

    NASA Astrophysics Data System (ADS)

    Werner, Teresa; Weckenmann, Albert

    2010-05-01

    Due to increasing requirements for the accuracy and reproducibility of measurement results, together with the rapid development of novel measurement technologies, there is high demand for adequately qualified metrologists. Accordingly, a variety of training offers are provided by machine manufacturers, universities, and other institutions. Yet it is very difficult for an interested learner to define an optimal training schedule for his or her individual needs. Therefore, a computer-based assistance tool is being developed to support demand-responsive scheduling of training. Based on the difference between the actual and intended competence profiles, and taking supplementary requirements into account, an optimally customized qualification concept is derived. To this end, available training offers are categorized along several dimensions: course content, intended target groups, focus of the imparted competences, implemented methods of learning and teaching, expected learning constraints, and necessary prior knowledge. After completing a course, the achieved competences and the transferability of the acquired knowledge are evaluated. Based on the results, recommendations for supplementary learning measures are provided. Thus, customized qualification for manufacturing metrology is facilitated, adapted to the specific needs and constraints of each individual learner.

  10. Initial experience with custom-fit total knee replacement: intra-operative events and long-leg coronal alignment.

    PubMed

    Spencer, Brian A; Mont, Michael A; McGrath, Mike S; Boyd, Bradley; Mitrick, Michael F

    2009-12-01

    New technology using magnetic resonance imaging (MRI) allows the surgeon to place total knee replacement components into each patient's pre-arthritic natural alignment. This study evaluated the initial intra-operative experience using this technique. Twenty-one patients had a sagittal MRI of their arthritic knee to determine component placement for a total knee replacement. Cutting guides were machined to control all intra-operative cuts. Intra-operative events were recorded and these knees were compared to a matching cohort of the senior surgeon's previous 30 conventional total knee replacements. Post-operative scanograms were obtained from each patient and coronal alignment was compared to previous studies using conventional and computer-assisted techniques. There were no intra-operative or acute post-operative complications. There were no differences in blood loss and there was a mean decrease in operative time of 14% compared to a cohort of patients with conventional knee replacements. The average deviation from the mechanical axis was 1.2 degrees of varus, which was comparable to previously reported conventional and computer-assisted techniques. Custom-fit total knee replacement appeared to be a safe procedure for uncomplicated cases of osteoarthritis.

  11. Abstract quantum computing machines and quantum computational logics

    NASA Astrophysics Data System (ADS)

    Chiara, Maria Luisa Dalla; Giuntini, Roberto; Sergioli, Giuseppe; Leporini, Roberto

    2016-06-01

    Classical and quantum parallelism are deeply different, although it is sometimes claimed that quantum Turing machines are nothing but special examples of classical probabilistic machines. We introduce the concepts of deterministic state machine, classical probabilistic state machine and quantum state machine. On this basis, we discuss the question: To what extent can quantum state machines be simulated by classical probabilistic state machines? Each state machine is devoted to a single task determined by its program. Real computers, however, behave differently, being able to solve different kinds of problems. This capacity can be modeled, in the quantum case, by the mathematical notion of abstract quantum computing machine, whose different programs determine different quantum state machines. The computations of abstract quantum computing machines can be linguistically described by the formulas of a particular form of quantum logic, termed quantum computational logic.

  12. Speed-Selector Guard For Machine Tool

    NASA Technical Reports Server (NTRS)

    Shakhshir, Roda J.; Valentine, Richard L.

    1992-01-01

    Simple guardplate prevents accidental reversal of direction of rotation or sudden change of speed of lathe, milling machine, or other machine tool. Custom-made for specific machine and control settings. Allows control lever to be placed at only one setting. Operator uses handle to slide guard to engage or disengage control lever. Protects personnel from injury and equipment from damage occurring if speed- or direction-control lever inadvertently placed in wrong position.

  13. Design of a hydraulic bending machine

    Treesearch

    Steven G. Hankel; Marshall Begel

    2004-01-01

    To keep pace with customer demands while phasing out old and unserviceable test equipment, the staff of the Engineering Mechanics Laboratory (EML) at the USDA Forest Service, Forest Products Laboratory, designed and assembled a hydraulic bending test machine. The EML built this machine to test dimension lumber, nominal 2 in. thick and up to 12 in. deep, at spans up to...

  14. Providing Assistive Technology Applications as a Service Through Cloud Computing.

    PubMed

    Mulfari, Davide; Celesti, Antonio; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Users with disabilities interact with Personal Computers (PCs) using Assistive Technology (AT) software solutions. Such applications run on a PC that a person with a disability commonly uses. However, the configuration of AT applications is not trivial at all, especially whenever the user needs to work on a PC that does not allow him/her to rely on his/her AT tools (e.g., at work, at university, in an Internet point). In this paper, we discuss how cloud computing provides a valid technological solution to enhance such a scenario. With the emergence of cloud computing, many applications are executed on top of virtual machines (VMs). Virtualization allows us to achieve a software implementation of a real computer able to execute a standard operating system and any kind of application. In this paper we propose to build personalized VMs running AT programs and settings. By using remote desktop technology, our solution enables users to control their customized virtual desktop environment by means of an HTML5-based web interface running on any computer equipped with a browser, wherever they are.

  15. Cranioplasty Enhanced by Three-Dimensional Printing: Custom-Made Three-Dimensional-Printed Titanium Implants for Skull Defects.

    PubMed

    Park, Eun-Kyung; Lim, Jun-Young; Yun, In-Sik; Kim, Ju-Seong; Woo, Su-Heon; Kim, Dong-Seok; Shim, Kyu-Won

    2016-06-01

    The authors conducted this study to demonstrate the efficacy of custom-made three-dimensional (3D)-printed titanium implants for reconstructing skull defects. From 2013 to 2015, 21 patients (8-62 years old, mean = 28.6 years old; 11 females and 10 males) with skull defects were treated. Total disease duration ranged from 6 to 168 months (mean = 33.6 months). The size of skull defects ranged from 84 × 104 to 154 × 193 mm. Custom-made implants were manufactured by Medyssey Co, Ltd (Jecheon, South Korea) using 3D computed tomography data, Mimics software, and an electron beam melting machine. The team reviewed several different designs and simulated surgery using a 3D skull model. During the operation, the implant was fitted to the defect without dead space. Operation times ranged from 85 to 180 minutes (mean = 115.7 minutes). Operative sites healed without any complications except in 1 patient, who had red swelling with exudation at a skin defect; this was a skin infection and defect at the center of the scalp flap that had recurred since the initial head injury. This patient underwent reoperation for skin defect revision and replacement of the implant. The 21 patients were followed for 6 to 24 months (mean = 14.1 months). The patients were satisfied and had no recurrent wound problems. Head computed tomography after the operation showed good fixation of the titanium implants and satisfactory skull-shape symmetry. For the reconstruction of skull defects, the use of autologous bone grafts has been the treatment of choice. However, bone use depends on availability, defect size, and donor morbidity. As 3D printing techniques advance further, it is becoming possible to manufacture custom-made 3D titanium implants for skull reconstruction.

  16. Configuration Management and Infrastructure Monitoring Using CFEngine and Icinga for Real-time Heterogeneous Data Taking Environment

    NASA Astrophysics Data System (ADS)

    Poat, M. D.; Lauret, J.; Betts, W.

    2015-12-01

    The STAR online computing environment is an intensive, ever-growing system used for real-time data collection and analysis. Composed of heterogeneous and sometimes custom-tuned groups of machines, the computing infrastructure was previously managed by manual configuration and inconsistently monitored by a combination of tools. This situation led to configuration inconsistency and an overload of repetitive tasks, along with lackluster communication between personnel and machines. Globally securing this heterogeneous cyberinfrastructure was tedious at best, so an agile, policy-driven system ensuring consistency was pursued. Three configuration management tools, Chef, Puppet, and CFEngine, were compared for reliability, versatility and performance, along with a comparison of the infrastructure monitoring tools Nagios and Icinga. STAR has selected the CFEngine configuration management tool and the Icinga infrastructure monitoring system, leading to a versatile and sustainable solution. By leveraging these two tools STAR can now swiftly upgrade and modify the environment to its needs with ease, as well as promptly react to cyber-security requests. By creating a sustainable long-term monitoring solution, the detection of failures was reduced from days to minutes, allowing rapid action before issues become dire problems, potentially causing loss of precious experimental data or uptime.

  17. Cloud-Based Tools to Support High-Resolution Modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Jones, N.; Nelson, J.; Swain, N.; Christensen, S.

    2013-12-01

    The majority of watershed models developed to support decision-making by water management agencies are simple, lumped-parameter models. Maturity in research codes and advances in the computational power from multi-core processors on desktop machines, commercial cloud-computing resources, and supercomputers with thousands of cores have created new opportunities for employing more accurate, high-resolution distributed models for routine use in decision support. The barriers to using such models on a more routine basis include the massive amounts of spatial data that must be processed for each new scenario and the lack of efficient visualization tools. In this presentation we will review a current NSF-funded project called CI-WATER that is intended to overcome many of these roadblocks associated with high-resolution modeling. We are developing a suite of tools that will make it possible to deploy customized web-based apps for running custom scenarios for high-resolution models with minimal effort. These tools are based on a software stack that includes 52 North, MapServer, PostGIS, HT Condor, CKAN, and Python. This open source stack provides a simple scripting environment for quickly configuring new custom applications for running high-resolution models as geoprocessing workflows. The HT Condor component facilitates simple access to local distributed computers or commercial cloud resources when necessary for stochastic simulations. The CKAN framework provides a powerful suite of tools for hosting such workflows in a web-based environment that includes visualization tools and storage of model simulations in a database for archival, querying, and sharing of model results. Prototype applications including land use change, snow melt, and burned area analysis will be presented. This material is based upon work supported by the National Science Foundation under Grant No. 1135482.

  18. CS651 Computer Systems Security Foundations 3d Imagination Cyber Security Management Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nielsen, Roy S.

    3d Imagination is a new company that bases its business on selling and improving 3d open source related hardware. The devices that they sell include 3d imagers, 3d printers, pick and place machines and laser etchers. They have a fast company intranet for ease in sharing, storing and printing large, complex 3d designs. They have an employee set that requires a variety of operating systems, including Windows, Mac and several Linux distributions, both for running business services and for design and test machines. There are a wide variety of private networks for testing transfer rates to and from the 3d devices without interference with other network traffic. They do video conferencing with customers and other designers. One of their machines is based on the project found at delta.firepick.org (Krassenstein, 2014; Biggs, 2014), which, in future, will perform most of those functions. Their devices all include embedded systems that may run full-blown operating systems. Most of their systems are designed to have swappable parts, so when a new technology is born, it can be quickly adopted by people with 3d Imagination hardware. This company is producing a fair number of systems and components; however, to mass produce quality parts they need more funding, so they are preparing for an IPO to raise the funds they need. They would like to have a cyber-security audit performed so they can give their investors confidence that they are protecting their data, customer information and printers in a proactive manner.

  19. Cheaper Custom Shielding Cups For Arc Welding

    NASA Technical Reports Server (NTRS)

    Morgan, Gene E.

    1992-01-01

    New way of making special-purpose shielding cups for gas/tungsten arc welding from hobby ceramic greatly reduces cost. Pattern machined in plastic. Plaster-of-paris mold made, and liquid ceramic poured into mold. Cost 90 percent less than cup machined from lava rock.

  20. 27 CFR 479.117 - Action by Customs.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 3 2011-04-01 2010-04-01 true Action by Customs. 479.117 Section 479.117 Alcohol, Tobacco Products, and Firearms BUREAU OF ALCOHOL, TOBACCO, FIREARMS, AND EXPLOSIVES, DEPARTMENT OF JUSTICE FIREARMS AND AMMUNITION MACHINE GUNS, DESTRUCTIVE DEVICES, AND CERTAIN...

  1. 27 CFR 479.117 - Action by Customs.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 3 2014-04-01 2014-04-01 false Action by Customs. 479.117 Section 479.117 Alcohol, Tobacco Products, and Firearms BUREAU OF ALCOHOL, TOBACCO, FIREARMS, AND EXPLOSIVES, DEPARTMENT OF JUSTICE FIREARMS AND AMMUNITION MACHINE GUNS, DESTRUCTIVE DEVICES, AND CERTAIN...

  2. 27 CFR 479.117 - Action by Customs.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 3 2012-04-01 2010-04-01 true Action by Customs. 479.117 Section 479.117 Alcohol, Tobacco Products, and Firearms BUREAU OF ALCOHOL, TOBACCO, FIREARMS, AND EXPLOSIVES, DEPARTMENT OF JUSTICE FIREARMS AND AMMUNITION MACHINE GUNS, DESTRUCTIVE DEVICES, AND CERTAIN...

  3. 27 CFR 479.117 - Action by Customs.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 3 2010-04-01 2010-04-01 false Action by Customs. 479.117 Section 479.117 Alcohol, Tobacco Products, and Firearms BUREAU OF ALCOHOL, TOBACCO, FIREARMS, AND EXPLOSIVES, DEPARTMENT OF JUSTICE FIREARMS AND AMMUNITION MACHINE GUNS, DESTRUCTIVE DEVICES, AND CERTAIN...

  4. 27 CFR 479.117 - Action by Customs.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 3 2013-04-01 2013-04-01 false Action by Customs. 479.117 Section 479.117 Alcohol, Tobacco Products, and Firearms BUREAU OF ALCOHOL, TOBACCO, FIREARMS, AND EXPLOSIVES, DEPARTMENT OF JUSTICE FIREARMS AND AMMUNITION MACHINE GUNS, DESTRUCTIVE DEVICES, AND CERTAIN...

  5. 47 CFR 76.309 - Customer service obligations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... representatives will be available to respond to customer telephone inquiries during normal business hours. (B) After normal business hours, the access line may be answered by a service or an automated response system, including an answering machine. Inquiries received after normal business hours must be responded...

  6. Compact Microscope Imaging System with Intelligent Controls

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    The figure presents selected views of a compact microscope imaging system (CMIS) that includes a miniature video microscope, a Cartesian robot (a computer-controlled three-dimensional translation stage), and machine-vision and control subsystems. The CMIS was built from commercial off-the-shelf instrumentation, computer hardware and software, and custom machine-vision software. The machine-vision and control subsystems include adaptive neural networks that afford a measure of artificial intelligence. The CMIS can perform several automated tasks with accuracy and repeatability: tasks that, heretofore, have required the full attention of human technicians using relatively bulky conventional microscopes. In addition, the automation and control capabilities of the system inherently include a capability for remote control. Unlike human technicians, the CMIS is not at risk of becoming fatigued or distracted: theoretically, it can perform continuously at the level of the best human technicians. In its capabilities for remote control and for relieving human technicians of tedious routine tasks, the CMIS is expected to be especially useful in biomedical research, materials science, inspection of parts on industrial production lines, and space science. The CMIS can automatically focus on and scan a microscope sample, find areas of interest, record the resulting images, and analyze images from multiple samples simultaneously. Automatic focusing is an iterative process: The translation stage is used to move the microscope along its optical axis in a succession of coarse, medium, and fine steps. A fast Fourier transform (FFT) of the image is computed at each step, and the FFT is analyzed for its spatial-frequency content. The microscope position that results in the greatest dispersal of FFT content toward high spatial frequencies (indicating that the image shows the greatest amount of detail) is deemed to be the focal position.
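
    The focus-metric idea described above is straightforward to sketch in code. The following is a minimal illustration under stated assumptions, not NASA's implementation: capture_image and move_stage are hypothetical stand-ins for the camera and translation-stage interfaces, and the coarse/medium/fine stepping is reduced to a single pass over candidate positions.

    ```python
    import numpy as np

    def focus_metric(image, cutoff_fraction=0.25):
        """Score an image by the share of FFT energy at high spatial frequencies."""
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
        h, w = spectrum.shape
        cy, cx = h // 2, w // 2
        ry, rx = int(h * cutoff_fraction), int(w * cutoff_fraction)
        high = spectrum.copy()
        high[cy - ry:cy + ry, cx - rx:cx + rx] = 0.0  # zero out low frequencies
        return high.sum() / spectrum.sum()  # larger -> more fine detail in focus

    def autofocus(positions, capture_image, move_stage):
        """Step the stage through candidate positions and return the sharpest one."""
        best_pos, best_score = None, -1.0
        for z in positions:
            move_stage(z)                     # hypothetical stage-control call
            score = focus_metric(capture_image())
            if score > best_score:
                best_pos, best_score = z, score
        return best_pos
    ```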

  7. A Study on Breaking Through Applet Security Restrictions in an Internet-Based Visualization System

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Huang, Yan

    In realizing an Internet-based visualization system for protein molecules, the system must allow users to observe molecular structures stored on the local computer; that is, customers can generate three-dimensional graphics from a PDB file on the client computer. This requires the Applet to access local files, which raises the question of Applet security restrictions. This paper covers two realization methods: 1. Use the signature tools, key management tools and Policy Editor provided by the JDK to digitally sign and authenticate the Java Applet, breaking through certain security restrictions in the browser. 2. Use a Servlet agent to implement indirect data access, breaking through the traditional Java Virtual Machine sandbox model's restriction of Applet capabilities. Both ways can break through the Applet's security restrictions, but each has its own strengths.

  8. Decentralized real-time simulation of forest machines

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Adam, Frank; Hoffmann, Katharina; Rossmann, Juergen; Kraemer, Michael; Schluse, Michael

    2000-10-01

    Developing realistic forest machine simulators is a demanding task. A useful simulator has to provide a close-to-reality simulation of the forest environment as well as a simulation of the physics of the vehicle. Customers demand a highly realistic three-dimensional forestry landscape and a realistic simulation of the complex motion of the vehicle, even in rough terrain, in order to be able to use the simulator for operator training under close-to-reality conditions. A realistic simulation of the vehicle, especially with the driver's seat mounted on a motion platform, greatly improves the effect of immersion into the virtual reality of a simulated forest and the achievable level of driver education. Thus, the connection of the real control devices of forest machines to the simulation system has to be supported, i.e. real control devices like the joysticks or the on-board computer system that control the crane, the aggregate etc. Beyond that, the fusion of the on-board computer system and the simulation system is realized by means of sensors, i.e. digital and analog signals. The decentralized system structure allows several virtual reality systems to evaluate and visualize the information from the control devices and the sensors. So, while the driver is practicing, the instructor can immerse into the same virtual forest to monitor the session from his own viewpoint. In this paper, we describe the realized structure as well as the necessary software and hardware components and application experiences.

  9. 14 CFR 1214.801 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  10. 14 CFR 1214.801 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  11. 14 CFR 1214.801 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  12. 14 CFR 1214.801 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  13. 14 CFR § 1214.801 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  14. Periodical capacity setting methods for make-to-order multi-machine production systems

    PubMed Central

    Altendorfer, Klaus; Hübl, Alexander; Jodlbauer, Herbert

    2014-01-01

    The paper presents different periodical capacity setting methods for make-to-order, multi-machine production systems with stochastic customer required lead times and stochastic processing times, aiming to improve service level and tardiness. These methods are developed as decision support for cases where capacity flexibility exists, such as a certain range of possible working hours per week. The methods differ in the amount of information used, but all are based on the cumulated capacity demand at each machine. In a simulation study, the methods’ impact on service level and tardiness is compared to a constant provided capacity for a single and a multi-machine setting. It is shown that the tested capacity setting methods can lead to an increase in service level and a decrease in average tardiness in comparison to a constant provided capacity. The methods using information on the processing time and customer required lead time distributions perform best. The results found in this paper can help practitioners make efficient use of their flexible capacity. PMID:27226649
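
    As a minimal sketch of the simplest method in this family, assume the planner knows the cumulated capacity demand (in hours) per period at one machine and may choose weekly hours within a flexible band; the function name and data below are illustrative placeholders, not the paper's notation:

    ```python
    def set_weekly_capacity(demand_hours, min_hours, max_hours):
        """Provide capacity equal to cumulated demand, clamped to the
        flexible band of allowed working hours per week."""
        return [min(max(d, min_hours), max_hours) for d in demand_hours]

    # Example: demand per week at one machine, with 30-50 allowed hours.
    print(set_weekly_capacity([28, 44, 61, 35], 30, 50))  # -> [30, 44, 50, 35]
    ```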

  15. Composite Material Testing Data Reduction to Adjust for the Systematic 6-DOF Testing Machine Aberrations

    Treesearch

    Athanasios Iliopoulos; John G. Michopoulos; John G. C. Hermanson

    2012-01-01

    This paper describes a data reduction methodology for eliminating the systematic aberrations introduced by the unwanted behavior of a multiaxial testing machine into the massive amounts of experimental data collected from the testing of composite material coupons. The machine in reference is a custom-made 6-DoF system called NRL66.3, developed at the Naval...

  16. 27. Bollinger twin-chain tandem, pig-casting machine, located at the north ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    27. Bollinger twin-chain tandem, pig-casting machine, located at the north end of the plant. Prior to closing, approximately 40 percent of the plant's iron production was cast into pigs and sold to foundry customers. The pig-casting machine employed a controller, lime man, trough man, and crane operator. - Central Furnaces, 2650 Broadway, east bank of Cuyahoga River, Cleveland, Cuyahoga County, OH

  17. Cloudbus Toolkit for Market-Oriented Cloud Computing

    NASA Astrophysics Data System (ADS)

    Buyya, Rajkumar; Pandey, Suraj; Vecchiola, Christian

    This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing SDK (Software Development Kit) for construction of Cloud applications and deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of 3rd party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim supporting modelling and simulation of Clouds for performance studies; (v) Energy Efficient Resource Allocation Mechanisms and Techniques for creation and management of Green Clouds; and (vi) pathways for future research.

  18. Multi-objective problem of the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints

    NASA Astrophysics Data System (ADS)

    Amallynda, I.; Santosa, B.

    2017-11-01

    This paper proposes a new generalization of the distributed parallel machine and assembly scheduling problem (DPMASP) with eligibility constraints, referred to as the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints. Within this generalization, we assume a set of non-identical factories or production lines, each with a set of unrelated parallel machines of different speeds feeding a single assembly machine in series. A set of different products is manufactured through an assembly program from a set of components (jobs) according to the requested demand. Each product requires several kinds of jobs of different sizes. Besides that, we also consider the multi-objective problem (MOP) of minimizing mean flow time and the number of tardy products simultaneously. The problem is known to be NP-hard and is important in practice, as these criteria reflect the customer's demand and the manufacturer's perspective. Because this is a realistic and complex problem with a wide range of possible solutions, we propose four simple heuristics and two metaheuristics to solve it. The various parameters of the proposed metaheuristic algorithms are discussed and calibrated by means of the Taguchi technique. All proposed algorithms are tested in Matlab. Our computational experiments indicate that the proposed problem formulation and the four proposed algorithms can be implemented and used to solve moderately sized instances, giving efficient solutions that are close to optimum in most cases.

  19. A deep semantic mobile application for thyroid cytopathology

    NASA Astrophysics Data System (ADS)

    Kim, Edward; Corte-Real, Miguel; Baloch, Zubair

    2016-03-01

    Cytopathology is the study of disease at the cellular level and is often used as a screening tool for cancer. Thyroid cytopathology is a branch of pathology that studies the diagnosis of thyroid lesions and diseases. A pathologist views cell images that may have high visual variance due to different anatomical structures and pathological characteristics. To assist the physician with identifying and searching through images, we propose a deep semantic mobile application. Our work builds on recent advances in the digitization of pathology and machine learning techniques, where there are transformative opportunities for computers to assist pathologists. Our system uses a custom thyroid ontology that can be augmented with multimedia metadata extracted from images using deep machine learning techniques. We describe the application of a particular methodology, deep convolutional neural networks, to cytopathology classification. Our method is able to leverage networks that have been trained on millions of generic images for medical scenarios where only hundreds or thousands of images exist. We demonstrate the benefits of our framework through both quantitative and qualitative results.
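
    The transfer-learning step mentioned above (reusing a network trained on millions of generic images when only a few thousand medical images exist) can be sketched as follows. This is a generic fine-tuning recipe, not the authors' code; the class count and the data loader are hypothetical.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a CNN pretrained on millions of generic images (ImageNet).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the generic feature extractor; only the new head will be trained.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the classifier head for a small set of cytopathology classes.
    num_classes = 6  # hypothetical number of diagnostic categories
    model.fc = nn.Linear(model.fc.in_features, num_classes)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    def train_epoch(loader):
        """One pass over a (hypothetical) labeled thyroid-image DataLoader."""
        model.train()
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    ```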

  20. 27 CFR 46.166 - Dealing in tobacco products.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... packages, provided the products remain in the packages until removed by the customer or in the presence of the customer. Where a vending machine is used, tobacco products must similarly be vended in proper... consumption in the United States unless such articles are removed from their export packaging and repackaged...

  1. NDE and SHM Simulation for CFRP Composites

    NASA Technical Reports Server (NTRS)

    Leckey, Cara A. C.; Parker, F. Raymond

    2014-01-01

    Ultrasound-based nondestructive evaluation (NDE) is a common technique for damage detection in composite materials. There is a need for advanced NDE that goes beyond damage detection to damage quantification and characterization in order to enable data-driven prognostics. The damage types that exist in carbon fiber-reinforced polymer (CFRP) composites include microcracking and delaminations, and can be initiated and grown via impact forces (due to ground vehicles, tool drops, bird strikes, etc.), fatigue, and extreme environmental changes. X-ray microfocus computed tomography data, among other methods, have shown that these damage types often result in voids/discontinuities of a complex volumetric shape. The specific damage geometry and location within ply layers affect damage growth. Realistic three-dimensional NDE and structural health monitoring (SHM) simulations can aid in the development and optimization of damage quantification and characterization techniques. This paper is an overview of ongoing work towards realistic NDE and SHM simulation tools for composites, and also discusses NASA's need for such simulation tools in aeronautics and spaceflight. The paper describes the development and implementation of a custom ultrasound simulation tool that is used to model ultrasonic wave interaction with realistic 3-dimensional damage in CFRP composites. The custom code uses the elastodynamic finite integration technique and is parallelized to run efficiently on computing clusters or multicore machines.

  2. Computer Technology for Industry

    NASA Technical Reports Server (NTRS)

    1979-01-01

    In this age of the computer, more and more business firms are automating their operations for increased efficiency in a great variety of jobs, from simple accounting to managing inventories, from precise machining to analyzing complex structures. In the interest of national productivity, NASA is providing assistance both to longtime computer users and newcomers to automated operations. Through a special technology utilization service, NASA saves industry time and money by making available already developed computer programs which have secondary utility. A computer program is essentially a set of instructions which tells the computer how to produce desired information or effect by drawing upon its stored input. Developing a new program from scratch can be costly and time-consuming. Very often, however, a program developed for one purpose can readily be adapted to a totally different application. To help industry take advantage of existing computer technology, NASA operates the Computer Software Management and Information Center (COSMIC) (registered trademark), located at the University of Georgia. COSMIC maintains a large library of computer programs developed for NASA, the Department of Defense, the Department of Energy and other technology-generating agencies of the government. The Center gets a continual flow of software packages, screens them for adaptability to private sector usage, stores them and informs potential customers of their availability.

  3. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 3 2012-01-01 2012-01-01 false Computer services for customers of subsidiary...) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for customers of... understood from the facts presented that the service company owns a computer which it utilizes to furnish...

  4. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 3 2011-01-01 2011-01-01 false Computer services for customers of subsidiary...) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for customers of... understood from the facts presented that the service company owns a computer which it utilizes to furnish...

  5. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Computer services for customers of subsidiary...) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for customers of... understood from the facts presented that the service company owns a computer which it utilizes to furnish...

  6. Integrated Multi-Scale Data Analytics and Machine Learning for the Distribution Grid and Building-to-Grid Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Emma M.; Hendrix, Val; Chertkov, Michael

    This white paper introduces the application of advanced data analytics to the modernized grid. In particular, we consider the field of machine learning and where it is both useful, and not useful, for the particular field of the distribution grid and buildings interface. While analytics, in general, is a growing field of interest, and often seen as the golden goose in the burgeoning distribution grid industry, its application is often limited by communications infrastructure, or lack of a focused technical application. Overall, the linkage of analytics to purposeful application in the grid space has been limited. In this paper we consider the field of machine learning as a subset of analytical techniques, and discuss its ability and limitations to enable the future distribution grid and the building-to-grid interface. To that end, we also consider the potential for mixing distributed and centralized analytics and the pros and cons of these approaches. Machine learning is a subfield of computer science that studies and constructs algorithms that can learn from data and make predictions and improve forecasts. Incorporation of machine learning in grid monitoring and analysis tools may have the potential to solve data and operational challenges that result from increasing penetration of distributed and behind-the-meter energy resources. There is an exponentially expanding volume of measured data being generated on the distribution grid, which, with appropriate application of analytics, may be transformed into intelligible, actionable information that can be provided to the right actors – such as grid and building operators, at the appropriate time to enhance grid or building resilience, efficiency, and operations against various metrics or goals – such as total carbon reduction or other economic benefit to customers. While some basic analysis into these data streams can provide a wealth of information, computational and human boundaries on performing the analysis are becoming significant, with more data and multi-objective concerns. Efficient applications of analysis and the machine learning field are being considered in the loop.

  7. High-performance reconfigurable hardware architecture for restricted Boltzmann machines.

    PubMed

    Ly, Daniel Le; Chow, Paul

    2010-11-01

    Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
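
    To make the reported "connection-updates-per-second" figure concrete, the sketch below shows the core RBM computation (one alternating Gibbs sampling step) that such hardware engines parallelize. It is a generic textbook formulation in NumPy, assuming binary units and ignoring training; it is not the FPGA design itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gibbs_step(v, W, b_vis, b_hid):
        """One alternating Gibbs step: sample hidden units, then visible units.

        Each call touches every visible-hidden connection twice; this dense
        matrix work is what the customized hardware engines parallelize.
        """
        p_h = sigmoid(v @ W + b_hid)                  # hidden unit probabilities
        h = (rng.random(p_h.shape) < p_h).astype(float)
        p_v = sigmoid(h @ W.T + b_vis)                # visible reconstruction
        v_new = (rng.random(p_v.shape) < p_v).astype(float)
        return v_new, h

    # Example: a 256 x 256 RBM, matching the size reported above.
    n_vis, n_hid = 256, 256
    W = rng.normal(0.0, 0.01, size=(n_vis, n_hid))
    v = (rng.random(n_vis) < 0.5).astype(float)
    v, h = gibbs_step(v, W, np.zeros(n_vis), np.zeros(n_hid))
    ```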

  8. An ant colony optimization heuristic for an integrated production and distribution scheduling problem

    NASA Astrophysics Data System (ADS)

    Chang, Yung-Chia; Li, Vincent C.; Chiang, Chia-Ju

    2014-04-01

    Make-to-order or direct-order business models that require close interaction between production and distribution activities have been adopted by many enterprises in order to be competitive in demanding markets. This article considers an integrated production and distribution scheduling problem in which jobs are first processed by one of the unrelated parallel machines and then distributed to corresponding customers by capacitated vehicles without intermediate inventory. The objective is to find a joint production and distribution schedule so that the weighted sum of total weighted job delivery time and the total distribution cost is minimized. This article presents a mathematical model for describing the problem and designs an algorithm using ant colony optimization. Computational experiments illustrate that the algorithm developed is capable of generating near-optimal solutions. The computational results also demonstrate the value of integrating production and distribution in the model for the studied problem.
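
    The objective described above is easy to state in code. The sketch below evaluates a candidate joint schedule under the stated objective (a weighted sum of total weighted job delivery time and total distribution cost); the data layout and field names are assumptions for illustration, not the authors' model, and the ant colony search that generates candidate schedules is omitted.

    ```python
    def schedule_cost(jobs, alpha):
        """Weighted sum of total weighted delivery time and distribution cost.

        jobs:  list of dicts with hypothetical fields 'weight' (job importance),
               'delivery_time' (when the vehicle reaches the customer), and
               'trip_cost' (the job's share of its vehicle trip cost).
        alpha: trade-off weight between the two objective terms (0..1).
        """
        weighted_delivery = sum(j["weight"] * j["delivery_time"] for j in jobs)
        distribution_cost = sum(j["trip_cost"] for j in jobs)
        return alpha * weighted_delivery + (1.0 - alpha) * distribution_cost

    # Compare two candidate schedules an ant colony search might produce.
    plan_a = [{"weight": 2, "delivery_time": 5.0, "trip_cost": 3.0},
              {"weight": 1, "delivery_time": 8.0, "trip_cost": 2.5}]
    plan_b = [{"weight": 2, "delivery_time": 6.5, "trip_cost": 2.0},
              {"weight": 1, "delivery_time": 7.0, "trip_cost": 2.0}]
    best = min((plan_a, plan_b), key=lambda p: schedule_cost(p, alpha=0.5))
    ```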

  9. Diamond Eye: a distributed architecture for image data mining

    NASA Astrophysics Data System (ADS)

    Burl, Michael C.; Fowlkes, Charless; Roden, Joe; Stechert, Andre; Mukhtar, Saleem

    1999-02-01

    Diamond Eye is a distributed software architecture, which enables users (scientists) to analyze large image collections by interacting with one or more custom data mining servers via a Java applet interface. Each server is coupled with an object-oriented database and a computational engine, such as a network of high-performance workstations. The database provides persistent storage and supports querying of the 'mined' information. The computational engine provides parallel execution of expensive image processing, object recognition, and query-by-content operations. Key benefits of the Diamond Eye architecture are: (1) the design promotes trial evaluation of advanced data mining and machine learning techniques by potential new users (all that is required is to point a web browser to the appropriate URL), (2) software infrastructure that is common across a range of science mining applications is factored out and reused, and (3) the system facilitates closer collaborations between algorithm developers and domain experts.

  10. Advancements in Binder Systems for Solid Freeform Fabrication

    NASA Technical Reports Server (NTRS)

    Cooper, Ken; Munafo, Paul (Technical Monitor)

    2002-01-01

    This paper presents recent developments in advanced material binder systems for solid freeform fabrication (SFF) technologies. The advantage of SFF is the capability to custom fabricate complex geometries directly from computer aided design data in layer-by-layer fashion, eliminating the need for traditional fixturing and tooling. Binders allow for the low temperature processing of 'green' structural materials, whether metal, ceramic or composite, in traditional rapid prototyping machines. The greatest obstacle comes when green parts must then go through a sintering or burnout process to remove the binders and fully densify the parent material without damaging or distorting the original part geometry. Critical issues and up-to-date assessments of various material systems will be delivered.

  11. Design and construction of a cost-efficient Arduino-based mirror galvanometer system for scanning optical microscopy

    NASA Astrophysics Data System (ADS)

    Hsu, Jen-Feng; Dhingra, Shonali; D'Urso, Brian

    2017-01-01

    Mirror galvanometer systems (galvos) are commonly employed in research and commercial applications in areas involving laser imaging, laser machining, laser-light shows, and others. Here, we present a robust, moderate-speed, and cost-efficient home-built galvo system. The mechanical part of this design consists of one mirror, which is tilted around two axes with multiple surface transducers. We demonstrate the ability of this galvo by scanning the mirror using a computer, via a custom driver circuit. The performance of the galvo, including scan range, noise, linearity, and scan speed, is characterized. As an application, we show that this galvo system can be used in a confocal scanning microscopy system.

  12. Vocational Home Economics Education. Custom Sewing.

    ERIC Educational Resources Information Center

    Halmes, Ellen; Truitt, Debbie

    This curriculum guide for those who desire to make a full- or part-time career of custom sewing is designed with the domestic sewing machine in mind for the independent worker or small business. Intended for grades 11-12 consumer and homemaking students with two years of previous vocational home economics or students enrolled in occupational…

  13. Predictive modeling for corrective maintenance of imaging devices from machine logs.

    PubMed

    Patil, Ravindra B; Patil, Meru A; Ravi, Vidya; Naik, Sarif

    2017-07-01

    In the cost-sensitive healthcare industry, unplanned downtime of diagnostic and therapy imaging devices can be a burden on the financials of both hospitals and the original equipment manufacturers (OEMs). In the current era of connectivity, it is easy to get these devices connected to a standard monitoring station. Once a system is connected, OEMs can monitor the health of these devices remotely and take corrective actions by providing preventive maintenance, thereby avoiding major unplanned downtime. In this article, we present an overall methodology for predicting failure of these devices well before the customer experiences it. We use a data-driven approach based on machine learning to predict failures, in turn resulting in reduced machine downtime, improved customer satisfaction and cost savings for the OEMs. One use case, predicting component failure of the PHILIPS iXR system, is explained in this article.
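
    A minimal sketch of this kind of log-driven failure prediction is shown below, assuming log events have already been aggregated into per-window feature vectors. The features, data, and threshold are hypothetical placeholders; the article's actual pipeline is not detailed in this abstract.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical features extracted from machine logs over a time window,
    # e.g. error-code counts, component temperature statistics, usage hours.
    X = rng.random((500, 8))                # 500 windows x 8 log-derived features
    y = rng.integers(0, 2, 500)             # 1 = component failed soon afterwards

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # Flag devices whose predicted failure probability exceeds a threshold so
    # field engineers can schedule preventive maintenance before the failure.
    risk = clf.predict_proba(X_te)[:, 1]
    flagged = np.where(risk > 0.8)[0]
    ```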

  14. A Multi-Component Automated Laser-Origami System for Cyber-Manufacturing

    NASA Astrophysics Data System (ADS)

    Ko, Woo-Hyun; Srinivasa, Arun; Kumar, P. R.

    2017-12-01

    Cyber-manufacturing systems can be enhanced by an integrated network architecture that is easily configurable, reliable, and scalable. We consider a cyber-physical system for use in an origami-type laser-based custom manufacturing machine employing folding and cutting of sheet material to manufacture 3D objects. We have developed such a system for use in a laser-based autonomous custom manufacturing machine equipped with real-time sensing and control. The basic elements in the architecture are built around the laser processing machine. They include a sensing system to estimate the state of the workpiece, a control system determining control inputs for a laser system based on the estimated data and user’s job requests, a robotic arm manipulating the workpiece in the work space, and middleware, named Etherware, supporting the communication among the systems. We demonstrate automated 3D laser cutting and bending to fabricate a 3D product as an experimental result.

  15. Computational work and time on finite machines.

    NASA Technical Reports Server (NTRS)

    Savage, J. E.

    1972-01-01

    Measures of the computational work and computational delay required by machines to compute functions are given. Exchange inequalities are developed for random access, tape, and drum machines to show that product inequalities between storage and time, number of drum tracks and time, number of bits in an address and time, etc., must be satisfied to compute finite functions on bounded machines.

  16. 75 FR 3253 - Lamb Assembly and Test, LLC, Subsidiary of Mag Industrial Automation Systems, Machesney Park, IL...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-20

    ..., LLC, Subsidiary of Mag Industrial Automation Systems, Machesney Park, IL; Notice of Negative... automation equipment and machine tools did not contribute to worker separations at the subject facility and...' firm's declining customers. The survey revealed no imports of automation equipment and machine tools by...

  17. State of the Art of Network Security Perspectives in Cloud Computing

    NASA Astrophysics Data System (ADS)

    Oh, Tae Hwan; Lim, Shinyoung; Choi, Young B.; Park, Kwang-Roh; Lee, Heejo; Choi, Hyunsang

    Cloud computing is now regarded as a social phenomenon that satisfies customers' needs. It may be that customers' needs and the primary principle of economy - gaining maximum benefit from minimum investment - are reflected in the realization of cloud computing. We are living in a connected society flooded with information; without computers connected to the Internet, our daily activities and work would be impossible. Cloud computing is able to provide customers with custom-tailored application software features and user environments based on the customer's needs, by adopting on-demand outsourcing of computing resources through the Internet. It also provides cloud computing users with high-end computing power and expensive application software packages, and accordingly users can access their data and application software located on the remote system. As the cloud computing system is connected to the Internet, network security issues of cloud computing must be considered before real-world service. In this paper, survey results and issues on network security in cloud computing are discussed from the perspective of real-world service environments.

  18. Computer programing for geosciences: Teach your students how to make tools

    NASA Astrophysics Data System (ADS)

    Grapenthin, Ronni

    2011-12-01

    When I announced my intention to pursue a Ph.D. in geophysics, some people gave me confused looks, because I was working on a master's degree in computer science at the time. My friends, like many incoming geoscience graduate students, have trouble linking these two fields. From my perspective, it is pretty straightforward: Much of geoscience evolves around novel analyses of large data sets that require custom tools—computer programs—to minimize the drudgery of manual data handling; other disciplines share this characteristic. While most faculty adapted to the need for tool development quite naturally, as they grew up around computer terminal interfaces, incoming graduate students lack intuitive understanding of programing concepts such as generalization and automation. I believe the major cause is the intuitive graphical user interfaces of modern operating systems and applications, which isolate the user from all technical details. Generally, current curricula do not recognize this gap between user and machine. For students to operate effectively, they require specialized courses teaching them the skills they need to make tools that operate on particular data sets and solve their specific problems. Courses in computer science departments are aimed at a different audience and are of limited help.

  19. Topographic Maps from a Kiosk

    USGS Publications Warehouse

    ,

    2001-01-01

    In April 2000, the U.S. Geological Survey (USGS) and National Geographic (NG) TOPO entered into a cooperative research and development agreement (CRADA) to explore a new technology that would allow a person to walk into a map retail store and print a personalized topographic map, vending machine style, from a self-service kiosk. Work began to develop systems that offer seamless, digitally stored USGS topographic maps using map-on-demand software from NG TOPO. The vending machine approach ensures that maps are never out of stock, allows customers to define their own map boundaries, and gives customers choices regarding shaded relief and the grids to be printed on the maps to get the exact maps they need.

  20. Paging memory from random access memory to backing storage in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

    2013-05-21

    Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.

  1. Engineering specification and system design for CAD/CAM of custom shoes: UMC project effort

    NASA Technical Reports Server (NTRS)

    Bao, Han P.

    1990-01-01

    Further experiments were conducted to improve the design and fabrication techniques of the integrated sole. The sole design is shown to be related to the foot position requirements and the actual shape of the foot, including the presence of neurotropic ulcers or other infections. Factors for consideration were: heel pitch, balance line, and rigidity conditions of the foot. Machining considerations were also part of the design problem. Among these considerations, the width of each contour, tool motion, tool feed rate, depths of cut, and slopes of cut at the boundary were the key elements. The essential fabrication technique evolved around the idea of machining a mold and then, using quick-firm latex material, casting the sole through the mold. Two main mold materials were experimented with: plaster and wood. Plaster was very easy to machine and shape but could barely support the pressure in the hydraulic press required by the casting process. Wood was found to be quite effective in terms of relative cost, strength, and surface smoothness, except for the problem of cutting against the fibers, which could generate ragged surfaces. The programming efforts to convert the original dBase programs into C programs so that they could be executed on the SUN Computer at North Carolina State University are also discussed.

  2. Investigation of roughing machining simulation by using visual basic programming in NX CAM system

    NASA Astrophysics Data System (ADS)

    Hafiz Mohamad, Mohamad; Nafis Osman Zahid, Muhammed

    2018-03-01

    This paper outlines a simulation study investigating the characteristics of roughing machining simulation in 4th-axis milling processes by utilizing Visual Basic programming in the NX CAM system. The selection and optimization of cutting orientation in rough milling operations is critical in 4th-axis machining. The main purpose of the roughing operation is to approximately shape the machined parts into the finished form by removing the bulk of the material from the workpieces. In this paper, the simulations are executed by manipulating a set of different cutting orientations to generate the estimated volume removed from the machined parts. The cutting orientation with the highest volume removal is denoted as the optimum value and chosen to execute the roughing operation. In order to run the simulations, customized software was developed to assist the routines. Operation build-up instructions in the NX CAM interface are translated into program code via the advanced tools available in Visual Basic Studio. The code is customized and equipped with decision-making tools to run and control the simulations, and it permits integration with independent program files to execute specific operations. This paper discusses the simulation program and identifies optimum cutting orientations for roughing processes. The output of this study will broaden the simulation routines performed in NX CAM systems.
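
    The orientation-selection logic described above reduces to a simple search. The following Python sketch illustrates it (the paper's actual tool is Visual Basic driving NX CAM); simulate_roughing is a hypothetical stand-in for the NX CAM simulation call that returns the estimated removed volume for a given 4th-axis orientation.

    ```python
    def pick_orientation(orientations, simulate_roughing):
        """Return the 4th-axis cutting orientation that removes the most material.

        simulate_roughing is a hypothetical stand-in for the NX CAM simulation
        call; it takes an orientation (degrees) and returns the estimated
        removed volume for a roughing pass at that orientation.
        """
        volumes = {angle: simulate_roughing(angle) for angle in orientations}
        best_angle = max(volumes, key=volumes.get)
        return best_angle, volumes

    # Example: evaluate candidate orientations every 30 degrees about the 4th axis.
    # best_angle, volumes = pick_orientation(range(0, 360, 30), simulate_roughing)
    ```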

  3. Quantum machine learning.

    PubMed

    Biamonte, Jacob; Wittek, Peter; Pancotti, Nicola; Rebentrost, Patrick; Wiebe, Nathan; Lloyd, Seth

    2017-09-13

    Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable.

  4. Quantum machine learning

    NASA Astrophysics Data System (ADS)

    Biamonte, Jacob; Wittek, Peter; Pancotti, Nicola; Rebentrost, Patrick; Wiebe, Nathan; Lloyd, Seth

    2017-09-01

    Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable.

  5. Computer network defense system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, enabling protection of the group of the virtual machines from actions performed by the adversary.

  6. Towards On-Line Services Based on a Holistic Analysis of Human Activities

    NASA Technical Reports Server (NTRS)

    Clancey, William J.

    2004-01-01

    Very often computer scientists view computerization of services in terms of the logistics of human-machine interaction, including establishing a contract, accessing records, and of course designing an interface. But this analysis often moves too quickly to tactical details, failing to frame the entire service in human terms, and not recognizing the mutual learning required to define and relate goals, constraints, and the personalized value of available services. In particular, on-line services that "computerize communication" can be improved by constructing an activity model of what the person is trying to do, not just filtering, comparing, and selling piece-meal services. For example, from the customer's perspective the task of an on-line travel service is not merely to establish confirmed reservations, but to have a complete travel plan, usually integrating many days of transportation, lodging, and recreation into a happy experience. The task of the travel agent is not merely "ticketing", but helping the customer understand what they want and providing services that will connect everything together in an enjoyable way.

  7. Custom-made laser-welded titanium implant prosthetic abutment.

    PubMed

    Iglesia-Puig, Miguel A

    2005-10-01

    A technique to create an individually modified implant prosthetic abutment is described. An overcasting is waxed onto a machined titanium abutment, cast in titanium, and joined to it with laser welding. With the proposed technique, a custom-made titanium implant prosthetic abutment is created with adequate volume and contour of metal to support a screw-retained, metal-ceramic implant-supported crown.

  8. The role of soft computing in intelligent machines.

    PubMed

    de Silva, Clarence W

    2003-08-15

    An intelligent machine relies on computational intelligence in generating its intelligent behaviour. This requires a knowledge system in which representation and processing of knowledge are central functions. Approximation is a 'soft' concept, and the capability to approximate for the purposes of comparison, pattern recognition, reasoning, and decision making is a manifestation of intelligence. This paper examines the use of soft computing in intelligent machines. Soft computing is an important branch of computational intelligence, where fuzzy logic, probability theory, neural networks, and genetic algorithms are synergistically used to mimic the reasoning and decision making of a human. This paper explores several important characteristics and capabilities of machines that exhibit intelligent behaviour. The paper presents a general structure for an intelligent machine, giving particular emphasis to its primary components, such as sensors, actuators, controllers, and the communication backbone, and their interaction. The role of soft computing within the overall system is discussed. Common techniques and approaches that are useful in the development of an intelligent machine are introduced, and the main steps in the development of an intelligent machine for practical use are given. An industrial machine, which employs the concepts of soft computing in its operation, is presented, and one aspect of intelligent tuning, which is incorporated into the machine, is illustrated.
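
    As a flavor of the fuzzy-logic ingredient (illustrative only; this is not the industrial machine or the tuning scheme from the paper), a single fuzzy rule can scale a controller gain by the degree to which an error is 'large':

    ```python
    # Illustrative only: one fuzzy rule of the kind soft computing uses
    # for intelligent tuning (not the paper's industrial machine).

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def tuned_gain(error):
        """IF |error| is 'large' THEN raise the gain, weighted by the
        degree of membership rather than a hard threshold."""
        mu_large = tri(abs(error), 0.2, 1.0, 1.8)
        return 1.0 + 0.5 * mu_large

    print(tuned_gain(0.9))  # ~1.44: mostly-'large' error, higher gain
    ```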

  9. Information and psychomotor skills knowledge acquisition: A student-customer-centered and computer-supported approach.

    PubMed

    Nicholson, Anita; Tobin, Mary

    2006-01-01

    This presentation will discuss coupling commercial and customized computer-supported teaching aids to provide BSN nursing students with a friendly customer-centered self-study approach to psychomotor skill acquisition.

  10. Machinability of CAD-CAM materials.

    PubMed

    Chavali, Ramakiran; Nejat, Amir H; Lawson, Nathaniel C

    2017-08-01

    Although new materials are available for computer-aided design and computer-aided manufacturing (CAD-CAM) fabrication, limited information is available regarding their machinability. The depth of penetration of a milling tool into a material during a timed milling cycle may indicate its machinability. The purpose of this in vitro study was to compare the tool penetration rate for 2 polymer-containing CAD-CAM materials (Lava Ultimate and Enamic) and 2 ceramic-based CAD-CAM materials (e.max CAD and Celtra Duo). The materials were sectioned into 4-mm-thick specimens (n=5/material) and polished with 320-grit SiC paper. Each specimen was loaded into a custom milling apparatus. The apparatus pushed the specimens against a milling tool (E4D Tapered 2016000) rotating at 40 000 RPM with a constant force of 0.98 N. After a 6-minute timed milling cycle, the length of each milling cut was measured with image analysis software under a digital light microscope. Representative specimens and milling tools were examined with scanning electron microscopy (SEM) and energy dispersive x-ray spectroscopy. The penetration rate of Lava Ultimate (3.21 ±0.46 mm/min) and Enamic (2.53 ±0.57 mm/min) was significantly greater than that of e.max CAD (1.12 ±0.32 mm/min) or Celtra Duo (0.80 ±0.21 mm/min) materials. SEM observations showed little tool damage, regardless of material type. Residual material was found on the tools used with polymer-containing materials, and wear of the embedding medium was seen on the tools used with the ceramic-based materials. Edge chipping was noted on cuts made in the ceramic-based materials. Lava Ultimate and Enamic have greater machinability and less edge chipping than e.max CAD and Celtra Duo. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
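
    The machinability index used here is simple arithmetic: penetration per minute over a fixed timed cycle. A short sketch, using the mean rates reported above, normalizes each material against the hardest-to-mill one:

    ```python
    # Machinability as reported above: tool penetration per minute during
    # a fixed 6-minute timed cycle; rates below are the published means.

    CYCLE_MIN = 6.0

    def penetration_rate(cut_length_mm: float) -> float:
        """Convert a measured cut length into mm/min over the timed cycle."""
        return cut_length_mm / CYCLE_MIN

    rates_mm_per_min = {
        "Lava Ultimate": 3.21,
        "Enamic": 2.53,
        "e.max CAD": 1.12,
        "Celtra Duo": 0.80,
    }

    baseline = rates_mm_per_min["Celtra Duo"]   # hardest to mill
    for material, rate in rates_mm_per_min.items():
        print(f"{material}: {rate / baseline:.1f}x Celtra Duo")
    ```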

  11. SU-F-E-13: Design and Fabrication of Gynecological Brachytherapy Shielding & Non Shielding Applicators Using Indigenously Developed 3D Printing Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shanmugam, S

    Purpose: In this work we developed gynecological brachytherapy shielding and non-shielding applicators using an indigenously developed 3D printing machine and compared them with commercially available applicators. Methods: We successfully developed the 3D printing machine in-house; it comprises a three-dimensional motion platform, heater unit, base plate, etc. To fabricate the shielding and non-shielding applicators, 3D designs were created as virtual designs in a CAD modeling program, with separate programs for the shielding and non-shielding applicators. Extra catheter-insert provision was included in the applicator to accommodate multiple catheters. The DICOM file of each applicator was then converted to a stereolithography file for the 3D printer, and the shielding and non-shielding applicators were printed on the indigenously developed 3D printer. The same dimensions were used to fabricate acrylic applicators for a comparative study. A CT scan was performed to establish an infill-density calibration curve and to characterize print quality, such as uniformity and infill pattern. To commission the process, basic CT and dose properties of the printing materials were measured in photon beams and compared against water and soft tissue. The applicators were then scanned to confirm the placement of the multiple catheter positions. Finally, dose distributions with rescanned CTs were compared with those of the computer-generated applicators. Results: Doses measured with the ion chamber and X-Omat film agreed within 2%. The shielded applicator reduced the rectal dose compared with the non-shielded applicator. Conclusion: As of submission, three unique cylinders have been designed, printed, and tested dosimetrically. A standardizable workflow for commissioning custom 3D-printed applicators was codified and will be reported.

  12. A self-configuring control system for storage and computing departments at INFN-CNAF Tier1

    NASA Astrophysics Data System (ADS)

    Gregori, Daniele; Dal Pra, Stefano; Ricci, Pier Paolo; Pezzi, Michele; Prosperini, Andrea; Sapunenko, Vladimir

    2015-05-01

    The storage and farming departments at the INFN-CNAF Tier1 [1] manage thousands of computing nodes and several hundred servers that provide access to the disk and tape storage. In particular, the storage server machines provide the following services: efficient access to about 15 petabytes of disk space organized in different GPFS file-system clusters, data transfers between LHC Tier sites (Tier0, Tier1 and Tier2) via GridFTP clusters and the Xrootd protocol, and writing and reading operations on the magnetic tape backend. An essential requirement for a reliable service is a control system that warns when problems arise and can perform automatic recovery operations in case of service interruptions or major failures. Moreover, configurations change during daily operations: the roles of GPFS cluster nodes can be modified, so obsolete nodes must be removed from the production control system and new servers added to those already present. Managing all these changes manually is difficult when they are numerous; it can take a long time and is prone to human error or misconfiguration. For these reasons we have developed a control system that configures itself automatically whenever any change occurs. This system has been in production for about a year at the INFN-CNAF Tier1 with good results and hardly any major drawback. There are three key elements in this system. The first is a software configuration service (e.g., Quattor or Puppet) for the server machines to be monitored; this service must ensure the presence of appropriate sensors and custom scripts on the monitored nodes and must be able to install and update software packages on them. The second is a database containing information, in a suitable format, on all the machines in production, able to provide for each of them the principal information such as hardware type, the network switch to which the machine is connected, whether the machine is physical or virtual, the hypervisor to which it belongs, and so on. The third is the control-system software itself (in our implementation, Nagios), capable of assessing the status of servers and services, attempting to restore the working state, restarting or inhibiting software services, and sending suitable alarm messages to the site administrators. These three elements are integrated by scripts and custom implementations that allow the system to configure itself according to a decisional logic; the complete combination of the above components is discussed in depth in this paper.
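
    The self-configuration step can be pictured as regenerating the monitoring configuration from the machine database whenever it changes. The sketch below uses a hypothetical table layout and a Nagios-style host template; the production system additionally relies on Quattor/Puppet to keep sensors and scripts installed on the nodes:

    ```python
    # Sketch of the self-configuration step: regenerate Nagios host
    # definitions from the machine database on every change. The table
    # and columns are hypothetical.

    import sqlite3

    HOST_TEMPLATE = """define host {{
        use        generic-host
        host_name  {name}
        address    {address}
        _ROLE      {role}
    }}
    """

    def regenerate_nagios_config(db_path, out_path):
        con = sqlite3.connect(db_path)
        rows = con.execute(
            "SELECT name, address, role FROM machines WHERE in_production = 1"
        ).fetchall()
        con.close()
        with open(out_path, "w") as cfg:
            for name, address, role in rows:
                cfg.write(HOST_TEMPLATE.format(name=name, address=address,
                                               role=role))
        # A reload of Nagios would then pick up added or removed nodes
        # without any manual editing of the configuration.
    ```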

  13. Seismic waveform modeling over cloud

    NASA Astrophysics Data System (ADS)

    Luo, Cong; Friederich, Wolfgang

    2016-04-01

    With fast-growing computational technologies, numerical simulation of seismic wave propagation has achieved great success, and obtaining synthetic waveforms through numerical simulation is receiving increasing attention from seismologists. However, computational seismology is a data-intensive research field, and the numerical packages usually come with a steep learning curve: users are expected to master a considerable amount of computer knowledge and data-processing skills. Training users to operate the numerical packages and to access and utilize computational resources correctly is a demanding task, and access to HPC is a further common difficulty. To solve these problems, a cloud-based solution dedicated to shallow seismic waveform modeling has been developed with state-of-the-art web technologies. It is a web platform integrating software and hardware in a multilayer architecture: a well-designed SQL database serves as the data layer, while HPC and a dedicated pipeline form the business layer. Through this platform, users no longer need to compile and manipulate various packages on a local machine within a local network to perform a simulation. By providing professional access to the computational code through its interfaces and delivering computational resources to users over the cloud, the platform lets users customize simulations at expert level and submit and run jobs through it.

  14. Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models

    NASA Astrophysics Data System (ADS)

    Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan

    2017-04-01

    Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, simulations of complex physics may require substantial computational time; for example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in a heterogeneous porous medium may require several hours of wall-clock time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced-order modeling tools that couple support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict model behavior within the prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale the heterogeneity of the porous medium: fine-scale, high-resolution models of heterogeneity inform coarse-resolution models that have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation in which the developed reduced-order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
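
    A minimal sketch of the BSS step, with scikit-learn's NMF and plain KMeans standing in for the customized implementations in MADS (the clustering here is ordinary k-means, not the paper's customized variant): repeated NMF restarts generate candidate source signatures, which are then clustered so that cluster centers act as robust source estimates:

    ```python
    # Sketch of NMF-based blind source separation under stated
    # assumptions; scikit-learn replaces the MADS implementations.

    import numpy as np
    from sklearn.decomposition import NMF
    from sklearn.cluster import KMeans

    def separate_sources(X: np.ndarray, n_sources: int, n_runs: int = 20):
        """X: nonnegative (samples x geochemical species) data matrix."""
        candidates = []
        for seed in range(n_runs):
            model = NMF(n_components=n_sources, init="random",
                        random_state=seed)
            model.fit(X)
            candidates.append(model.components_)    # source signatures
        H_all = np.vstack(candidates)
        # Cluster the candidates; tight clusters indicate robust sources,
        # and the cluster centers serve as the source estimates.
        km = KMeans(n_clusters=n_sources, n_init=10).fit(H_all)
        return km.cluster_centers_
    ```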

  15. Ghost-in-the-Machine reveals human social signals for human-robot interaction.

    PubMed

    Loth, Sebastian; Jettka, Katharina; Giuliani, Manuel; de Ruiter, Jan P

    2015-01-01

    We used a new method called "Ghost-in-the-Machine" (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognize the intentions of customers on the basis of the output of the robotic recognizers. Specifically, we measured which recognizer modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behavior necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognizers were those based on computer vision. When drink orders were being placed, however, the most important information source was the speech recognition. Interestingly, the participants used only a subset of the available information, focussing only on a few relevant recognizers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer's requests, e.g., they tended to respond verbally to verbal requests. Also, they added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm in multimodal grammars of human-robot interactions improves the robustness and the ease-of-use of these interactions, and therefore provides a smoother user experience.

  16. Entanglement-Based Machine Learning on a Quantum Computer

    NASA Astrophysics Data System (ADS)

    Cai, X.-D.; Wu, D.; Su, Z.-E.; Chen, M.-C.; Wang, X.-L.; Li, Li; Liu, N.-L.; Lu, C.-Y.; Pan, J.-W.

    2015-03-01

    Machine learning, a branch of artificial intelligence, learns from previous experience to optimize performance, which is ubiquitous in various fields such as computer sciences, financial analysis, robotics, and bioinformatics. A challenge is that machine learning with the rapidly growing "big data" could become intractable for classical computers. Recently, quantum machine learning algorithms [Lloyd, Mohseni, and Rebentrost, arXiv.1307.0411] were proposed which could offer an exponential speedup over classical algorithms. Here, we report the first experimental entanglement-based classification of two-, four-, and eight-dimensional vectors to different clusters using a small-scale photonic quantum computer, which are then used to implement supervised and unsupervised machine learning. The results demonstrate the working principle of using quantum computers to manipulate and classify high-dimensional vectors, the core mathematical routine in machine learning. The method can, in principle, be scaled to larger numbers of qubits, and may provide a new route to accelerate machine learning.

  17. Cosmic logic: a computational model

    NASA Astrophysics Data System (ADS)

    Vanchurin, Vitaly

    2016-02-01

    We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for the analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape, and CM machines take a CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO Turing numbers as input, but output one if the CO machines are in the same equivalence class and zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine that discriminates between two classes of CO machines: mortal machines, which halt in finite time, and immortal machines, which run forever. In the context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.
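
    The non-computability result is, at heart, a reduction to the halting problem. A sketch of the argument in code form (illustrative only; `is_immortal` plays the role of the supposed CS machine and cannot actually be implemented):

    ```python
    # Illustrative reduction: if a CS machine could separate immortal
    # (never-halting) from mortal (halting) CO machines, the halting
    # problem would be decidable, a contradiction.

    def is_immortal(co_machine) -> bool:
        """Hypothetical CS machine; no such machine can exist."""
        raise NotImplementedError("no such CS machine can exist")

    def halts(co_machine) -> bool:
        """If is_immortal were computable, this would decide the
        halting problem, contradicting Turing (1936)."""
        return not is_immortal(co_machine)

    # Hence no CM machine can assign cut-off probabilities over the set
    # of all CO machines; restricting to machines that halt within a
    # predetermined number of steps avoids the obstruction.
    ```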

  18. Electronic vending machines for dispensing rapid HIV self-testing kits: a case study.

    PubMed

    Young, Sean D; Klausner, Jeffrey; Fynn, Risa; Bolan, Robert

    2014-02-01

    This short report evaluates the feasibility of using electronic vending machines for dispensing oral, fluid, rapid HIV self-testing kits in Los Angeles County. Feasibility criteria that needed to be addressed were defined as: (1) ability to find a manufacturer who would allow dispensing of HIV testing kits and could fit them to the dimensions of a vending machine, (2) ability to identify and address potential initial obstacles, trade-offs in choosing a machine location, and (3) ability to gain community approval for implementing this approach in a community setting. To address these issues, we contracted a vending machine company who could supply a customized, Internet-enabled machine that could dispense HIV kits and partnered with a local health center available to host the machine onsite and provide counseling to participants, if needed. Vending machines appear to be feasible technologies that can be used to distribute HIV testing kits.

  19. Electronic vending machines for dispensing rapid HIV self-testing kits: A case study

    PubMed Central

    Young, Sean D.; Klausner, Jeffrey; Fynn, Risa; Bolan, Robert

    2014-01-01

    This short report evaluates the feasibility of using electronic vending machines for dispensing oral, fluid, rapid HIV self-testing kits in Los Angeles County. Feasibility criteria that needed to be addressed were defined as: 1) ability to find a manufacturer who would allow dispensing of HIV testing kits and could fit them to the dimensions of a vending machine, 2) ability to identify and address potential initial obstacles, trade-offs in choosing a machine location, and 3) ability to gain community approval for implementing this approach in a community setting. To address these issues, we contracted a vending machine company who could supply a customized, Internet-enabled machine that could dispense HIV kits and partnered with a local health center available to host the machine onsite and provide counseling to participants, if needed. Vending machines appear to be feasible technologies that can be used to distribute HIV testing kits. PMID:23777528

  20. Testing meta tagger

    DTIC Science & Technology

    2017-12-21

    Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed [1]. Arthur Samuel, an American pioneer in the field of computer gaming and artificial intelligence, coined the term "machine learning" in 1959 while at IBM [2]. Machine learning is closely related to (and often overlaps with) computational statistics, which also focuses on prediction-making; application areas include learning to rank and computer vision. Evolved…

  1. Confabulation Based Sentence Completion for Machine Reading

    DTIC Science & Technology

    2010-11-01

    …thus making sentence completion an indispensable component of machine reading. Cogent confabulation is a bio-inspired computational model that mimics the… [1] …University Press, 1992. [2] H. Motoda and K. Yoshida, "Machine learning techniques to make computers easier to use," Proceedings of the Fifteenth…

  2. Using Microcomputers in Vocational Education to Teach Needed Skills in Machine Shop and Related Occupations. Final Report.

    ERIC Educational Resources Information Center

    Mercer County Schools, Princeton, WV.

    A project was undertaken to identify machine shop occupations requiring workers to use computers, identify the computer skills needed to perform machine shop tasks, and determine which software products are currently being used in machine shop programs. A search of the Dictionary of Occupational Titles revealed that computer skills will become…

  3. Engineering specification and system design for CAD/CAM of custom shoes. Phase 5: UMC involvement (January 1, 1989 - June 30, 1989)

    NASA Technical Reports Server (NTRS)

    Bao, Han P.

    1989-01-01

    The CAD/CAM of custom shoes is discussed. The solid object for machining is represented by a wireframe model with its nodes or vertices specified systematically in a grid pattern covering its entire length (point-to-point configuration). Two sets of data, from CENCIT and CYBERWARE, were used for machining purposes. It was found that the indexing technique (turning the stock by a small angle, then moving the tool on a longitudinal path along the foot) yields the best result in terms of ease of programming, savings in wear and tear of the machine and cutting tools, and resolution of fine surface details. The work done using the LASTMOD last-design system results in a shoe last specified by a number of congruent surface patches of different sizes. This data format was converted into a form amenable to the machine tool; the conversion involves a series of sorting and interpolation algorithms to provide the grid pattern that the machine tool needs, as in the point-to-point configuration discussed above. This report also contains an in-depth treatment of the design and production technique of an integrated sole to complement the design and manufacture of the shoe last. Clinical data and essential production parameters are discussed. Examples of soles made through this process are given.
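
    The sorting-and-interpolation step is essentially a resampling of scattered surface points onto the regular (index angle, longitudinal step) grid the machine tool expects. A minimal sketch under that assumption, using scipy's `griddata` in place of the report's custom routines:

    ```python
    # Resampling sketch under stated assumptions: scattered last-surface
    # points in cylindrical coordinates (theta, z, r), with scipy's
    # griddata replacing the report's custom sorting and interpolation.

    import numpy as np
    from scipy.interpolate import griddata

    def to_machining_grid(theta, z, r, n_theta=360, n_z=200):
        """Resample scattered surface points onto the regular grid that
        the point-to-point machining program expects."""
        theta_g = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
        z_g = np.linspace(z.min(), z.max(), n_z)
        T, Z = np.meshgrid(theta_g, z_g)
        R = griddata((theta, z), r, (T, Z), method="linear")
        return T, Z, R   # one radius per (index angle, longitudinal step)
    ```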

  4. The BaBar Data Reconstruction Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceseracciu, A

    2005-04-20

    The BaBar experiment is characterized by extremely high luminosity and very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a Control System has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of OO design. The infrastructure is well isolated from the processing layer, it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful Finite State Machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ~450 CPUs organized in 9 farms.

  5. USC orthogonal multiprocessor for image processing with neural networks

    NASA Astrophysics Data System (ADS)

    Hwang, Kai; Panda, Dhabaleswar K.; Haddadi, Navid

    1990-07-01

    This paper presents the architectural features and imaging applications of the Orthogonal MultiProcessor (OMP) system, which is under construction at the University of Southern California with research funding from NSF and assistance from several industrial partners. The prototype OMP is being built with 16 Intel i860 RISC microprocessors and 256 parallel memory modules using custom-designed spanning buses, which are 2-D interleaved and orthogonally accessed without conflicts. The 16-processor OMP prototype is targeted to achieve 430 MIPS and 600 Mflops, which have been verified by simulation experiments based on the design parameters used. The prototype OMP machine will be initially applied for image processing, computer vision, and neural network simulation applications. We summarize important vision and imaging algorithms that can be restructured with neural network models. These algorithms can efficiently run on the OMP hardware with linear speedup. The ultimate goal is to develop a high-performance Visual Computer (Viscom) for integrated low- and high-level image processing and vision tasks.

  6. The BaBar Data Reconstruction Control System

    NASA Astrophysics Data System (ADS)

    Ceseracciu, A.; Piemontese, M.; Tehrani, F. S.; Pulliam, T. M.; Galeazzi, F.

    2005-08-01

    The BaBar experiment is characterized by extremely high luminosity and very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a control system has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of object oriented (OO) design. The infrastructure is well isolated from the processing layer, it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful finite state machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ~450 CPUs organized in nine farms.

  7. Computer program BL2D for solving two-dimensional and axisymmetric boundary layers

    NASA Technical Reports Server (NTRS)

    Iyer, Venkit

    1995-01-01

    This report presents the formulation, validation, and user's manual for the computer program BL2D. The program is a fourth-order-accurate solution scheme for solving two-dimensional or axisymmetric boundary layers in speed regimes that range from low subsonic to hypersonic Mach numbers. A basic implementation of the transition zone and turbulence modeling is also included. The code is a result of many improvements made to the program VGBLP, which is described in NASA TM-83207 (February 1982), and can effectively supersede it. The code BL2D is designed to be modular, user-friendly, and portable to any machine with a standard fortran77 compiler. The report contains the new formulation adopted and the details of its implementation. Five validation cases are presented. A detailed user's manual with the input format description and instructions for running the code is included. Adequate information is presented in the report to enable the user to modify or customize the code for specific applications.

  8. Advanced Resistive Exercise Device

    NASA Technical Reports Server (NTRS)

    Raboin, Jasen; Niebuhr, Jason; Cruz, Santana; Lamoreaux, Chris

    2007-01-01

    The advanced resistive exercise device (ARED), now at the prototype stage of development, is a versatile machine that can be used to perform different customized exercises for which, heretofore, it has been necessary to use different machines. Conceived as a means of helping astronauts and others to maintain muscle and bone strength and endurance in low-gravity environments, the ARED could also prove advantageous in terrestrial settings (e.g., health clubs and military training facilities) in which many users are exercising simultaneously and there is heavy demand for use of exercise machines.

  9. Application of computer graphics in the design of custom orthopedic implants.

    PubMed

    Bechtold, J E

    1986-10-01

    Implementation of newly developed computer modelling techniques and computer graphics displays and software have greatly aided the orthopedic design engineer and physician in creating a custom implant with good anatomic conformity in a short turnaround time. Further advances in computerized design and manufacturing will continue to simplify the development of custom prostheses and enlarge their niche in the joint replacement market.

  10. Computing the Expected Cost of an Appointment Schedule for Statistically Identical Customers with Probabilistic Service Times

    PubMed Central

    Dietz, Dennis C.

    2014-01-01

    A cogent method is presented for computing the expected cost of an appointment schedule where customers are statistically identical, the service time distribution has known mean and variance, and customer no-shows occur with time-dependent probability. The approach is computationally efficient and can be easily implemented to evaluate candidate schedules within a schedule optimization algorithm. PMID:24605070
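
    Although the paper's method is analytic, the cost structure it evaluates is easy to state: customer waiting cost plus server idle cost under no-shows and random service times. A Monte Carlo sketch under assumed lognormal service times (moment-matched to the given mean and variance) illustrates it:

    ```python
    # Monte Carlo illustration of the cost structure; the paper computes
    # the expectation analytically. The lognormal choice and the linear
    # cost rates are assumptions for this sketch.

    import math
    import random

    def expected_cost(slots, mean, var, p_show, c_wait, c_idle, n=20000):
        # Moment-match a lognormal to the service-time mean/variance.
        sigma2 = math.log(1.0 + var / mean**2)
        mu = math.log(mean) - 0.5 * sigma2
        total = 0.0
        for _ in range(n):
            free_at, cost = 0.0, 0.0
            for t, p in zip(slots, p_show):
                if random.random() > p:                 # no-show
                    continue
                start = max(t, free_at)
                cost += c_wait * (start - t)            # customer waiting
                cost += c_idle * max(0.0, t - free_at)  # server idle
                free_at = start + random.lognormvariate(mu, math.sqrt(sigma2))
            total += cost
        return total / n

    # e.g. four 15-minute slots, 10% no-show probability per customer:
    # expected_cost([0, 15, 30, 45], mean=14, var=25,
    #               p_show=[0.9] * 4, c_wait=1.0, c_idle=2.0)
    ```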

  11. Computer Programmed Milling Machine Operations. High-Technology Training Module.

    ERIC Educational Resources Information Center

    Leonard, Dennis

    This learning module for a high school metals and manufacturing course is designed to introduce the concept of computer-assisted machining (CAM). Through it, students learn how to set up and put data into the controller to machine a part. They also become familiar with computer-aided manufacturing and learn the advantages of computer numerical…

  12. Lymphangiogram

    MedlinePlus

    ... type of x-ray machine, called a fluoroscope, projects the images on a TV monitor. The provider ...

  13. A novel toolpath force prediction algorithm using CAM volumetric data for optimizing robotic arthroplasty.

    PubMed

    Kianmajd, Babak; Carter, David; Soshi, Masakazu

    2016-10-01

    Robotic total hip arthroplasty is a procedure in which milling operations are performed on the femur to remove material for the insertion of a prosthetic implant. The robot performs the milling operation by following a sequential list of tool motions, also known as a toolpath, generated by computer-aided manufacturing (CAM) software. The purpose of this paper is to explain a new toolpath force prediction algorithm that predicts cutting forces, improving the quality and safety of surgical systems. With a custom macro developed in the CAM system's native application programming interface, cutting contact patch volume was extracted from CAM simulations, and a time-domain cutting force model was then developed through a cutting force prediction algorithm. The second portion of the study validated the algorithm by machining a hip canal in simulated bone using a CNC machine. Average cutting forces were measured during machining with a dynamometer and compared to the values predicted from CAM simulation data using the proposed method. The results showed the predicted forces matched the measured forces in both magnitude and overall pattern shape, although, due to inconsistent motion control, the time duration of the forces was slightly distorted. Nevertheless, the algorithm effectively predicted the forces throughout an entire hip canal procedure. This method provides a fast and easy technique for predicting cutting forces during orthopedic milling by utilizing data within CAM software.
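
    The core volumetric idea can be reduced to one relation: cutting power is specific cutting energy times material removal rate, so the average tangential force follows from the contact-patch volume per toolpath step. A hedged sketch (the paper's algorithm is richer, and the default k_c below is an assumed placeholder, not a value from the study):

    ```python
    # Hedged sketch: average tangential force = k_c * MRR / cutting speed,
    # where k_c is the specific cutting energy of the material (assumed
    # placeholder here, not taken from the study).

    import math

    def avg_cutting_force(patch_volume_mm3, dt_s, rpm, tool_dia_mm,
                          k_c_n_per_mm2=60.0):
        """patch_volume_mm3: volume removed in one toolpath step of
        duration dt_s, as extracted from the CAM simulation."""
        mrr = patch_volume_mm3 / dt_s                 # mm^3/s
        v_c = math.pi * tool_dia_mm * rpm / 60.0      # cutting speed, mm/s
        return k_c_n_per_mm2 * mrr / v_c              # mean force, N
    ```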

  14. Ensemble Methods for Classification of Physical Activities from Wrist Accelerometry.

    PubMed

    Chowdhury, Alok Kumar; Tjondronegoro, Dian; Chandran, Vinod; Trost, Stewart G

    2017-09-01

    To investigate whether the use of ensemble learning algorithms improve physical activity recognition accuracy compared to the single classifier algorithms, and to compare the classification accuracy achieved by three conventional ensemble machine learning methods (bagging, boosting, random forest) and a custom ensemble model comprising four algorithms commonly used for activity recognition (binary decision tree, k nearest neighbor, support vector machine, and neural network). The study used three independent data sets that included wrist-worn accelerometer data. For each data set, a four-step classification framework consisting of data preprocessing, feature extraction, normalization and feature selection, and classifier training and testing was implemented. For the custom ensemble, decisions from the single classifiers were aggregated using three decision fusion methods: weighted majority vote, naïve Bayes combination, and behavior knowledge space combination. Classifiers were cross-validated using leave-one subject out cross-validation and compared on the basis of average F1 scores. In all three data sets, ensemble learning methods consistently outperformed the individual classifiers. Among the conventional ensemble methods, random forest models provided consistently high activity recognition; however, the custom ensemble model using weighted majority voting demonstrated the highest classification accuracy in two of the three data sets. Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.
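
    The custom ensemble's winning fusion rule, weighted majority voting, is compact enough to sketch; here the weights are taken to be per-classifier validation scores, which is an assumption rather than the study's exact weighting scheme:

    ```python
    # Weighted majority vote over single-classifier predictions; the
    # weights here are assumed validation F1 scores, not the study's
    # exact scheme.

    from collections import defaultdict

    def weighted_majority_vote(predictions, weights):
        """predictions: per-classifier labels for one accelerometer window;
        weights: matching per-classifier weights."""
        score = defaultdict(float)
        for label, weight in zip(predictions, weights):
            score[label] += weight
        return max(score, key=score.get)

    # Decision tree, kNN, SVM, and ANN predictions for one window:
    print(weighted_majority_vote(["walk", "run", "walk", "sit"],
                                 [0.81, 0.77, 0.88, 0.70]))  # -> "walk"
    ```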

  15. Atomicrex—a general purpose tool for the construction of atomic interaction models

    NASA Astrophysics Data System (ADS)

    Stukowski, Alexander; Fransson, Erik; Mock, Markus; Erhart, Paul

    2017-07-01

    We introduce atomicrex, an open-source code for constructing interatomic potentials as well as more general types of atomic-scale models. Such effective models are required to simulate extended materials structures comprising many thousands of atoms or more, because electronic structure methods become computationally too expensive at this scale. atomicrex covers a wide range of interatomic potential types and fulfills many needs in atomistic model development. As inputs, it supports experimental property values as well as ab initio energies and forces, to which models can be fitted using various optimization algorithms. The open architecture of atomicrex allows it to be used in custom model development scenarios beyond classical interatomic potentials while thanks to its Python interface it can be readily integrated e.g., with electronic structure calculations or machine learning algorithms.

  16. Method and system for rendering and interacting with an adaptable computing environment

    DOEpatents

    Osbourn, Gordon Cecil [Albuquerque, NM; Bouchard, Ann Marie [Albuquerque, NM

    2012-06-12

    An adaptable computing environment is implemented with software entities termed "s-machines", which self-assemble into hierarchical data structures capable of rendering and interacting with the computing environment. A hierarchical data structure includes a first hierarchical s-machine bound to a second hierarchical s-machine. The first hierarchical s-machine is associated with a first layer of a rendering region on a display screen and the second hierarchical s-machine is associated with a second layer of the rendering region overlaying at least a portion of the first layer. A screen element s-machine is linked to the first hierarchical s-machine. The screen element s-machine manages data associated with a screen element rendered to the display screen within the rendering region at the first layer.

  17. The IBM PC as an Online Search Machine--Part 2: Physiology for Searchers.

    ERIC Educational Resources Information Center

    Kolner, Stuart J.

    1985-01-01

    Enumerates "hardware problems" associated with use of the IBM personal computer as an online search machine: purchase of machinery, unpacking of parts, and assembly into a properly functioning computer. Components that allow transformations of computer into a search machine (combination boards, printer, modem) and diagnostics software…

  18. Early experiences in developing and managing the neuroscience gateway.

    PubMed

    Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas T

    2015-02-01

    The last few decades have seen the emergence of computational neuroscience as a mature field where researchers are interested in modeling complex and large neuronal systems and require access to high performance computing machines and associated cyber infrastructure to manage computational workflow and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and suitable for high performance computing machines. But using these tools on complex high performance computing machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, dealing with the complex user interfaces of these machines, and dealing with data management and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of the users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares the early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use this for computational neuroscience research using high performance computing at the back end. We also look at parallel scaling of some publicly available neuronal models and analyze the recent usage data of the neuroscience gateway.

  19. Early experiences in developing and managing the neuroscience gateway

    PubMed Central

    Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas T.

    2015-01-01

    SUMMARY The last few decades have seen the emergence of computational neuroscience as a mature field where researchers are interested in modeling complex and large neuronal systems and require access to high performance computing machines and associated cyber infrastructure to manage computational workflow and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and suitable for high performance computing machines. But using these tools on complex high performance computing machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, dealing with the complex user interfaces of these machines, and dealing with data management and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of the users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares the early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use this for computational neuroscience research using high performance computing at the back end. We also look at parallel scaling of some publicly available neuronal models and analyze the recent usage data of the neuroscience gateway. PMID:26523124

  20. Amp: A modular approach to machine learning in atomistic simulations

    NASA Astrophysics Data System (ADS)

    Khorshidi, Alireza; Peterson, Andrew A.

    2016-10-01

    Electronic structure calculations, such as those employing Kohn-Sham density functional theory or ab initio wavefunction theories, have allowed for atomistic-level understandings of a wide variety of phenomena and properties of matter at small scales. However, the computational cost of electronic structure methods drastically increases with length and time scales, which makes these methods difficult for long time-scale molecular dynamics simulations or large-sized systems. Machine-learning techniques can provide accurate potentials that can match the quality of electronic structure calculations, provided sufficient training data. These potentials can then be used to rapidly simulate large and long time-scale phenomena at similar quality to the parent electronic structure approach. Machine-learning potentials usually take a bias-free mathematical form and can be readily developed for a wide variety of systems. Electronic structure calculations have favorable properties (namely, that they are noiseless and that targeted training data can be produced on demand) that make them particularly well-suited for machine learning. This paper discusses our modular approach to atomistic machine learning through the development of the open-source Atomistic Machine-learning Package (Amp), which allows for representations of both the total and atom-centered potential energy surface, in both periodic and non-periodic systems. Potentials developed through the atom-centered approach are simultaneously applicable for systems with various sizes. Interpolation can be enhanced by introducing custom descriptors of the local environment. We demonstrate this in the current work for Gaussian-type, bispectrum, and Zernike-type descriptors. Amp has an intuitive and modular structure with an interface through the python scripting language yet has parallelizable fortran components for demanding tasks; it is designed to integrate closely with the widely used Atomic Simulation Environment (ASE), which makes it compatible with a wide variety of commercial and open-source electronic structure codes. We finally demonstrate that the neural network model inside Amp can accurately interpolate electronic structure energies as well as forces of thousands of multi-species atomic systems.
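
    A minimal usage sketch in the spirit of the paper; the calls follow the Amp and ASE documentation as commonly published, but exact signatures should be checked against the installed versions, and "training.traj" is a placeholder trajectory of DFT energies and forces:

    ```python
    # Train a Gaussian-descriptor neural-network potential with Amp and
    # use it as an ASE calculator. Sketch only; verify against the
    # installed Amp/ASE versions.

    from ase.io import read
    from amp import Amp
    from amp.descriptor.gaussian import Gaussian
    from amp.model.neuralnetwork import NeuralNetwork

    images = read("training.traj", index=":")    # parent DFT data (ASE)
    calc = Amp(descriptor=Gaussian(),            # atom-centered descriptors
               model=NeuralNetwork(hiddenlayers=(10, 10)))
    calc.train(images=images)                    # fit energies and forces

    # The trained potential then acts as a drop-in ASE calculator:
    atoms = images[0].copy()
    atoms.calc = calc
    print(atoms.get_potential_energy())
    ```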

  1. Review, Selection and Installation of a Rapid Prototype Machine

    NASA Technical Reports Server (NTRS)

    McEndree, Caryl

    2008-01-01

    The objective of this paper is to impress upon the reader the benefits and advantages of investing in rapid prototyping (additive manufacturing) technology through the procurement of one or two new rapid prototyping machines and the creation of a new Prototype and Model Lab at the Kennedy Space Center (KSC). This new resource will be available to all of United Space Alliance, LLC (USA), enabling engineers from around the company to pursue a more effective means of communication and design with our co-workers, and our customer, the National Aeronautics and Space Administration (NASA). The Rapid Prototyping/3D printing industry mirrors the transition the CAD industry made several years ago, when companies were trying to justify the expenditure of converting to a 3D based system from a 2D based system. The advantages of using a 3D system seemed to be outweighed by the cost it would take to convert not only legacy 2D drawings into 3D models but also to train personnel to use the 3D CAD software. But the reality was that when a 3D CAD system is employed, it gives engineers a much greater ability to conceive new designs and to engineer new tools and products much more effectively. Rapid Prototyping (RP) is the name given to a host of related technologies that are used to fabricate physical objects directly from Computer Aided Design (CAD) data sources. These methods are generally similar to each other in that they add and bond materials in a layer-wise fashion to form objects, instead of machining away material. The machines used in Rapid Prototyping are also sometimes referred to as Rapid Manufacturing machines because some of the parts fabricated in an RP machine can be used as the finished product. The name "Rapid Prototyping" is really a misnomer: it is much more than prototypes, and it is not always rapid.

  2. The use of holographic and diffractive optics for optimized machine vision illumination for critical dimension inspection

    NASA Astrophysics Data System (ADS)

    Lizotte, Todd E.; Ohar, Orest

    2004-02-01

    Illuminators used in machine vision applications typically produce non-uniform illumination of the targeted surface being observed, causing a variety of problems with machine vision alignment or measurement. In most circumstances the light source is broad spectrum, leading to further problems with image quality when viewed through a CCD camera. Configured with a simple light bulb and a mirrored reflector and/or frosted glass plates, these general illuminators are appropriate for only macro applications. Over the last five years newer illuminators have hit the market, including circular or rectangular arrays of high-intensity light-emitting diodes. These diode arrays are used to create monochromatic flood illumination of a surface to be inspected. The problem with these illumination techniques is that most of the light does not illuminate the desired areas but spreads broadly across the surface, or, when integrated with diffuser elements, creates shadowing effects similar to broad-spectrum light sources. In many cases a user will try to increase the performance of these illuminators by combining several of the assemblies to increase intensity, or by moving the illumination source closer to or farther from the surface being inspected. In such cases these non-uniform techniques can lead to machine vision errors, where the computer machine vision may read false information, such as interpreting non-uniform lighting or shadowing effects as defects. This paper covers a technique involving the use of holographic/diffractive hybrid optical elements that are integrated into standard and customized light sources used in the machine vision industry. The bulk of the paper describes the function and fabrication of the holographic/diffractive optics and how they can be tailored to improve illuminator design. Further, a specific design will be described and examples of it in operation will be disclosed.

  3. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    NASA Astrophysics Data System (ADS)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-10-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare-metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup, no static partitioning of the cluster into a physical and virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept also applicable for other cluster operators. This contribution will report on the concept and implementation of an OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, will be described.

  4. 19 CFR 152.106 - Computed value.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... will not be added to the other elements as it is not intended that any component of computed value be... 19 Customs Duties 2 2010-04-01 2010-04-01 false Computed value. 152.106 Section 152.106 Customs... (CONTINUED) CLASSIFICATION AND APPRAISEMENT OF MERCHANDISE Valuation of Merchandise § 152.106 Computed value...

  5. 19 CFR 152.106 - Computed value.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... will not be added to the other elements as it is not intended that any component of computed value be... 19 Customs Duties 2 2013-04-01 2013-04-01 false Computed value. 152.106 Section 152.106 Customs... (CONTINUED) CLASSIFICATION AND APPRAISEMENT OF MERCHANDISE Valuation of Merchandise § 152.106 Computed value...

  6. 19 CFR 152.106 - Computed value.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... will not be added to the other elements as it is not intended that any component of computed value be... 19 Customs Duties 2 2012-04-01 2012-04-01 false Computed value. 152.106 Section 152.106 Customs... (CONTINUED) CLASSIFICATION AND APPRAISEMENT OF MERCHANDISE Valuation of Merchandise § 152.106 Computed value...

  7. 19 CFR 152.106 - Computed value.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... will not be added to the other elements as it is not intended that any component of computed value be... 19 Customs Duties 2 2014-04-01 2014-04-01 false Computed value. 152.106 Section 152.106 Customs... (CONTINUED) CLASSIFICATION AND APPRAISEMENT OF MERCHANDISE Valuation of Merchandise § 152.106 Computed value...

  8. 19 CFR 152.106 - Computed value.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... will not be added to the other elements as it is not intended that any component of computed value be... 19 Customs Duties 2 2011-04-01 2011-04-01 false Computed value. 152.106 Section 152.106 Customs... (CONTINUED) CLASSIFICATION AND APPRAISEMENT OF MERCHANDISE Valuation of Merchandise § 152.106 Computed value...

  9. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 3 2014-01-01 2014-01-01 false Computer services for customers of subsidiary... (REGULATION Y) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for.... (b) The Board understood from the facts presented that the service company owns a computer which it...

  10. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 3 2013-01-01 2013-01-01 false Computer services for customers of subsidiary... (REGULATION Y) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for.... (b) The Board understood from the facts presented that the service company owns a computer which it...

  11. 19 CFR 141.88 - Computed value.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 2 2010-04-01 2010-04-01 false Computed value. 141.88 Section 141.88 Customs... (CONTINUED) ENTRY OF MERCHANDISE Invoices § 141.88 Computed value. When the port director determines that information as to computed value is necessary in the appraisement of any class or kind of merchandise, he...

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vanchurin, Vitaly, E-mail: vvanchur@d.umn.edu

    We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for the analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape, and CM machines take a CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO Turing numbers as input, but output one if the CO machines are in the same equivalence class and zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine that discriminates between two classes of CO machines: mortal machines, which halt in finite time, and immortal machines, which run forever. In the context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.

  13. Compact cold stage for micro-computerized tomography imaging of chilled or frozen samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hullar, Ted; Anastasio, Cort, E-mail: canastasio@ucdavis.edu; Paige, David F.

    2014-04-15

    High resolution X-ray microCT (computerized tomography) can be used to image a variety of objects, including temperature-sensitive materials. In cases where the sample must be chilled or frozen to maintain sample integrity, either the microCT machine itself must be placed in a refrigerated chamber, or a relatively expensive commercial cold stage must be purchased. We describe here the design and construction of a low-cost custom cold stage suitable for use in a microCT imaging system. Our device uses a boron nitride sample holder, two-stage Peltier cooler, fan-cooled heat sink, and electronic controller to maintain sample temperatures as low as −25 °C ± 0.2 °C for the duration of a tomography acquisition. The design does not require modification to the microCT machine, and is easily installed and removed. Our custom cold stage represents a cost-effective solution for refrigerating CT samples for imaging, and is especially useful for shared equipment or machines unsuitable for cold room use.
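
    As a gloss on the electronic controller mentioned above, the sketch below shows the kind of hysteresis (bang-bang) loop a simple stage controller might run. The setpoint matches the reported −25 °C ± 0.2 °C band; the sensor and driver functions are hypothetical placeholders, and a real design would more likely use PID control.

    ```python
    import time

    SETPOINT_C = -25.0   # target stage temperature
    HYSTERESIS = 0.2     # matches the ±0.2 °C band reported above

    def read_stage_temp_c() -> float:
        """Hypothetical sensor read (e.g., thermistor via an ADC)."""
        ...

    def set_peltier_duty(duty: float) -> None:
        """Hypothetical driver: 0.0 = off, 1.0 = full cooling."""
        ...

    def control_loop() -> None:
        # Simple bang-bang control with hysteresis around the setpoint.
        cooling = True
        while True:
            t = read_stage_temp_c()
            if t <= SETPOINT_C - HYSTERESIS:
                cooling = False
            elif t >= SETPOINT_C + HYSTERESIS:
                cooling = True
            set_peltier_duty(1.0 if cooling else 0.0)
            time.sleep(0.5)  # control period

    # control_loop() would run continuously on the embedded controller.
    ```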

  14. Customer Churn Prediction for Broadband Internet Services

    NASA Astrophysics Data System (ADS)

    Huang, B. Q.; Kechadi, M.-T.; Buckley, B.

    Although churn prediction has been an active area of research in the voice branch of telecommunications services, focused studies on the fast-growing area of Broadband Internet services remain limited. This paper therefore presents a new set of features for broadband Internet customer churn prediction, based on Henley segments, broadband usage, dial types, dial-up spend, line information, and bill, payment, and account information. Four prediction techniques (Logistic Regression, Decision Trees, Multilayer Perceptron Neural Networks and Support Vector Machines) are then applied to customer churn prediction using the new features. Finally, the new features are evaluated and the predictors compared for broadband customer churn prediction. The experimental results show that the new features combined with these four modelling techniques are effective for customer churn prediction in the broadband service field.
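
    The four classifier families named above are all available in standard libraries. A minimal comparison sketch using scikit-learn follows; the synthetic feature table is an illustrative stand-in, not the paper's Henley-segment feature set.

    ```python
    # Sketch: comparing the four classifier families on a churn table.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 8))   # stand-in for usage/billing features
    y = (X[:, 0] + X[:, 3] + rng.normal(size=1000) > 0).astype(int)  # churned?

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "LogisticRegression": LogisticRegression(max_iter=1000),
        "DecisionTree": DecisionTreeClassifier(max_depth=5),
        "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
        "SVM": SVC(probability=True),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name:20s} AUC = {auc:.3f}")
    ```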

  15. [Evaluation of production and clinical working time of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays for complete denture].

    PubMed

    Wei, L; Chen, H; Zhou, Y S; Sun, Y C; Pan, S X

    2017-02-18

    To compare the technician fabrication time and clinical working time of custom trays fabricated using two different methods, three-dimensional printed custom trays and conventional custom trays, and to demonstrate the feasibility of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays in clinical use from the perspective of clinical time cost. Twenty edentulous patients were recruited into this prospective, single-blind, randomized, self-controlled clinical trial. Two custom trays were fabricated for each participant. One custom tray was fabricated with the functional suitable denture (FSD) system through a CAD/CAM process, and the other was manually fabricated using conventional methods. Final impressions were then taken with both custom trays, and the final impressions were used to fabricate complete dentures respectively. The technician production time of the custom trays and the clinical working time of taking the final impression were recorded. The average times spent on fabricating the three-dimensional printed custom trays with the FSD system and fabricating the conventional custom trays manually were (28.6±2.9) min and (31.1±5.7) min, respectively. The average times spent on making the final impression with the three-dimensional printed custom trays and with the conventional custom trays were (23.4±11.5) min and (25.4±13.0) min, respectively. There was a significant difference in both the technician fabrication time and the clinical working time between the three-dimensional printed custom trays and the manually fabricated conventional custom trays (P<0.05). The average times spent on fabricating the three-dimensional printed custom trays with the FSD system and on making the final impression with them are less than those of the conventional custom trays fabricated manually, which shows that the FSD three-dimensional printed custom trays are less time-consuming in both the clinical and laboratory process. In addition, when custom trays are manufactured by three-dimensional printing, there is no need to pour a preliminary cast after taking the primary impression, which saves impression and model material. For complete denture restoration, manufacturing custom trays with the FSD system is worth popularizing.
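
    Because the design above pairs two measurements per patient (one tray of each type), a paired test is the natural analysis for the reported P<0.05 comparisons. A minimal sketch follows; the simulated times are drawn around the reported means and are not the study data.

    ```python
    # Sketch: paired comparison of fabrication times, per patient.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_patients = 20
    time_cadcam = rng.normal(28.6, 2.9, n_patients)   # min, FSD trays
    time_manual = rng.normal(31.1, 5.7, n_patients)   # min, manual trays

    t_stat, p_value = stats.ttest_rel(time_cadcam, time_manual)
    print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
    # p < 0.05 would indicate a significant difference between methods,
    # as the study reports for both fabrication and impression time.
    ```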

  16. A study of workstation computational performance for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey M.; Cleveland, Jeff I., II

    1995-01-01

    With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.

  17. Utilization of thermoluminescent dosimetry in total skin electron beam radiotherapy of mycosis fungoides.

    PubMed

    Antolak, J A; Cundiff, J H; Ha, C S

    1998-01-01

    The purpose of this report is to discuss the utilization of thermoluminescent dosimetry (TLD) in total skin electron beam (TSEB) radiotherapy to: (a) compare patient dose distributions for similar techniques on different machines, (b) confirm beam calibration and monitor unit calculations, (c) provide data for making clinical decisions, and (d) study reasons for variations in individual dose readings. We report dosimetric results for 72 cases of mycosis fungoides, using similar irradiation techniques on two different linear accelerators. All patients were treated using a modified Stanford 6-field technique. In vivo TLD was done on all patients, and the data for all patients treated on both machines was collected into a database for analysis. Means and standard deviations (SDs) were computed for all locations. Scatter plots of doses vs. height, weight, and obesity index were generated, and correlation coefficients with these variables were computed. The TLD results show that our current TSEB implementation is dosimetrically equivalent to the previous implementation, and that our beam calibration technique and monitor unit calculation is accurate. Correlations with obesity index were significant at several sites. Individual TLD results allow us to customize the boost treatment for each patient, in addition to revealing patient positioning problems and/or systematic variations in dose caused by patient variability. The data agree well with previously published TLD results for similar TSEB techniques. TLD is an important part of the treatment planning and quality assurance programs for TSEB, and routine use of TLD measurements for TSEB is recommended.
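
    The statistics described above (per-site means and SDs, plus correlations of dose with patient variables) are straightforward to compute; a minimal sketch follows, with illustrative arrays in place of the study's TLD readings.

    ```python
    # Sketch: per-site dose statistics and correlation with a patient variable.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    dose = rng.normal(1.0, 0.05, 72)            # normalized TLD dose readings
    obesity_index = rng.normal(25.0, 4.0, 72)   # e.g., a BMI-like index

    print(f"mean = {dose.mean():.3f}, SD = {dose.std(ddof=1):.3f}")
    r, p = stats.pearsonr(dose, obesity_index)
    print(f"Pearson r = {r:.2f} (p = {p:.3f})")
    ```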

  18. Computational Wear Simulation of Patellofemoral Articular Cartilage during In Vitro Testing

    PubMed Central

    Li, Lingmin; Patil, Shantanu; Steklov, Nick; Bae, Won; Temple-Wong, Michele; D'Lima, Darryl D.; Sah, Robert L.; Fregly, Benjamin J.

    2011-01-01

    Though changes in normal joint motions and loads (e.g., following anterior cruciate ligament injury) contribute to the development of knee osteoarthritis, the precise mechanism by which these changes induce osteoarthritis remains unknown. As a first step toward identifying this mechanism, this study evaluates computational wear simulations of a patellofemoral joint specimen wear tested on a knee simulator machine. A multi-body dynamic model of the specimen mounted in the simulator machine was constructed in commercial computer-aided engineering software. A custom elastic foundation contact model was used to calculate contact pressures and wear on the femoral and patellar articular surfaces using geometry created from laser scan and MR data. Two different wear simulation approaches were investigated – one that wore the surface geometries gradually over a sequence of 10 one-cycle dynamic simulations (termed the “progressive” approach), and one that wore the surface geometries abruptly using results from a single one-cycle dynamic simulation (termed the “non-progressive” approach). The progressive approach with laser scan geometry reproduced the experimentally measured wear depths and areas for both the femur and patella. The less costly non-progressive approach predicted deeper wear depths, especially on the patella, but had little influence on predicted wear areas. Use of MR data for creating the articular and subchondral bone geometry altered wear depth and area predictions by at most 13%. These results suggest that MR-derived geometry may be sufficient for simulating articular cartilage wear in vivo and that a progressive simulation approach may be needed for the patella and tibia since both remain in continuous contact with the femur. PMID:21453922

  19. Computational wear simulation of patellofemoral articular cartilage during in vitro testing.

    PubMed

    Li, Lingmin; Patil, Shantanu; Steklov, Nick; Bae, Won; Temple-Wong, Michele; D'Lima, Darryl D; Sah, Robert L; Fregly, Benjamin J

    2011-05-17

    Though changes in normal joint motions and loads (e.g., following anterior cruciate ligament injury) contribute to the development of knee osteoarthritis, the precise mechanism by which these changes induce osteoarthritis remains unknown. As a first step toward identifying this mechanism, this study evaluates computational wear simulations of a patellofemoral joint specimen wear tested on a knee simulator machine. A multibody dynamic model of the specimen mounted in the simulator machine was constructed in commercial computer-aided engineering software. A custom elastic foundation contact model was used to calculate contact pressures and wear on the femoral and patellar articular surfaces using geometry created from laser scan and MR data. Two different wear simulation approaches were investigated--one that wore the surface geometries gradually over a sequence of 10 one-cycle dynamic simulations (termed the "progressive" approach), and one that wore the surface geometries abruptly using results from a single one-cycle dynamic simulation (termed the "non-progressive" approach). The progressive approach with laser scan geometry reproduced the experimentally measured wear depths and areas for both the femur and patella. The less costly non-progressive approach predicted deeper wear depths, especially on the patella, but had little influence on predicted wear areas. Use of MR data for creating the articular and subchondral bone geometry altered wear depth and area predictions by at most 13%. These results suggest that MR-derived geometry may be sufficient for simulating articular cartilage wear in vivo and that a progressive simulation approach may be needed for the patella and tibia since both remain in continuous contact with the femur. Copyright © 2011 Elsevier Ltd. All rights reserved.
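
    The "progressive" approach described in both records re-solves contact and updates the surface after each simulated cycle. The sketch below illustrates that idea with a generic Archard-type wear law (depth change proportional to contact pressure times sliding distance); the wear coefficient, pressure model, and arrays are illustrative placeholders, not the paper's elastic foundation model.

    ```python
    import numpy as np

    K_WEAR = 1e-9      # illustrative wear coefficient
    N_CYCLES = 10      # the progressive approach used 10 one-cycle steps

    def contact_pressure(geometry: np.ndarray) -> np.ndarray:
        """Stand-in for the elastic foundation contact solve (hypothetical)."""
        return np.maximum(0.0, 10.0 - geometry)   # toy pressure field (MPa)

    def wear_step(geometry: np.ndarray, sliding_distance: float) -> np.ndarray:
        # Archard-type update: depth change ~ pressure * sliding distance.
        p = contact_pressure(geometry)
        return geometry + K_WEAR * p * sliding_distance

    surface = np.zeros(100)           # toy 1-D "wear depth" field
    for cycle in range(N_CYCLES):     # progressive: re-solve contact each cycle
        surface = wear_step(surface, sliding_distance=1e6)

    # The non-progressive variant would instead apply one contact solve
    # and scale its wear increment by N_CYCLES in a single step.
    print(surface.max())
    ```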

  20. Development of a Multi-Centre Clinical Trial Data Archiving and Analysis Platform for Functional Imaging

    NASA Astrophysics Data System (ADS)

    Driscoll, Brandon; Jaffray, David; Coolens, Catherine

    2014-03-01

    Purpose: To provide clinicians & researchers participating in multi-centre clinical trials with a central repository for large volume dynamic imaging data as well as a set of tools for providing end-to-end testing and image analysis standards of practice. Methods: There are three main pieces to the data archiving and analysis system; the PACS server, the data analysis computer(s) and the high-speed networks that connect them. Each clinical trial is anonymized using a customizable anonymizer and is stored on a PACS only accessible by AE title access control. The remote analysis station consists of a single virtual machine per trial running on a powerful PC supporting multiple simultaneous instances. Imaging data management and analysis is performed within ClearCanvas Workstation® using custom designed plug-ins for kinetic modelling (The DCE-Tool®), quality assurance (The DCE-QA Tool) and RECIST. Results: A framework has been set up currently serving seven clinical trials spanning five hospitals with three more trials to be added over the next six months. After initial rapid image transfer (+ 2 MB/s), all data analysis is done server side making it robust and rapid. This has provided the ability to perform computationally expensive operations such as voxel-wise kinetic modelling on very large data archives (+20 GB/50k images/patient) remotely with minimal end-user hardware. Conclusions: This system is currently in its proof of concept stage but has been used successfully to send and analyze data from remote hospitals. Next steps will involve scaling up the system with a more powerful PACS and multiple high powered analysis machines as well as adding real-time review capabilities.

  1. Machining of Silicon-Ribbon-Forming Dies

    NASA Technical Reports Server (NTRS)

    Menna, A. A.

    1985-01-01

    Carbon extension for dies used in forming silicon ribbon crystals machined precisely with help of special tool. Die extension has edges beveled toward narrow flats at top, with slot precisely oriented and centered between flats and bevels. Cutting tool assembled from standard angle cutter and circular saw or saws. Angle cutter cuts bevels while slot saw cuts slot between them. In alternative version, custom-ground edges or additional circular saws also cut flats simultaneously.

  2. Age synthesis and estimation via faces: a survey.

    PubMed

    Fu, Yun; Guo, Guodong; Huang, Thomas S

    2010-11-01

    Human age, as an important personal trait, can be directly inferred from distinct patterns emerging in the facial appearance. Driven by rapid advances in computer graphics and machine vision, computer-based age synthesis and estimation via faces have become particularly prevalent topics recently because of their explosively emerging real-world applications, such as forensic art, electronic customer relationship management, security control and surveillance monitoring, biometrics, entertainment, and cosmetology. Age synthesis is defined as rerendering a face image aesthetically with natural aging and rejuvenating effects on the individual face. Age estimation is defined as labeling a face image automatically with the exact age (year) or the age group (year range) of the individual face. Because of their particularity and complexity, both problems are attractive yet challenging to computer-based application system designers. Large efforts from both academia and industry have been devoted to them over the last few decades. In this paper, we survey the complete state-of-the-art techniques in face image-based age synthesis and estimation. Existing models, popular algorithms, system performances, technical difficulties, popular face aging databases, evaluation protocols, and promising future directions are also provided with systematic discussions.

  3. A SUGGESTED CURRICULUM GUIDE FOR ELECTRO-MECHANICAL TECHNOLOGY ORIENTED SPECIFICALLY TO THE COMPUTER AND BUSINESS MACHINE FIELDS. INTERIM REPORT.

    ERIC Educational Resources Information Center

    LESCARBEAU, ROLAND F.; AND OTHERS

    A SUGGESTED POST-SECONDARY CURRICULUM GUIDE FOR ELECTRO-MECHANICAL TECHNOLOGY ORIENTED SPECIFICALLY TO THE COMPUTER AND BUSINESS MACHINE FIELDS WAS DEVELOPED BY A GROUP OF COOPERATING INSTITUTIONS, NOW INCORPORATED AS TECHNICAL EDUCATION CONSORTIUM, INCORPORATED. SPECIFIC NEEDS OF THE COMPUTER AND BUSINESS MACHINE INDUSTRY WERE DETERMINED FROM…

  4. Creating an Electronic Reference and Information Database for Computer-aided ECM Design

    NASA Astrophysics Data System (ADS)

    Nekhoroshev, M. V.; Pronichev, N. D.; Smirnov, G. V.

    2018-01-01

    The paper presents a review on electrochemical shaping. An algorithm has been developed to implement a computer shaping model applicable to pulse electrochemical machining. For that purpose, the characteristics of pulse current occurring in electrochemical machining of aviation materials have been studied. Based on integrating the experimental results and comprehensive electrochemical machining process data modeling, a subsystem for computer-aided design of electrochemical machining for gas turbine engine blades has been developed; the subsystem was implemented in the Teamcenter PLM system.

  5. Absorption of language concepts in the machine mind

    NASA Astrophysics Data System (ADS)

    Kollár, Ján

    2016-06-01

    In our approach, the machine mind is an applicative dynamic system represented by its algorithmically evolvable internal language. In other words, the mind and the language of mind are synonyms. Starting from Shaumyan's semiotic theory of languages, we present the representation of language concepts in the machine mind as a result of our experiment, to show the non-redundancy of the language of mind. To provide a useful restriction for further research, we also introduce the hypothesis of semantic saturation in Computer-Computer communication, which indicates that a set of machines is not self-evolvable. The goal of our research is to increase the abstraction of Human-Computer and Computer-Computer communication. If we want humans and machines to communicate as a parent does with a child, using different symbols and media, we must find a language of mind commonly usable by both machines and humans. In our opinion, there exists a kind of calm language of thinking, which we try to propose for machines in this paper. We separate the layers of a machine mind, present the structure of the evolved mind, and discuss selected properties. We concentrate on the representation of symbolized concepts in the mind, which are languages, not just grammars, since they have meaning.

  6. 47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...

  7. 47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...

  8. 47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...

  9. 47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...

  10. 47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...

  11. FINISHED CASTINGS ARE ONLY GROUND BEFORE THEY ARE SHIPPED TO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    FINISHED CASTINGS ARE ONLY GROUND BEFORE THEY ARE SHIPPED TO CUSTOMERS WHO COMPLETE THE FINISHING IN THEIR OWN MACHINE SHOPS. - Southern Ductile Casting Company, Grinding & Shipping, 2217 Carolina Avenue, Bessemer, Jefferson County, AL

  12. Physarum machines: encapsulating reaction-diffusion to compute spanning tree

    NASA Astrophysics Data System (ADS)

    Adamatzky, Andrew

    2007-12-01

    The Physarum machine is a biological computing device, which employs the plasmodium of Physarum polycephalum as an unconventional computing substrate. A reaction-diffusion computer is a chemical computing device that computes by propagating diffusive or excitation wave fronts. Reaction-diffusion computers, despite being computationally universal machines, are unable to construct certain classes of proximity graphs without the assistance of an external computing device. I demonstrate that the problem can be solved if the reaction-diffusion system is enclosed in a membrane with a few ‘growth points’, sites guiding the pattern propagation. Experimental approximation of spanning trees by the P. polycephalum slime mold demonstrates the feasibility of the approach. These findings advance the theory of reaction-diffusion computation by enriching it with ideas from slime mold computation.

  13. Perspex machine: V. Compilation of C programs

    NASA Astrophysics Data System (ADS)

    Spanner, Matthew P.; Anderson, James A. D. W.

    2006-01-01

    The perspex machine arose from the unification of the Turing machine with projective geometry. The original, constructive proof used four special, perspective transformations to implement the Turing machine in projective geometry. These four transformations are now generalised and applied in a compiler, implemented in Pop11, that converts a subset of the C programming language into perspexes. This is interesting both from a geometrical and a computational point of view. Geometrically, it is interesting that program source can be converted automatically to a sequence of perspective transformations and conditional jumps, though we find that the product of homogeneous transformations with normalisation can be non-associative. Computationally, it is interesting that program source can be compiled for a Reduced Instruction Set Computer (RISC), the perspex machine, that is a Single Instruction, Zero Exception (SIZE) computer.

  14. Use of prefabricated titanium abutments and customized anatomic lithium disilicate structures for cement-retained implant restorations in the esthetic zone.

    PubMed

    Lin, Wei-Shao; Harris, Bryan T; Zandinejad, Amirali; Martin, William C; Morton, Dean

    2014-03-01

    This report describes the fabrication of customized abutments consisting of prefabricated 2-piece titanium abutments and customized anatomic lithium disilicate structures for cement-retained implant restorations in the esthetic zone. The heat-pressed lithium disilicate provides esthetic customized anatomic structures and crowns independently of the computer-aided design and computer-aided manufacturing process. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  15. Ten quick tips for machine learning in computational biology.

    PubMed

    Chicco, Davide

    2017-01-01

    Machine learning has become a pivotal tool for many projects in computational biology, bioinformatics, and health informatics. Nevertheless, beginners and biomedical researchers often do not have enough experience to run a data mining project effectively, and therefore can follow incorrect practices that may lead to common mistakes or over-optimistic results. With this review, we present ten quick tips for taking advantage of machine learning in any computational biology context, by avoiding some common errors that we observed hundreds of times in multiple bioinformatics projects. We believe our ten suggestions can strongly help any machine learning practitioner to carry out a successful project in computational biology and related sciences.

  16. Incorporating conditional random fields and active learning to improve sentiment identification.

    PubMed

    Zhang, Kunpeng; Xie, Yusheng; Yang, Yi; Sun, Aaron; Liu, Hengchang; Choudhary, Alok

    2014-10-01

    Many machine learning, statistical, and computational linguistic methods have been developed to identify the sentiment of sentences in documents, yielding promising results. However, most state-of-the-art methods focus on individual sentences and ignore the impact of context on the meaning of a sentence. In this paper, we propose a method based on conditional random fields to incorporate sentence structure and context information, in addition to syntactic information, for improving sentiment identification. We also investigate how human interaction affects the accuracy of sentiment labeling using limited training data. We propose and evaluate two different active learning strategies for labeling sentiment data. Our experiments with the proposed approach demonstrate a 5%-15% improvement in accuracy on Amazon customer reviews compared to existing supervised learning and rule-based methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
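
    Pool-based uncertainty sampling is one of the simplest active learning strategies of the kind evaluated above; the paper's own two strategies are not reproduced here. A minimal sketch under those assumptions:

    ```python
    # Sketch: active learning by uncertainty sampling on a labeled pool.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    X_pool = rng.normal(size=(2000, 10))
    y_pool = (X_pool[:, 0] > 0).astype(int)   # hidden labels ("oracle")

    # Seed with a few labeled examples from each class.
    labeled = list(np.where(y_pool == 1)[0][:5]) + list(np.where(y_pool == 0)[0][:5])
    unlabeled = [i for i in range(len(X_pool)) if i not in set(labeled)]

    model = LogisticRegression(max_iter=1000)
    for _ in range(20):                        # 20 labeling rounds
        model.fit(X_pool[labeled], y_pool[labeled])
        proba = model.predict_proba(X_pool[unlabeled])[:, 1]
        # Uncertainty sampling: query the instance closest to p = 0.5.
        idx = int(np.argmin(np.abs(proba - 0.5)))
        labeled.append(unlabeled.pop(idx))

    print(f"labeled {len(labeled)} examples, "
          f"pool accuracy = {model.score(X_pool, y_pool):.3f}")
    ```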

  17. Digital Low Level RF Systems for Fermilab Main Ring and Tevatron

    NASA Astrophysics Data System (ADS)

    Chase, B.; Barnes, B.; Meisner, K.

    1997-05-01

    At Fermilab, a new Low Level RF system is successfully installed and operating in the Main Ring. Installation is proceeding for a Tevatron system. This upgrade replaces aging CAMAC/NIM components for an increase in accuracy, reliability, and flexibility. These VXI systems are based on a custom three channel direct digital synthesizer(DDS) module. Each synthesizer channel is capable of independent or ganged operation for both frequency and phase modulation. New frequency and phase values are computed at a 100kHz rate on the module's Analog Devices ADSP21062 (SHARC) digital signal processor. The DSP concurrently handles feedforward, feedback, and beam manipulations. Higher level state machines and the control system interface are handled at the crate level using the VxWorks operating system. This paper discusses the hardware, software and operational aspects of these LLRF systems.
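
    A direct digital synthesizer of the kind described above is, at its core, a phase accumulator: each update adds a frequency tuning word to an N-bit phase register, and the phase indexes a sine output. A minimal numerical sketch follows; the accumulator width and frequencies are illustrative, not the module's actual design.

    ```python
    import numpy as np

    ACC_BITS = 32
    F_UPDATE = 100e3   # matches the 100 kHz update rate noted above

    def tuning_word(f_out: float) -> int:
        # f_out = FTW * F_UPDATE / 2**ACC_BITS
        return int(round(f_out * 2**ACC_BITS / F_UPDATE))

    def dds_samples(f_out: float, n: int, phase_offset: int = 0) -> np.ndarray:
        ftw = tuning_word(f_out)
        acc = phase_offset
        out = np.empty(n)
        for i in range(n):
            out[i] = np.sin(2 * np.pi * acc / 2**ACC_BITS)
            acc = (acc + ftw) & (2**ACC_BITS - 1)   # wrap at 2**32
        return out

    samples = dds_samples(f_out=1e3, n=100)   # 1 kHz tone, 100 kHz updates
    print(samples[:5])
    ```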

  18. Cellular computational generalized neuron network for frequency situational intelligence in a multi-machine power system.

    PubMed

    Wei, Yawei; Venayagamoorthy, Ganesh Kumar

    2017-09-01

    To prevent a large interconnected power system from a cascading failure, brownout, or even blackout, grid operators require access to faster-than-real-time information to make appropriate just-in-time control decisions. However, the communication and computational limitations of the currently used supervisory control and data acquisition (SCADA) system mean it can only deliver delayed information. The deployment of synchrophasor measurement devices, by contrast, makes it possible to capture and visualize, in near-real-time, grid operational data with extra granularity. In this paper, a cellular computational network (CCN) approach for frequency situational intelligence (FSI) in a power system is presented. The distributed and scalable computing unit of the CCN framework makes it particularly flexible for customization for a particular set of prediction requirements. Two soft-computing algorithms have been implemented in the CCN framework: a cellular generalized neuron network (CCGNN) and a cellular multi-layer perceptron network (CCMLPN), for purposes of providing multi-timescale frequency predictions, ranging from 16.67 ms to 2 s. These two CCGNN and CCMLPN systems were then implemented on two different scales of power systems, one of which includes a large photovoltaic plant. A real-time power system simulator and weather station within the Real-Time Power and Intelligent Systems (RTPIS) laboratory at Clemson, SC, were then used to derive typical FSI results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Generation of Custom DSP Transform IP Cores: Case Study Walsh-Hadamard Transform

    DTIC Science & Technology

    2002-09-01

    [Slide-deck extraction residue: the presentation contrasts a hardware designer's background (finite state machines, pipelining, systolic arrays) with a mathematician's (linear algebra, digital signal processing, adaptive filter theory), and lists synthesis parameters such as bit-width, HF/VF factors, and Xilinx FPGA place-and-route performance.]

  20. Tempest gas turbine extends EGT product line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chellini, R.

    With the introduction of the 7.8 MW (mechanical output) Tempest gas turbine, EGT has extended the company's line of small industrial turbines. The new Tempest machine, featuring a 7.5 MW electric output and a 33% thermal efficiency, ranks above the company's single-shaft Typhoon gas turbine, rated 3.2 and 4.9 MW, and the 6.3 MW Tornado gas turbine. All three machines are well-suited for use in combined heat and power (CHP) plants, as demonstrated by the fact that close to 50% of the 150 Typhoon units sold are for CHP applications. This experience has induced EGT, of Lincoln, England, to announce the introduction of the new gas turbine prior to completion of the testing program. The present single-shaft machine is expected to be used mainly for industrial cogeneration. This market segment, covering the needs of paper mills, hospitals, chemical plants, the ceramic industry, etc., is a typical local market. Cogeneration plants are engineered according to local needs and have to be assisted by local organizations. For this reason, to efficiently cover the world market, EGT has selected a number of associates that will receive completely engineered machine packages from Lincoln and will engineer the cogeneration system according to custom requirements. These partners will also assist the customer and keep the spares required for maintenance operations in local stock.

  1. INFIBRA: machine vision inspection of acrylic fiber production

    NASA Astrophysics Data System (ADS)

    Davies, Roger; Correia, Bento A. B.; Contreiras, Jose; Carvalho, Fernando D.

    1998-10-01

    This paper describes the implementation of INFIBRA, a machine vision system for the inspection of acrylic fiber production lines. The system was developed by INETI under a contract from Fisipe, Fibras Sinteticas de Portugal, S.A. At Fisipe there are ten production lines in continuous operation, each approximately 40 m in length. A team of operators used to perform periodic manual visual inspection of each line in conditions of high ambient temperature and humidity. It is not surprising that failures in the manual inspection process occurred with some frequency, with consequences that ranged from reduced fiber quality to production stoppages. The INFIBRA system architecture is a specialization of a generic, modular machine vision architecture based on a network of Personal Computers (PCs), each equipped with a low cost frame grabber. Each production line has a dedicated PC that performs automatic inspection, using specially designed metrology algorithms, via four video cameras located at key positions on the line. The cameras are mounted inside custom-built, hermetically sealed water-cooled housings to protect them from the unfriendly environment. The ten PCs, one for each production line, communicate with a central PC via a standard Ethernet connection. The operator controls all aspects of the inspection process, from configuration through to handling alarms, via a simple graphical interface on the central PC. At any time the operator can also view on the central PC's screen the live image from any one of the 40 cameras employed by the system.

  2. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.

    PubMed

    Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E

    2012-03-19

    A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them.

  3. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    PubMed Central

    2012-01-01

    Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly customized versions from a shared code base. This shared community toolkit enables application specific analysis platforms on the cloud by minimizing the effort required to prepare and maintain them. PMID:22429538
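
    Provisioning an on-demand VM of the kind described in both records is a one-call operation with the AWS SDK. A hedged sketch follows: the AMI ID, instance type, and key pair are placeholders, not the actual Cloud BioLinux image identifiers.

    ```python
    # Sketch: launching an on-demand EC2 instance with boto3.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical Cloud BioLinux AMI
        InstanceType="m5.xlarge",
        MinCount=1,
        MaxCount=1,
        KeyName="my-keypair",              # assumes an existing key pair
    )
    print("launched:", instances[0].id)
    ```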

  4. Automated apparatus and method of generating native code for a stitching machine

    NASA Technical Reports Server (NTRS)

    Miller, Jeffrey L. (Inventor)

    2000-01-01

    A computer system automatically generates CNC code for a stitching machine. The computer determines the locations of a present stitching point and a next stitching point. If a constraint is not found between the present stitching point and the next stitching point, the computer generates code for making a stitch at the next stitching point. If a constraint is found, the computer generates code for changing a condition (e.g., direction) of the stitching machine's stitching head.
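
    The generation loop described above is simple to sketch: walk the stitch points in order, and emit a condition-change block before the stitch whenever a constraint lies between the present and next points. The code words and the constraint test below are illustrative placeholders, not the patent's actual CNC dialect.

    ```python
    from typing import Iterable, List, Tuple

    Point = Tuple[float, float]

    def constraint_between(a: Point, b: Point) -> bool:
        """Hypothetical check, e.g., a seam boundary crossing the segment."""
        return False

    def generate_stitch_code(points: Iterable[Point]) -> List[str]:
        code = []
        pts = list(points)
        for present, nxt in zip(pts, pts[1:]):
            if constraint_between(present, nxt):
                # Change a head condition (e.g., direction) before stitching.
                code.append("M50 ; reorient stitching head")
            code.append(f"G01 X{nxt[0]:.3f} Y{nxt[1]:.3f} ; stitch")
        return code

    for line in generate_stitch_code([(0, 0), (10, 0), (10, 5)]):
        print(line)
    ```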

  5. Measured impacts of high efficiency domestic clothes washers in a community

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomlinson, J.; Rizy, T.

    1998-07-01

    The US market for domestic clothes washers is currently dominated by conventional vertical-axis washers that typically require approximately 40 gallons of water for each wash load. Although the current market for high efficiency clothes washers that use much less water and energy is quite small, it is growing slowly as manufacturers make machines based on tumble action, horizontal-axis designs available and as information about the performance and benefits of such machines is developed and made available to consumers. To help build awareness of these benefits and to accelerate markets for high efficiency washers, the Department of Energy (DOE), under itsmore » ENERGY STAR{reg_sign} Program and in cooperation with a major manufacturers of high efficiency washers, conducted a field evaluation of high efficiency washers using Bern, Kansas as a test bed. Baseline washing machine performance data as well as consumer washing behavior were obtained from data collected on the existing machines of more than 100 participants in this instrumented study. Following a 2-month initial study period, all conventional machines were replaced by high efficiency, tumble-action washers, and the study continued for 3 months. Based on measured data from over 20,000 loads of laundry, the impact of the washer replacement on (1) individual customers` energy and water consumption, (2) customers` laundry habits and perceptions, and (3) the community`s water supply and waste water systems were determined. The study, its findings, and how information from the experiment was used to improve national awareness of high efficiency clothes washer benefits are described in this paper.« less

  6. Geometric modeling of space-optimal unit-cell-based tissue engineering scaffolds

    NASA Astrophysics Data System (ADS)

    Rajagopalan, Srinivasan; Lu, Lichun; Yaszemski, Michael J.; Robb, Richard A.

    2005-04-01

    Tissue engineering involves regenerating damaged or malfunctioning organs using cells, biomolecules, and synthetic or natural scaffolds. Based on their intended roles, scaffolds can be injected as space-fillers or be preformed and implanted to provide mechanical support. Preformed scaffolds are biomimetic "trellis-like" structures which, on implantation and integration, act as tissue/organ surrogates. Customized, computer controlled, and reproducible preformed scaffolds can be fabricated using Computer Aided Design (CAD) techniques and rapid prototyping devices. A curved, monolithic construct with minimal surface area constitutes an efficient substrate geometry that promotes cell attachment, migration and proliferation. However, current CAD approaches do not provide such a biomorphic construct. We address this critical issue by presenting one of the very first physical realizations of minimal surfaces towards the construction of efficient unit-cell based tissue engineering scaffolds. Mask programmability, and optimal packing density of triply periodic minimal surfaces are used to construct the optimal pore geometry. Budgeted polygonization, and progressive minimal surface refinement facilitate the machinability of these surfaces. The efficient stress distributions, as deduced from the Finite Element simulations, favor the use of these scaffolds for orthopedic applications.
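
    Triply periodic minimal surfaces of the kind invoked above have convenient implicit approximations; the Schwarz P surface, for instance, is commonly approximated by the level set cos x + cos y + cos z = 0. The sketch below samples one unit cell on a grid and extracts the zero level set as a mesh, assuming scikit-image is available for marching cubes.

    ```python
    import numpy as np
    from skimage import measure

    # Schwarz P approximation: f(x, y, z) = cos x + cos y + cos z = 0.
    n = 64
    x = np.linspace(0, 2 * np.pi, n)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    F = np.cos(X) + np.cos(Y) + np.cos(Z)

    # Extract the zero level set as a triangle mesh for CAD/prototyping.
    verts, faces, normals, values = measure.marching_cubes(F, level=0.0)
    print(f"unit cell mesh: {len(verts)} vertices, {len(faces)} triangles")
    ```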

  7. Effect of data truncation in an implementation of pixel clustering on a custom computing machine

    NASA Astrophysics Data System (ADS)

    Leeser, Miriam E.; Theiler, James P.; Estlick, Michael; Kitaryeva, Natalya V.; Szymanski, John J.

    2000-10-01

    We investigate the effect of truncating the precision of hyperspectral image data for the purpose of more efficiently segmenting the image using a variant of k-means clustering. We describe the implementation of the algorithm on field-programmable gate array (FPGA) hardware. Truncating the data to only a few bits per pixel in each spectral channel permits a more compact hardware design, enabling greater parallelism, and ultimately a more rapid execution. It also enables the storage of larger images in the onboard memory. In exchange for faster clustering, however, one trades off the quality of the produced segmentation. We find that the clustering algorithm can tolerate considerable data truncation with little degradation in cluster quality. This robustness to truncated data can be extended by computing the cluster centers to a few more bits of precision than the data. Since there are so many more pixels than centers, the more aggressive data truncation leads to significant gains in the number of pixels that can be stored in memory and processed in hardware concurrently.
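
    A software analogue of the scheme above is easy to sketch: quantize the pixels to a few bits, but keep the running cluster centers at higher precision. The quantization parameters and data below are illustrative, not the hardware's actual configuration.

    ```python
    # Sketch: k-means on pixels truncated to 3 bits per spectral channel.
    import numpy as np

    def truncate(data: np.ndarray, bits: int) -> np.ndarray:
        # Keep only the top `bits` of each 8-bit value.
        return (data.astype(np.uint16) >> (8 - bits)).astype(np.float64)

    rng = np.random.default_rng(4)
    pixels = rng.integers(0, 256, size=(10_000, 16))   # 16 spectral channels
    data = truncate(pixels, bits=3)

    k = 8
    centers = data[rng.choice(len(data), k, replace=False)].copy()
    for _ in range(20):
        # Assign each pixel to the nearest center (squared Euclidean).
        d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update centers at full float precision; hardware would round
        # these to a few more bits than the data, per the record above.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)

    print("cluster sizes:", np.bincount(labels, minlength=k))
    ```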

  8. Rapid prototyping--when virtual meets reality.

    PubMed

    Beguma, Zubeda; Chhedat, Pratik

    2014-01-01

    Rapid prototyping (RP) describes the customized production of solid models using 3D computer data. Over the past decade, advances in RP have continued to evolve, resulting in the development of new techniques that have been applied to the fabrication of various prostheses. RP fabrication technologies include stereolithography (SLA), fused deposition modeling (FDM), computer numerical controlled (CNC) milling, and, more recently, selective laser sintering (SLS). The applications of RP techniques for dentistry include wax pattern fabrication for dental prostheses, dental (facial) prostheses mold (shell) fabrication, and removable dental prostheses framework fabrication. In the past, a physical plastic shape of the removable partial denture (RPD) framework was produced using an RP machine, and then used as a sacrificial pattern. Yet with the advent of the selective laser melting (SLM) technique, RPD metal frameworks can be directly fabricated, thereby omitting the casting stage. This new approach can also generate the wax pattern for facial prostheses directly, thereby reducing labor-intensive laboratory procedures. Many people stand to benefit from these new RP techniques for producing various forms of dental prostheses, which in the near future could transform traditional prosthodontic practices.

  9. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    NASA Astrophysics Data System (ADS)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans

    2018-04-01

    Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations, or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors, and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
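
    The operation such an L1-norm circuit implements is block normalization of per-cell gradient histograms: group neighboring cells into blocks and divide each block vector by its L1 norm, which removes the overall illumination/contrast scale. A minimal numerical sketch with illustrative shapes:

    ```python
    import numpy as np

    def l1_normalize_blocks(cell_hists: np.ndarray,
                            block: int = 2, eps: float = 1e-5) -> np.ndarray:
        """cell_hists: (rows, cols, bins) per-cell gradient histograms.
        Groups block x block neighborhoods and divides each block vector
        by its L1 norm, yielding contrast-robust descriptors."""
        rows, cols, _ = cell_hists.shape
        out = []
        for r in range(rows - block + 1):
            for c in range(cols - block + 1):
                v = cell_hists[r:r + block, c:c + block].ravel()
                out.append(v / (np.abs(v).sum() + eps))   # L1 normalization
        return np.array(out)

    rng = np.random.default_rng(5)
    cells = rng.random((8, 8, 9))            # 8x8 cells, 9 orientation bins
    desc = l1_normalize_blocks(cells)
    print(desc.shape)                         # (49, 36) block descriptors
    ```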

  10. Image segmentation evaluation for very-large datasets

    NASA Astrophysics Data System (ADS)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes are achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.

  11. Visualization techniques for computer network defense

    NASA Astrophysics Data System (ADS)

    Beaver, Justin M.; Steed, Chad A.; Patton, Robert M.; Cui, Xiaohui; Schultz, Matthew

    2011-06-01

    Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.

  12. Specification and preliminary design of an array processor

    NASA Technical Reports Server (NTRS)

    Slotnick, D. L.; Graham, M. L.

    1975-01-01

    The design of a computer suited to the class of problems typified by the general circulation of the atmosphere was investigated. A fundamental goal was that the resulting machine should have roughly 100 times the computing capability of an IBM 360/95 computer. A second requirement was that the machine should be programmable in a higher level language similar to FORTRAN. Moreover, the new machine would have to be compatible with the IBM 360/95 since the IBM machine would continue to be used for pre- and post-processing. A third constraint was that the cost of the new machine was to be significantly less than that of other extant machines of similar computing capability, such as the ILLIAC IV and CDC STAR. A final constraint was that it should be feasible to fabricate a complete system and put it in operation by early 1978. Although these objectives were generally met, considerable work remains to be done on the routing system.

  13. Tug-Of-War Model for Two-Bandit Problem

    NASA Astrophysics Data System (ADS)

    Kim, Song-Ju; Aono, Masashi; Hara, Masahiko

    The amoeba of the true slime mold Physarum polycephalum shows high computational capabilities. In so-called amoeba-based computing, some computing tasks including combinatorial optimization are performed by the amoeba instead of a digital computer. We expect that there are problems living organisms are good at solving. The “multi-armed bandit problem” is one such problem. Consider a number of slot machines. Each of the machines has an arm which gives a player a reward with a certain probability when pulled. The problem is to determine the optimal strategy for maximizing the total reward sum after a certain number of trials. To maximize the total reward sum, it is necessary to judge correctly and quickly which machine has the highest reward probability. Therefore, the player should explore many machines to gather much knowledge on which machine is the best, but should not fail to exploit the reward from the known best machine. We consider that living organisms follow some efficient method to solve the problem.
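
    The explore/exploit trade-off described above is commonly illustrated with an epsilon-greedy baseline (the paper's tug-of-war model itself is not reproduced here). A minimal two-machine simulation:

    ```python
    # Sketch: epsilon-greedy play on a two-armed bandit.
    import numpy as np

    rng = np.random.default_rng(6)
    p = [0.4, 0.6]            # hidden reward probabilities of the two arms
    counts = np.zeros(2)
    values = np.zeros(2)      # running estimate of each arm's reward rate
    total = 0.0
    EPS = 0.1                 # exploration probability

    for t in range(1000):
        # Explore with probability EPS, otherwise exploit the best estimate.
        arm = rng.integers(2) if rng.random() < EPS else int(values.argmax())
        reward = float(rng.random() < p[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total += reward

    print(f"total reward = {total:.0f}, estimates = {values.round(2)}")
    ```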

  14. Finding and defining the natural automata acting in living plants: Toward the synthetic biology for robotics and informatics in vivo.

    PubMed

    Kawano, Tomonori; Bouteau, François; Mancuso, Stefano

    2012-11-01

    The automata theory is the mathematical study of abstract machines commonly studied in the theoretical computer science and highly interdisciplinary fields that combine the natural sciences and the theoretical computer science. In the present review article, as the chemical and biological basis for natural computing or informatics, some plants, plant cells or plant-derived molecules involved in signaling are listed and classified as natural sequential machines (namely, the Mealy machines or Moore machines) or finite state automata. By defining the actions (states and transition functions) of these natural automata, the similarity between the computational data processing and plant decision-making processes became obvious. Finally, their putative roles as the parts for plant-based computing or robotic systems are discussed.

  15. Finding and defining the natural automata acting in living plants: Toward the synthetic biology for robotics and informatics in vivo

    PubMed Central

    Kawano, Tomonori; Bouteau, François; Mancuso, Stefano

    2012-01-01

    The automata theory is the mathematical study of abstract machines commonly studied in the theoretical computer science and highly interdisciplinary fields that combine the natural sciences and the theoretical computer science. In the present review article, as the chemical and biological basis for natural computing or informatics, some plants, plant cells or plant-derived molecules involved in signaling are listed and classified as natural sequential machines (namely, the Mealy machines or Moore machines) or finite state automata. By defining the actions (states and transition functions) of these natural automata, the similarity between the computational data processing and plant decision-making processes became obvious. Finally, their putative roles as the parts for plant-based computing or robotic systems are discussed. PMID:23336016
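
    A Mealy machine of the kind both records classify plant signaling under is simply a transition table whose output depends on both state and input. The sketch below is illustrative only, loosely styled after a stomatal guard cell; the states and signals are hypothetical, not taken from the paper.

    ```python
    # Sketch: a finite Mealy machine (output depends on state AND input).
    TRANSITIONS = {
        # (state, input)      : (next_state, output)
        ("open",   "drought") : ("closed", "emit ABA response"),
        ("open",   "light")   : ("open",   "keep transpiring"),
        ("closed", "light")   : ("open",   "resume gas exchange"),
        ("closed", "drought") : ("closed", "stay closed"),
    }

    def run_mealy(start: str, inputs) -> str:
        state = start
        for symbol in inputs:
            state, output = TRANSITIONS[(state, symbol)]
            print(f"in={symbol:8s} -> state={state:7s} out={output}")
        return state

    run_mealy("open", ["drought", "light", "light", "drought"])
    ```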

  16. Parallel Computational Fluid Dynamics: Current Status and Future Requirements

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; VanDalsem, William R.; Dagum, Leonardo; Kutler, Paul (Technical Monitor)

    1994-01-01

    One of the key objectives of the Applied Research Branch in the Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center is the accelerated introduction of highly parallel machines into a full operational environment. In this report we discuss the performance results obtained from the implementation of some computational fluid dynamics (CFD) applications on the Connection Machine CM-2 and the Intel iPSC/860. We summarize the experience gained so far with the parallel testbed machines at the NAS Applied Research Branch. Then we discuss the long-term computational requirements for accomplishing some of the grand challenge problems in computational aerosciences. We argue that only massively parallel machines will be able to meet these grand challenge requirements, and we outline the computer science and algorithm research challenges ahead.

  17. Setting new standards for customer advocacy.

    PubMed

    McDonald, L

    1993-01-01

    Dell Computer Corporation pioneered the direct marketing of personal computers in 1984 and became the first company in the PC industry to offer manufacturer-direct technical support. According to surveys of corporate buyers, the company provides the best after-sale service and support of any computer maker. Here's how Dell has institutionalized the delivery of customer satisfaction.

  18. In vitro molecular machine learning algorithm via symmetric internal loops of DNA.

    PubMed

    Lee, Ji-Hoon; Lee, Seung Hwan; Baek, Christina; Chun, Hyosun; Ryu, Je-Hwan; Kim, Jin-Woo; Deaton, Russell; Zhang, Byoung-Tak

    2017-08-01

    Programmable biomolecules, such as DNA strands, deoxyribozymes, and restriction enzymes, have been used to solve computational problems, construct large-scale logic circuits, and program simple molecular games. Although studies have shown the potential of molecular computing, the capability of computational learning with DNA molecules, i.e., molecular machine learning, has yet to be experimentally verified. Here, we present a novel in vitro molecular learning model in which symmetric internal loops of double-stranded DNA are exploited to measure the differences between training instances, thus enabling the molecules to learn from small errors. The model was evaluated on a data set of twenty dialogue sentences obtained from the television shows Friends and Prison Break. The wet DNA-computing experiments confirmed that the molecular learning machine was able to generalize the dialogue patterns of each show and successfully identify the show from which the sentences originated. The molecular machine learning model described here opens the way for solving machine learning problems in computer science and biology using in vitro molecular computing with the data encoded in DNA molecules. Copyright © 2017. Published by Elsevier B.V.

  19. Evaluating the Security of Machine Learning Algorithms

    DTIC Science & Technology

    2008-05-20

    Two far-reaching trends in computing have grown in significance in recent years. First, statistical machine learning has entered the mainstream as a...computing applications. The growing intersection of these trends compels us to investigate how well machine learning performs under adversarial conditions... machine learning has a structure that we can use to build secure learning systems. This thesis makes three high-level contributions. First, we develop a

  20. In vitro evaluation of marginal discrepancy of monolithic zirconia restorations fabricated with different CAD-CAM systems.

    PubMed

    Hamza, Tamer A; Sherif, Rana M

    2017-06-01

    Dental laboratories use different computer-aided design and computer-aided manufacturing (CAD-CAM) systems to fabricate fixed prostheses; however, limited evidence is available concerning which system provides the best marginal discrepancy. The purpose of this in vitro study was to evaluate the marginal fit of 5 different monolithic zirconia restorations milled with different CAD-CAM systems. Thirty monolithic zirconia crowns were fabricated on a custom-designed stainless steel die and were divided into 5 groups according to the type of monolithic zirconia crown and the CAD-CAM system used: group TZI, milled with an MCXL milling machine; group CZ, translucent zirconia milled with a motion milling machine; group ZZ, zirconia milled with a dental milling unit; group PZ, translucent zirconia milled with a zirconia milling unit; and group BZ, solid zirconia milled using an S1 VHF milling machine. The marginal fit was measured with a binocular microscope at an original magnification of ×100. The results were tabulated and statistically analyzed with 1-way ANOVA and a post hoc range test, and pairwise multiple comparisons were made using Bonferroni correction (α=.05). The type of CAD-CAM system used affected the marginal fit of the monolithic restoration. The highest mean (±SD) marginal discrepancy was recorded in group TZI at 39.3 ±2.3 μm, while the lowest mean marginal discrepancy was recorded in group IZ (22.8 ±8.9 μm). The Bonferroni post hoc test showed that group TZI was significantly different from all other groups tested (P<.05). Within the limitations of this in vitro study, all tested CAD-CAM systems produced monolithic zirconia restorations with clinically acceptable marginal discrepancies; however, the CAD-CAM system with the 5-axis milling unit produced the best marginal fit. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  1. 77 FR 70478 - Notice of Determinations Regarding Eligibility To Apply for Worker Adjustment Assistance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-26

    ... Machines (IBM), Large Panel Assembly and Test Division (LPAT), Manpower. 81,982 Leistritz Rural Hall, NC..., Inc., Outbound PA. Customer Service Team. 82,074 Komax Solar, Inc., York, PA. Komax Holdings AG...

  2. 75 FR 81563 - Notice of Petitions by Firms for Determination of Eligibility To Apply for Trade Adjustment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-28

    ..., CO 80027. mountings, fittings, and other machined metal components for aerospace applications. Foam.... custom packaging kits, gaskets, seals, sheets, blocks, etc., of all types of foam materials. Gulf Fish...

  3. Computer-Aided Design and Computer-Aided Manufacturing Hydroxyapatite/Epoxide Acrylate Maleic Compound Construction for Craniomaxillofacial Bone Defects.

    PubMed

    Zhang, Lei; Shen, Shunyao; Yu, Hongbo; Shen, Steve Guofang; Wang, Xudong

    2015-07-01

    The aim of this study was to investigate the use of computer-aided design and computer-aided manufacturing hydroxyapatite (HA)/epoxide acrylate maleic (EAM) compound construction artificial implants for craniomaxillofacial bone defects. Computed tomography, computer-aided design/computer-aided manufacturing and three-dimensional reconstruction, as well as rapid prototyping, were performed in 12 patients between 2008 and 2013. The customized HA/EAM compound artificial implants were manufactured through selective laser sintering using a rapid prototyping machine into the exact geometric shapes of the defect. The HA/EAM compound artificial implants were then implanted during surgical reconstruction. Color-coded superimpositions demonstrated the discrepancy between the virtual plan and achieved results using Geomagic Studio. As a result, the HA/EAM compound artificial bone implants were perfectly matched with the facial areas that needed reconstruction. The postoperative aesthetic and functional results were satisfactory. The color-coded superimpositions demonstrated good consistency between the virtual plan and achieved results; the three-dimensional maximum deviation was 2.12 ± 0.65 mm and the three-dimensional mean deviation was 0.27 ± 0.07 mm. No facial nerve weakness or pain was observed at the follow-up examinations. Only 1 implant had to be removed 2 months after the surgery owing to severe local infection. No other complication was noted during the follow-up period. In conclusion, computer-aided, individually fabricated HA/EAM compound construction artificial implants provide a good craniomaxillofacial surgical technique that yields improved aesthetic results and functional recovery after reconstruction.

  4. System, methods and apparatus for program optimization for multi-threaded processor architectures

    DOEpatents

    Bastoul, Cedric; Lethin, Richard A; Leung, Allen K; Meister, Benoit J; Szilagyi, Peter; Vasilache, Nicolas T; Wohlford, David E

    2015-01-06

    Methods, apparatus and computer software product for source code optimization are provided. In an exemplary embodiment, a first custom computing apparatus is used to optimize the execution of source code on a second computing apparatus. In this embodiment, the first custom computing apparatus contains a memory, a storage medium and at least one processor with at least one multi-stage execution unit. The second computing apparatus contains at least two multi-stage execution units that allow for parallel execution of tasks. The first custom computing apparatus optimizes the code for parallelism, locality of operations and contiguity of memory accesses on the second computing apparatus. This Abstract is provided for the sole purpose of complying with the Abstract requirement rules. This Abstract is submitted with the explicit understanding that it will not be used to interpret or to limit the scope or the meaning of the claims.
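
    To make "locality of operations and contiguity of memory accesses" concrete, here is a minimal Python/NumPy sketch of one classic transformation such an optimizer automates; it illustrates the general idea only, not the patented method.

      # Traverse a row-major array along contiguous rows rather than strided
      # columns: the effect of a "loop interchange" locality optimization.
      import numpy as np

      def sum_by_columns(a):   # strided access: poor locality in row-major layout
          return sum(a[:, j].sum() for j in range(a.shape[1]))

      def sum_by_rows(a):      # contiguous access after loop interchange
          return sum(a[i, :].sum() for i in range(a.shape[0]))

      # Both return the same total, but on a large array such as
      # np.ones((4000, 4000)) the row-wise version is typically measurably
      # faster, because each inner step touches adjacent memory.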

  5. Man Machine Systems in Education.

    ERIC Educational Resources Information Center

    Sall, Malkit S.

    This review of the research literature on the interaction between humans and computers discusses how man machine systems can be utilized effectively in the learning-teaching process, especially in secondary education. Beginning with a definition of man machine systems and comments on the poor quality of much of the computer-based learning material…

  6. Procedure and computer program to calculate machine contribution to sawmill recovery

    Treesearch

    Philip H. Steele; Hiram Hallock; Stanford Lunstrum

    1981-01-01

    The importance of considering individual machine contribution to total mill efficiency is discussed. A method for accurately calculating machine contribution is introduced, and an example is given using this method. A FORTRAN computer program to make the necessary complex calculations automatically is also presented with user instructions.

  7. Learning Machine, Vietnamese Based Human-Computer Interface.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    The sixth session of IT@EDU98 consisted of seven papers on the topic of the learning machine--Vietnamese based human-computer interface, and was chaired by Phan Viet Hoang (Informatics College, Singapore). "Knowledge Based Approach for English Vietnamese Machine Translation" (Hoang Kiem, Dinh Dien) presents the knowledge base approach,…

  8. Parallel Algorithms for Computer Vision

    DTIC Science & Technology

    1990-04-01

    NA86-1, Thinking Machines Corporation, Cambridge, MA, December 1986. [43] J. Little, G. Blelloch, and T. Cass. How to program the Connection Machine for computer vision. In Proc. Workshop on Comp. Architecture for Pattern Analysis and Machine Intell., 1987. [91, 92] J. Little, G. Blelloch, and T. Cass. How to program the Connection Machine for computer vision. In Proceedings of SPIE Conf. on Advances in Intelligent Robotics Systems, Bellingham, VA, 1987. SPIE.

  9. Polymorphous computing fabric

    DOEpatents

    Wolinski, Christophe Czeslaw [Los Alamos, NM; Gokhale, Maya B [Los Alamos, NM; McCabe, Kevin Peter [Los Alamos, NM

    2011-01-18

    Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per-application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable customized synthesis of fabric instances, enhancing performance across a variety of applications. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

  10. Enabling Customization through Web Development: An Iterative Study of the Dell Computer Corporation Website

    ERIC Educational Resources Information Center

    Liu, Chang; Mackie, Brian G.

    2008-01-01

    Throughout the last decade, companies have increased their investment in electronic commerce (EC) by developing and implementing Web-based applications on the Internet. This paper describes a class project to develop a customized computer website which is similar to Dell Computer Corporation's (Dell) website. The objective of this project is to…

  11. Machine learning and computer vision approaches for phenotypic profiling.

    PubMed

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.
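
    As a minimal sketch of that two-stage pipeline (vision for segmentation and feature extraction, learning for classification), the following uses standard scikit-image and scikit-learn calls; the images, features, and phenotype labels are hypothetical stand-ins.

      # Segment cells, extract per-cell shape features, classify phenotypes.
      import numpy as np
      from skimage.filters import threshold_otsu
      from skimage.measure import label, regionprops
      from sklearn.ensemble import RandomForestClassifier

      def cell_features(image):
          """Otsu-threshold segmentation followed by per-cell shape features."""
          mask = image > threshold_otsu(image)
          return np.array([[r.area, r.eccentricity, r.solidity]
                           for r in regionprops(label(mask))])

      # Hypothetical training data: one feature row and label per cell.
      # X = np.vstack([cell_features(img) for img in images])
      # clf = RandomForestClassifier().fit(X, phenotype_labels)
      # predictions = clf.predict(cell_features(new_image))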

  12. Machine learning and computer vision approaches for phenotypic profiling

    PubMed Central

    Morris, Quaid

    2017-01-01

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. PMID:27940887

  13. Emotion Analysis of Telephone Complaints from Customer Based on Affective Computing.

    PubMed

    Gong, Shuangping; Dai, Yonghui; Ji, Jun; Wang, Jinzhao; Sun, Hai

    2015-01-01

    Customer complaints have become important feedback for modern enterprises seeking to improve product and service quality as well as customer loyalty. As one of the most commonly used channels for customer complaints, telephone communication carries rich emotional information in speech, which provides valuable resources for perceiving the customer's satisfaction and studying complaint-handling skills. This paper studies the characteristics of telephone complaint speech and proposes an analysis method based on affective computing technology, which can recognize the dynamic changes of customer emotions from the conversations between the service staff and the customer. The recognition process includes speaker recognition, emotional feature parameter extraction, and dynamic emotion recognition. Experimental results show that this method is effective and achieves high recognition rates for happy and angry states. It has been successfully applied to operation quality and service administration in a telecom and Internet service company.
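
    A minimal sketch of the feature-extraction stage of such a pipeline is shown below, using librosa MFCC summaries and a scikit-learn classifier; the paper's actual feature set and models are not specified here, and the file names and labels are hypothetical.

      # Summarize each call segment as MFCC statistics, then classify emotion.
      import numpy as np
      import librosa
      from sklearn.svm import SVC

      def emotion_features(path):
          """Mean/std of MFCCs over a call segment (a common baseline feature)."""
          y, sr = librosa.load(path, sr=None)
          mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
          return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

      # X = np.vstack([emotion_features(f) for f in training_wavs])
      # clf = SVC().fit(X, emotion_labels)   # e.g. "happy", "angry"
      # clf.predict([emotion_features("call_segment.wav")])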

  14. Emotion Analysis of Telephone Complaints from Customer Based on Affective Computing

    PubMed Central

    Gong, Shuangping; Ji, Jun; Wang, Jinzhao; Sun, Hai

    2015-01-01

    Customer complaints have become important feedback for modern enterprises seeking to improve product and service quality as well as customer loyalty. As one of the most commonly used channels for customer complaints, telephone communication carries rich emotional information in speech, which provides valuable resources for perceiving the customer's satisfaction and studying complaint-handling skills. This paper studies the characteristics of telephone complaint speech and proposes an analysis method based on affective computing technology, which can recognize the dynamic changes of customer emotions from the conversations between the service staff and the customer. The recognition process includes speaker recognition, emotional feature parameter extraction, and dynamic emotion recognition. Experimental results show that this method is effective and achieves high recognition rates for happy and angry states. It has been successfully applied to operation quality and service administration in a telecom and Internet service company. PMID:26633967

  15. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1992-01-01

    The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  16. An element search ant colony technique for solving virtual machine placement problem

    NASA Astrophysics Data System (ADS)

    Srija, J.; Rani John, Rose; Kanaga, Grace Mary, Dr.

    2017-09-01

    Data centres in the cloud environment play a key role in providing infrastructure for ubiquitous, pervasive, and mobile computing, which rely on the available resources to provide services. Maintaining high resource utilization without wasting power has therefore become a challenging task for researchers. In this paper we propose a direct guidance ant colony system for effectively mapping virtual machines to physical machines with maximal resource utilization and minimal power consumption. The proposed algorithm is compared with an existing ant colony approach to the virtual machine placement problem and is shown to produce better results than the existing technique.
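
    A heavily simplified ant-colony sketch for virtual machine placement appears below; it illustrates the general pheromone-guided construction idea only, not the paper's direct-guidance variant, and the VM demands, host capacities, and parameters are hypothetical.

      # Pheromone-guided construction of VM-to-host placements (toy scale).
      import random

      vms = [2, 3, 1, 4, 2]            # CPU demand per VM (hypothetical)
      hosts = [6, 6, 6]                # CPU capacity per host (hypothetical)
      tau = {(v, h): 1.0 for v in range(len(vms)) for h in range(len(hosts))}

      def build_placement():
          load, placement = [0] * len(hosts), {}
          for v, demand in enumerate(vms):
              feasible = [h for h, cap in enumerate(hosts) if load[h] + demand <= cap]
              h = random.choices(feasible, [tau[(v, f)] for f in feasible])[0]
              load[h] += demand
              placement[v] = h
          return placement, load

      best = None
      for _ in range(100):                        # ant iterations
          placement, load = build_placement()
          active = sum(1 for l in load if l > 0)  # fewer active hosts ~ less power
          if best is None or active < best[0]:
              best = (active, placement)
          for v, h in placement.items():          # reinforce good placements
              tau[(v, h)] += 1.0 / active         # (evaporation omitted for brevity)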

  17. Efficient and Scalable Cross-Matching of (Very) Large Catalogs

    NASA Astrophysics Data System (ADS)

    Pineau, F.-X.; Boch, T.; Derriere, S.

    2011-07-01

    Whether it be for building multi-wavelength datasets from independent surveys, studying changes in objects' luminosities, or detecting moving objects (stellar proper motions, asteroids), cross-catalog matching is a technique widely used in astronomy. The need for efficient, reliable and scalable cross-catalog matching is becoming even more pressing with forthcoming projects which will produce huge catalogs in which astronomers will dig for rare objects, perform statistical analysis and classification, or carry out real-time transient detection. We have developed a formalism and the corresponding technical framework to address the challenge of fast cross-catalog matching. Our formalism supports more than simple nearest-neighbor search, and handles elliptical positional errors. Scalability is improved by partitioning the sky using the HEALPix scheme and processing each sky cell independently. The use of multi-threaded two-dimensional kd-trees adapted to managing equatorial coordinates enables efficient neighbor search. The whole process can run on a single computer, but could also use clusters of machines to cross-match future very large surveys such as GAIA or LSST in reasonable times. We already achieve performance such that 2MASS (˜470M sources) and SDSS DR7 (˜350M sources) can be matched on a single machine in less than 10 minutes. We aim at providing astronomers with a catalog cross-matching service, available on-line and leveraging the catalogs present in the VizieR database. This service will allow users both to access pre-computed cross-matches across some very large catalogs and to run customized cross-matching operations. It will also support VO protocols for synchronous or asynchronous queries.
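
    The neighbor search at the heart of such a framework can be sketched with a kd-tree over unit vectors on the sphere, as below; this simplified Python version omits the authors' HEALPix partitioning, multi-threading, and elliptical error handling, and the catalog coordinates are hypothetical.

      # Positional cross-match: kd-tree query over 3-D unit vectors, where
      # chord distance is a monotonic proxy for angular separation.
      import numpy as np
      from scipy.spatial import cKDTree

      def radec_to_xyz(ra_deg, dec_deg):
          ra, dec = np.radians(ra_deg), np.radians(dec_deg)
          return np.column_stack([np.cos(dec) * np.cos(ra),
                                  np.cos(dec) * np.sin(ra),
                                  np.sin(dec)])

      def crossmatch(ra1, dec1, ra2, dec2, radius_arcsec=1.0):
          tree = cKDTree(radec_to_xyz(ra2, dec2))
          # Chord length corresponding to the angular match radius:
          r = 2.0 * np.sin(np.radians(radius_arcsec / 3600.0) / 2.0)
          dists, idx = tree.query(radec_to_xyz(ra1, dec1), distance_upper_bound=r)
          # Unmatched points are reported with index == len(ra2); drop them.
          return [(i, j) for i, j in enumerate(idx) if j < len(ra2)]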

  18. Towards machine ecoregionalization of Earth's landmass using pattern segmentation method

    NASA Astrophysics Data System (ADS)

    Nowosad, Jakub; Stepinski, Tomasz F.

    2018-07-01

    We present and evaluate a quantitative method for delineating ecophysiographic regions throughout the entire terrestrial landmass. The method uses a new pattern-based segmentation technique which attempts to emulate, in computer code, the qualitative, weight-of-evidence approach to delineating ecoregions. An ecophysiographic region is characterized by homogeneous physiography defined by the cohesiveness of patterns of four variables: land cover, soils, landforms, and climatic patterns. Homogeneous physiography is a necessary but not sufficient condition for a region to be an ecoregion, thus machine delineation of ecophysiographic regions is the first, important step toward global ecoregionalization. In this paper, we focus on a first-order approximation of the proposed method: delineation on the basis of land-cover patterns alone. We justify this approximation by the existence of significant spatial associations between various physiographic variables. The resulting ecophysiographic regionalization (ECOR) is shown to be more physiographically homogeneous than existing global ecoregionalizations (the Terrestrial Ecoregions of the World (TEW) and Bailey's Ecoregions of the Continents (BEC)). The presented quantitative method has the advantage of being transparent and objective. It can be verified, easily updated, modified, and customized for specific applications. Each region in ECOR contains detailed, SQL-searchable information about the physiographic patterns within it. It also has a computer-generated label. To give a sense of how ECOR compares to TEW and, in the U.S., to EPA Level III ecoregions, we contrast these different delineations using two specific sites as examples. We conclude that ECOR yields a regionalization somewhat similar to the EPA Level III ecoregions, but for the entire world and by automatic means.

  19. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% accuracy. Emulation is only 25 to 200 times slower than real time.

  20. Full 3-D OCT-based pseudophakic custom computer eye model

    PubMed Central

    Sun, M.; Pérez-Merino, P.; Martinez-Enriquez, E.; Velasco-Ocana, M.; Marcos, S.

    2016-01-01

    We compared measured wave aberrations in pseudophakic eyes implanted with aspheric intraocular lenses (IOLs) with simulated aberrations from numerical ray tracing on customized computer eye models, built using quantitative 3-D OCT-based patient-specific ocular geometry. Experimental and simulated aberrations show high correlation (R = 0.93; p<0.0001) and similarity (RMS discrepancy for high-order aberrations within 23.58%). This study shows that full OCT-based pseudophakic custom computer eye models make it possible to understand the relative contributions of optical, geometrical, and surgically-related factors to image quality, and are an excellent tool for characterizing and improving cataract surgery. PMID:27231608

  1. Software platform virtualization in chemistry research and university teaching

    PubMed Central

    2009-01-01

    Background Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers, it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single host operating system to execute multiple guest operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Results Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages, we confirmed that the computational speed penalty for using virtual machines is low, around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Conclusion Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance, the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs, especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics, as well as the missing cheminformatics education at universities worldwide. PMID:20150997
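
    The reported 5% to 10% penalty comes from running the same benchmarks natively and inside a virtual machine; a generic timing harness of the kind involved is sketched below, with an arbitrary numerical kernel standing in for the chemistry packages actually benchmarked.

      # Run this script on the host and inside the VM; compare best times.
      import time
      import numpy as np

      def benchmark(n=2000, repeats=5):
          a = np.random.rand(n, n)
          best = float("inf")
          for _ in range(repeats):
              t0 = time.perf_counter()
              np.linalg.eigvalsh(a @ a.T)       # CPU-bound stand-in workload
              best = min(best, time.perf_counter() - t0)
          return best

      # print(f"best wall-clock time: {benchmark():.2f} s")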

  2. Software platform virtualization in chemistry research and university teaching.

    PubMed

    Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver

    2009-11-16

    Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers, it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single host operating system to execute multiple guest operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages, we confirmed that the computational speed penalty for using virtual machines is low, around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance, the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs, especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics, as well as the missing cheminformatics education at universities worldwide.

  3. Computing at the speed limit (supercomputers)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernhard, R.

    1982-07-01

    The author discusses how unheralded efforts in the United States, mainly in universities, have removed major stumbling blocks to building cost-effective superfast computers for scientific and engineering applications within five years. These computers would have sustained speeds of billions of floating-point operations per second (flops), whereas with the fastest machines today the top sustained speed is only 25 million flops, with bursts to 160 megaflops. Cost-effective superfast machines can be built because of advances in very large-scale integration and the special software needed to program the new machines. VLSI greatly reduces the cost per unit of computing power. The development of such computers would come at an opportune time. Although the US leads the world in large-scale computer technology, its supremacy is now threatened, not surprisingly, by the Japanese. Publicized reports indicate that the Japanese government is funding a cooperative effort by commercial computer manufacturers to develop superfast computers-about 1000 times faster than modern supercomputers. The US computer industry, by contrast, has balked at attempting to boost computer power so sharply because of the uncertain market for the machines and the failure of similar projects in the past to show significant results.

  4. Office Machine and Computer Occupations. Reprinted from the Occupational Outlook Handbook, 1978-79 Edition.

    ERIC Educational Resources Information Center

    Bureau of Labor Statistics (DOL), Washington, DC.

    Focusing on office machine and computer occupations, this document is one in a series of forty-one reprints from the Occupational Outlook Handbook providing current information and employment projections for individual occupations and industries through 1985. The specific occupations covered in this document include business machine repairers,…

  5. Accelerating atomistic calculations of quantum energy eigenstates on graphic cards

    NASA Astrophysics Data System (ADS)

    Rodrigues, Walter; Pecchia, A.; Lopez, M.; Auf der Maur, M.; Di Carlo, A.

    2014-10-01

    Electronic properties of nanoscale materials require the calculation of eigenvalues and eigenvectors of large matrices. This bottleneck can be overcome by parallel computing techniques or the introduction of faster algorithms. In this paper we report a custom implementation of the Lanczos algorithm with simple restart, optimized for graphical processing units (GPUs). The whole algorithm has been developed using CUDA and runs entirely on the GPU, with a specialized implementation that saves memory and minimizes host-to-device data transfers. Furthermore, parallel distribution over several GPUs has been attained using the standard message passing interface (MPI). Benchmark calculations performed on a GaN/AlGaN wurtzite quantum dot with up to 600,000 atoms are presented. The empirical tight-binding (ETB) model with an sp3d5s∗+spin-orbit parametrization has been used to build the system Hamiltonian (H).
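
    For readers unfamiliar with the method, a plain NumPy sketch of the basic Lanczos iteration follows (CPU only, no restart, no GPU offload, and a dense test matrix standing in for a tight-binding Hamiltonian).

      # k-step Lanczos: reduce a symmetric H to a small tridiagonal T whose
      # eigenvalues (Ritz values) approximate H's extremal eigenvalues.
      import numpy as np

      def lanczos(H, k, rng=np.random.default_rng(0)):
          n = H.shape[0]
          q = rng.standard_normal(n)
          q /= np.linalg.norm(q)
          Q, alpha, beta = [q], [], []
          for j in range(k):
              w = H @ Q[j]
              if j > 0:
                  w -= beta[j - 1] * Q[j - 1]
              a = Q[j] @ w
              w -= a * Q[j]
              alpha.append(a)
              b = np.linalg.norm(w)
              if b < 1e-12:          # invariant subspace found; stop early
                  break
              beta.append(b)
              Q.append(w / b)
          m = len(alpha)
          T = (np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1))
          return np.linalg.eigvalsh(T)

      # A = np.random.rand(500, 500); H = (A + A.T) / 2
      # print(lanczos(H, 50)[:3], np.linalg.eigvalsh(H)[:3])  # extremes converge first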

  6. Experiences Building Globus Genomics: A Next-Generation Sequencing Analysis Service using Galaxy, Globus, and Amazon Web Services

    PubMed Central

    Madduri, Ravi K.; Sulakhe, Dinanath; Lacinski, Lukasz; Liu, Bo; Rodriguez, Alex; Chard, Kyle; Dave, Utpal J.; Foster, Ian T.

    2014-01-01

    We describe Globus Genomics, a system that we have developed for rapid analysis of large quantities of next-generation sequencing (NGS) genomic data. This system achieves a high degree of end-to-end automation that encompasses every stage of data analysis including initial data retrieval from remote sequencing centers or storage (via the Globus file transfer system); specification, configuration, and reuse of multi-step processing pipelines (via the Galaxy workflow system); creation of custom Amazon Machine Images and on-demand resource acquisition via a specialized elastic provisioner (on Amazon EC2); and efficient scheduling of these pipelines over many processors (via the HTCondor scheduler). The system allows biomedical researchers to perform rapid analysis of large NGS datasets in a fully automated manner, without software installation or a need for any local computing infrastructure. We report performance and cost results for some representative workloads. PMID:25342933

  7. Experiences Building Globus Genomics: A Next-Generation Sequencing Analysis Service using Galaxy, Globus, and Amazon Web Services.

    PubMed

    Madduri, Ravi K; Sulakhe, Dinanath; Lacinski, Lukasz; Liu, Bo; Rodriguez, Alex; Chard, Kyle; Dave, Utpal J; Foster, Ian T

    2014-09-10

    We describe Globus Genomics, a system that we have developed for rapid analysis of large quantities of next-generation sequencing (NGS) genomic data. This system achieves a high degree of end-to-end automation that encompasses every stage of data analysis including initial data retrieval from remote sequencing centers or storage (via the Globus file transfer system); specification, configuration, and reuse of multi-step processing pipelines (via the Galaxy workflow system); creation of custom Amazon Machine Images and on-demand resource acquisition via a specialized elastic provisioner (on Amazon EC2); and efficient scheduling of these pipelines over many processors (via the HTCondor scheduler). The system allows biomedical researchers to perform rapid analysis of large NGS datasets in a fully automated manner, without software installation or a need for any local computing infrastructure. We report performance and cost results for some representative workloads.

  8. Bioreactor Cultivation of Anatomically Shaped Human Bone Grafts

    PubMed Central

    Temple, Joshua P.; Yeager, Keith; Bhumiratana, Sarindr; Vunjak-Novakovic, Gordana; Grayson, Warren L.

    2015-01-01

    In this chapter, we describe a method for engineering bone grafts in vitro with the specific geometry of the temporomandibular joint (TMJ) condyle. The anatomical geometry of the bone grafts was segmented from computed tomography (CT) scans, converted to G-code, and used to machine decellularized trabecular bone scaffolds into the identical shape of the condyle. These scaffolds were seeded with human bone marrow-derived mesenchymal stem cells (MSCs) using spinner flasks and cultivated for up to 5 weeks in vitro using a custom-designed perfusion bioreactor system. The flow patterns through the complex geometry were modeled using the FloWorks module of SolidWorks to optimize bioreactor design. The perfused scaffolds exhibited significantly higher cellular content, better matrix production, and increased bone mineral deposition relative to non-perfused (static) controls after 5 weeks of in vitro cultivation. This technology is broadly applicable for creating patient-specific bone grafts of varying shapes and sizes. PMID:24014312

  9. Wearable ear EEG for brain interfacing

    NASA Astrophysics Data System (ADS)

    Schroeder, Eric D.; Walker, Nicholas; Danko, Amanda S.

    2017-02-01

    Brain-computer interfaces (BCIs) measuring electrical activity via electroencephalogram (EEG) have evolved beyond clinical applications to become wireless consumer products. Typically marketed for meditation and neurotherapy, these devices are limited in scope and currently too obtrusive to be a ubiquitous wearable. Stemming from recent advancements made in hearing aid technology, wearables have been shrinking to the point that the necessary sensors, circuitry, and batteries can be fit into a small in-ear wearable device. In this work, an ear-EEG device is created with a novel system for artifact removal and signal interpretation. The small, compact, cost-effective, and discreet device is demonstrated against existing consumer electronics in this space for its signal quality, comfort, and usability. A custom mobile application is developed to process raw EEG from each device and display interpreted data to the user. Artifact removal and signal classification is accomplished via a combination of support matrix machines (SMMs) and soft thresholding of relevant statistical properties.
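
    The soft-thresholding step mentioned for artifact removal has a simple closed form, sketched below; the threshold choice is illustrative, and the support-matrix-machine classifier is not sketched here.

      # Soft thresholding: shrink coefficients toward zero, zeroing small
      # (noise-dominated) ones.
      import numpy as np

      def soft_threshold(x, t):
          """Elementwise sign(x) * max(|x| - t, 0)."""
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      # e.g. denoised = soft_threshold(coeffs, t=2.5 * np.median(np.abs(coeffs)))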

  10. Face recognition with the Karhunen-Loeve transform

    NASA Astrophysics Data System (ADS)

    Suarez, Pedro F.

    1991-12-01

    The major goal of this research was to investigate machine recognition of faces. The approach taken to achieve this goal was to investigate the use of the Karhunen-Loève Transform (KLT) by implementing flexible and practical code. The KLT utilizes the eigenvectors of the covariance matrix as a basis set. Faces were projected onto the eigenvectors, called eigenfaces, and the resulting projection coefficients were used as features. Face recognition accuracies for the KLT coefficients were superior to those of Fourier-based techniques. Additionally, this thesis demonstrated the image compression and reconstruction capabilities of the KLT. The thesis also developed the use of the KLT as a facial feature detector. The ability to differentiate between facial features provides a computer communications interface for non-vocal people with cerebral palsy. Lastly, the thesis developed a KLT-based axis system for laser scanner data of human heads. The scanner data axis system provides the anthropometric community with a more precise method of fitting custom helmets.
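
    The eigenface pipeline described here is compact enough to sketch directly; the following NumPy version (with hypothetical image arrays) projects faces onto covariance eigenvectors and classifies by nearest neighbor in coefficient space.

      # Eigenfaces: KLT/PCA projection coefficients as recognition features.
      import numpy as np

      def fit_eigenfaces(X, k):
          """X: (n_faces, n_pixels) flattened images -> mean face, top-k eigenfaces."""
          mean = X.mean(axis=0)
          # SVD of centered data yields covariance eigenvectors without forming
          # the covariance matrix explicitly.
          _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
          return mean, Vt[:k]

      def project(x, mean, eigenfaces):
          return eigenfaces @ (x - mean)        # KLT coefficients

      # mean, ef = fit_eigenfaces(train_images, k=20)
      # coeffs = np.array([project(x, mean, ef) for x in train_images])
      # nearest = np.argmin(np.linalg.norm(coeffs - project(test_image, mean, ef), axis=1))
      # predicted_label = train_labels[nearest]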

  11. 19 CFR 143.5 - System performance requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 19 Customs Duties 2 2012-04-01 2012-04-01 false System performance requirements. 143.5 Section 143.5 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF... must demonstrate that his system can interface directly with the Customs computer and ensure accurate...

  12. 19 CFR 143.5 - System performance requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 19 Customs Duties 2 2011-04-01 2011-04-01 false System performance requirements. 143.5 Section 143.5 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF... must demonstrate that his system can interface directly with the Customs computer and ensure accurate...

  13. 19 CFR 143.5 - System performance requirements.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 19 Customs Duties 2 2014-04-01 2014-04-01 false System performance requirements. 143.5 Section 143.5 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF... must demonstrate that his system can interface directly with the Customs computer and ensure accurate...

  14. 19 CFR 143.5 - System performance requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 2 2010-04-01 2010-04-01 false System performance requirements. 143.5 Section 143.5 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF... must demonstrate that his system can interface directly with the Customs computer and ensure accurate...

  15. 19 CFR 143.5 - System performance requirements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 19 Customs Duties 2 2013-04-01 2013-04-01 false System performance requirements. 143.5 Section 143.5 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF... must demonstrate that his system can interface directly with the Customs computer and ensure accurate...

  16. Machine vision for real time orbital operations

    NASA Technical Reports Server (NTRS)

    Vinz, Frank L.

    1988-01-01

    Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputting visual data to a computer so that image processing may be achieved for real-time control of these orbital operations. This research has produced a technique which reduces computer memory requirements and greatly increases typical computational speed, such that it has the potential for development into a real-time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).

  17. Fabrication of five-level ultraplanar micromirror arrays by flip-chip assembly

    NASA Astrophysics Data System (ADS)

    Michalicek, M. Adrian; Bright, Victor M.

    2001-10-01

    This paper reports a detailed study of the fabrication of various piston, torsion, and cantilever style micromirror arrays using a novel, simple, and inexpensive flip-chip assembly technique. Several rectangular and polar arrays were commercially prefabricated in the MUMPs process and then flip-chip bonded to form advanced micromirror arrays where adverse effects typically associated with surface micromachining were removed. These arrays were bonded by directly fusing the MUMPs gold layers with no complex preprocessing. The modules were assembled using a computer-controlled, custom-built flip-chip bonding machine. Topographically opposed bond pads were designed to correct for slight misalignment errors during bonding and typically result in less than 2 micrometers of lateral alignment error. Although flip-chip micromirror performance is briefly discussed, the means used to create these arrays is the focus of the paper. A detailed study of flip-chip process yield is presented which describes the primary failure mechanisms for flip-chip bonding. Studies of alignment tolerance, bonding force, stress concentration, module planarity, bonding machine calibration techniques, prefabrication errors, and release procedures are presented in relation to specific observations in process yield. Ultimately, the standard thermo-compression flip-chip assembly process remains a viable technique to develop highly complex prototypes of advanced micromirror arrays.

  18. Time-related patient data retrieval for the case studies from the pharmacogenomics research network

    PubMed Central

    Zhu, Qian; Tao, Cui; Ding, Ying; Chute, Christopher G.

    2012-01-01

    There are many question-based data elements in the pharmacogenomics research network (PGRN) studies, and many of them contain temporal information. Semantically representing these elements so that they are machine-processable is a challenging problem for the following reasons: (1) the designers of these studies usually do not have knowledge of computer modeling and query languages, so the original data elements are usually represented in spreadsheets in human languages; and (2) the time aspects in these data elements can be too complex to be represented faithfully in a machine-understandable way. In this paper, we introduce our efforts on representing these data elements using semantic web technologies. We have developed an ontology, CNTRO, for representing clinical events and their temporal relations in the web ontology language (OWL). Here we use CNTRO to represent the time aspects in the data elements. We have evaluated 720 time-related data elements from PGRN studies. We adapted and extended the knowledge representation requirements for EliXR-TIME to categorize our data elements. A CNTRO-based SPARQL query builder has been developed to let users customize their own SPARQL queries for each knowledge representation requirement. The SPARQL query builder has been evaluated with a simulated EHR triple store to ensure its functionality. PMID:23076712
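
    To illustrate what such customized queries look like, here is a small rdflib example in Python; the tiny graph and the property names in the cntro: namespace are hypothetical stand-ins, not the actual CNTRO ontology terms.

      # Query a temporal relation over a toy event graph with SPARQL.
      from rdflib import Graph

      data = """
      @prefix cntro: <http://example.org/cntro#> .
      @prefix ex:    <http://example.org/patient#> .
      ex:dose1 cntro:hasEventTime "2012-03-01T09:00:00"^^<http://www.w3.org/2001/XMLSchema#dateTime> .
      ex:dose2 cntro:hasEventTime "2012-03-02T09:00:00"^^<http://www.w3.org/2001/XMLSchema#dateTime> .
      """

      g = Graph()
      g.parse(data=data, format="turtle")

      query = """
      PREFIX cntro: <http://example.org/cntro#>
      SELECT ?event ?time
      WHERE { ?event cntro:hasEventTime ?time . }
      ORDER BY ?time
      """
      for event, time in g.query(query):
          print(event, time)    # events listed in temporal order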

  19. Time-related patient data retrieval for the case studies from the pharmacogenomics research network.

    PubMed

    Zhu, Qian; Tao, Cui; Ding, Ying; Chute, Christopher G

    2012-11-01

    There are many question-based data elements in the pharmacogenomics research network (PGRN) studies, and many of them contain temporal information. Semantically representing these elements so that they are machine-processable is a challenging problem for the following reasons: (1) the designers of these studies usually do not have knowledge of computer modeling and query languages, so the original data elements are usually represented in spreadsheets in human languages; and (2) the time aspects in these data elements can be too complex to be represented faithfully in a machine-understandable way. In this paper, we introduce our efforts on representing these data elements using semantic web technologies. We have developed an ontology, CNTRO, for representing clinical events and their temporal relations in the web ontology language (OWL). Here we use CNTRO to represent the time aspects in the data elements. We have evaluated 720 time-related data elements from PGRN studies. We adapted and extended the knowledge representation requirements for EliXR-TIME to categorize our data elements. A CNTRO-based SPARQL query builder has been developed to let users customize their own SPARQL queries for each knowledge representation requirement. The SPARQL query builder has been evaluated with a simulated EHR triple store to ensure its functionality.

  20. Suitability of digital camcorders for virtual reality image data capture

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola; Maas, Hans-Gerd

    1998-12-01

    Today's consumer-market digital camcorders offer features which make them appear to be quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine vision type CCD camera, and discusses the suitability of these three cameras for virtual reality applications. Besides the discussion of the technical features of the cameras, this includes a detailed accuracy test in order to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the missing possibility to synchronize multiple devices, limiting their suitability for 3-D motion data capture. Moreover, the standard video format contains interlacing, which is also undesirable for all applications dealing with moving objects or moving cameras. A further disadvantage is computer interfaces whose functionality is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and will make potential users turn back to machine-vision-like equipment), this functionality could probably be included by the manufacturers at almost zero cost.

  1. What is consciousness, and could machines have it?

    PubMed

    Dehaene, Stanislas; Lau, Hakwan; Kouider, Sid

    2017-10-27

    The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word "consciousness" conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures. Copyright © 2017, American Association for the Advancement of Science.

  2. The HEPiX Virtualisation Working Group: Towards a Grid of Clouds

    NASA Astrophysics Data System (ADS)

    Cass, Tony

    2012-12-01

    The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarthy, J.M.

    The theory and methodology of design of general-purpose machines that may be controlled by a computer to perform all the tasks of a set of special-purpose machines is the focus of modern machine design research. These seventeen contributions chronicle recent activity in the analysis and design of robot manipulators that are the prototype of these general-purpose machines. They focus particularly on kinematics, the geometry of rigid-body motion, which is an integral part of machine design theory. The challenges to kinematics researchers presented by general-purpose machines such as the manipulator are leading to new perspectives in the design and control of simpler machines with two, three, and more degrees of freedom. Researchers are rethinking the uses of gear trains, planar mechanisms, adjustable mechanisms, and computer controlled actuators in the design of modern machines.

  4. Solving the Cauchy-Riemann equations on parallel computers

    NASA Technical Reports Server (NTRS)

    Fatoohi, Raad A.; Grosch, Chester E.

    1987-01-01

    Discussed is the implementation of a single algorithm on three parallel-vector computers. The algorithm is a relaxation scheme for the solution of the Cauchy-Riemann equations, a set of coupled first-order partial differential equations. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, an SIMD machine with 16K bit-serial processors; the FLEX/32, an MIMD machine with 20 processors; and the CRAY/2, an MIMD machine with four vector processors. The machine architectures are briefly described. The implementation of the algorithm is discussed in relation to these architectures, and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Conclusions are presented.
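
    A relaxation scheme for this system can be sketched as follows; since solutions u, v of the Cauchy-Riemann equations (u_x = v_y, u_y = -v_x) are each harmonic, the Python sketch below relaxes both fields toward given boundary values with Jacobi sweeps. This illustrates the flavor of the computation, not the paper's specific scheme.

      # Point-Jacobi relaxation of u and v toward harmonic interior values.
      import numpy as np

      def jacobi_sweep(f):
          """One Jacobi sweep for Laplace's equation (interior points only)."""
          g = f.copy()
          g[1:-1, 1:-1] = 0.25 * (f[:-2, 1:-1] + f[2:, 1:-1] +
                                  f[1:-1, :-2] + f[1:-1, 2:])
          return g

      n = 64
      u, v = np.zeros((n, n)), np.zeros((n, n))
      # Hypothetical boundary data consistent with f(z) = z^2, whose real and
      # imaginary parts (x^2 - y^2, 2xy) satisfy the Cauchy-Riemann equations.
      x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
      for f, bnd in ((u, x**2 - y**2), (v, 2 * x * y)):
          f[0, :], f[-1, :], f[:, 0], f[:, -1] = bnd[0, :], bnd[-1, :], bnd[:, 0], bnd[:, -1]

      for _ in range(2000):                  # relax both fields
          u, v = jacobi_sweep(u), jacobi_sweep(v)
      # The interior converges toward Re(z^2), Im(z^2); on a parallel machine
      # each sweep is a data-parallel stencil update.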

  5. Machine Learning, deep learning and optimization in computer vision

    NASA Astrophysics Data System (ADS)

    Canu, Stéphane

    2017-03-01

    As quoted in the Large Scale Computer Vision Systems NIPS workshop, computer vision is a mature field with a long tradition of research, but recent advances in machine learning, deep learning, representation learning and optimization have provided models with new capabilities to better understand visual content. The presentation will go through these new developments in machine learning covering basic motivations, ideas, models and optimization in deep learning for computer vision, identifying challenges and opportunities. It will focus on issues related with large scale learning that is: high dimensional features, large variety of visual classes, and large number of examples.

  6. The limits of scale.

    PubMed

    Halaburda, Hanna; Oberholzer-Gee, Felix

    2014-04-01

    The value of many products and services rises or falls with the number of customers using them; the fewer fax machines in use, the less important it is to have one. These network effects influence consumer decisions and affect companies' ability to compete. Strategists have developed some well-known rules for navigating business environments with network effects. "Move first" is one, and "get big fast" is another. In a study of dozens of companies, however, the authors found that quite often the conventional wisdom was dead wrong. And when the rules failed, the reason was always the same: Companies trip up when they try to attract large volumes of customers without understanding (1) the strength of mutual attraction among various customer groups and (2) the extent of asymmetric attraction among them. Looking at examples such as TripAdvisor, Wikipedia, and the New York Times, the authors offer strategies for competing in markets with network effects. New entrants should focus on customer groups that they are uniquely positioned to serve or appeal to the most attractive customers in a market. Incumbents pursuing growth strategies in adjacent markets or new geographies should consider how similar the needs of new customers are to those of existing customers. Offering complements also allows incumbents to reach additional customer groups.

  7. 19 CFR 191.24 - Certificate of manufacture and delivery.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Section 191.24 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... manufactured or produced under a general manufacturing drawback ruling, the unique computer-generated number... manufactured or produced under a specific manufacturing drawback ruling, either the unique computer number or...

  8. 19 CFR 191.24 - Certificate of manufacture and delivery.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Section 191.24 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... manufactured or produced under a general manufacturing drawback ruling, the unique computer-generated number... manufactured or produced under a specific manufacturing drawback ruling, either the unique computer number or...

  9. 19 CFR 191.24 - Certificate of manufacture and delivery.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Section 191.24 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... manufactured or produced under a general manufacturing drawback ruling, the unique computer-generated number... manufactured or produced under a specific manufacturing drawback ruling, either the unique computer number or...

  10. 19 CFR 191.24 - Certificate of manufacture and delivery.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Section 191.24 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... manufactured or produced under a general manufacturing drawback ruling, the unique computer-generated number... manufactured or produced under a specific manufacturing drawback ruling, either the unique computer number or...

  11. 19 CFR 191.24 - Certificate of manufacture and delivery.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Section 191.24 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... manufactured or produced under a general manufacturing drawback ruling, the unique computer-generated number... manufactured or produced under a specific manufacturing drawback ruling, either the unique computer number or...

  12. THE COMPUTER CONCEPT OF SELF-INSTRUCTIONAL DEVICES.

    ERIC Educational Resources Information Center

    SILBERMAN, HARRY F.

    THE COMPUTER SYSTEM CONCEPT WILL BE DEVELOPED IN TWO WAYS--FIRST, A DESCRIPTION WILL BE MADE OF THE SMALL COMPUTER-BASED TEACHING MACHINE WHICH IS BEING USED AS A RESEARCH TOOL, SECOND, A DESCRIPTION WILL BE MADE OF THE LARGE COMPUTER LABORATORY FOR AUTOMATED SCHOOL SYSTEMS WHICH ARE BEING DEVELOPED. THE FIRST MACHINE CONSISTS OF THREE ELEMENTS--…

  13. Interfacing HTCondor-CE with OpenStack

    NASA Astrophysics Data System (ADS)

    Bockelman, B.; Caballero Bejar, J.; Hover, J.

    2017-10-01

    Over the past few years, Grid Computing technologies have reached a high level of maturity. One key aspect of this success has been the development and adoption of newer Compute Elements to interface the external Grid users with local batch systems. These new Compute Elements allow for better handling of jobs requirements and a more precise management of diverse local resources. However, despite this level of maturity, the Grid Computing world is lacking diversity in local execution platforms. As Grid Computing technologies have historically been driven by the needs of the High Energy Physics community, most resource providers run the platform (operating system version and architecture) that best suits the needs of their particular users. In parallel, the development of virtualization and cloud technologies has accelerated recently, making available a variety of solutions, both commercial and academic, proprietary and open source. Virtualization facilitates performing computational tasks on platforms not available at most computing sites. This work attempts to join the technologies, allowing users to interact with computing sites through one of the standard Computing Elements, HTCondor-CE, but running their jobs within VMs on a local cloud platform, OpenStack, when needed. The system will re-route, in a transparent way, end user jobs into dynamically-launched VM worker nodes when they have requirements that cannot be satisfied by the static local batch system nodes. Also, once the automated mechanisms are in place, it becomes straightforward to allow an end user to invoke a custom Virtual Machine at the site. This will allow cloud resources to be used without requiring the user to establish a separate account. Both scenarios are described in this work.

  14. The Molecular Industrial Revolution: Automated Synthesis of Small Molecules

    PubMed Central

    Trobe, Melanie; Burke, Martin D.

    2018-01-01

    The eighteenth and nineteenth centuries marked a sweeping transition from manual to automated manufacturing on the macroscopic scale. This enabled an unmatched period of human innovation that helped drive the Industrial Revolution. The impact on society was transformative, ultimately yielding substantial improvements in living conditions and lifespan in many parts of the world. During the same time period, the first manual syntheses of organic molecules were achieved. Now, two centuries later, we are poised for an analogous transition from highly customized crafting of specific molecular targets by hand to the increasingly general and automated assembly of many different types of molecules with the push of a button. Automation of customized small molecule synthesis pathways is already enabling safer, more reproducible, and readily scalable production of specific targets, and general machines now exist for the synthesis of a wide range of different peptides, oligonucleotides, and oligosaccharides. Creating general machines that are similarly capable of making many different types of small molecules on-demand, akin to that which has been achieved on the macroscopic scale with 3D printers, has proven to be substantially more challenging. Yet important progress is being made toward this potentially transformative objective with two complementary approaches: (1) automation of customized synthesis routes to different targets via machines that enable use of many different reactions and starting materials, and (2) automation of generalized platforms that make many different targets using common coupling chemistry and building blocks. Continued progress in these exciting directions has the potential to shift the bottleneck in molecular innovation from synthesis to imagination, and thereby help drive a new industrial revolution on the molecular scale. PMID:29513400

  15. A Turing Machine Simulator.

    ERIC Educational Resources Information Center

    Navarro, Aaron B.

    1981-01-01

    Presents a program in Level II BASIC for a TRS-80 computer that simulates a Turing machine and discusses the nature of the device. The program is run interactively and is designed to be used as an educational tool by computer science or mathematics students studying computational or automata theory. (MP)
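
    The flavor of such a simulator is easy to convey; below is a minimal Python analogue (a sketch, not a port of the Level II BASIC program described above).

        # Transition table maps (state, symbol) -> (new_state, write, move).
        def run_turing(tape, transitions, state="q0", accept="halt", blank="_"):
            cells = dict(enumerate(tape))   # sparse tape
            head = 0
            while state != accept:
                symbol = cells.get(head, blank)
                state, write, move = transitions[(state, symbol)]
                cells[head] = write
                head += 1 if move == "R" else -1
            return "".join(cells[i] for i in sorted(cells))

        # Example: flip every bit, halting at the first blank cell.
        flip = {
            ("q0", "0"): ("q0", "1", "R"),
            ("q0", "1"): ("q0", "0", "R"),
            ("q0", "_"): ("halt", "_", "R"),
        }
        print(run_turing("1011", flip))   # -> 0100_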

  16. NREL's variable speed test bed: Preliminary results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlin, P.W.; Fingersh, L.J.; Fuchs, E.F.

    1996-10-01

    Under an NREL subcontract, the Electrical and Computer Engineering Department of the University of Colorado (CU) designed a 20-kilowatt, 12-pole, permanent-magnet, electric generator and associated custom power electronics modules. This system can supply power over a generator speed range from 60 to 120 RPM. The generator was fabricated and assembled by the Denver electric-motor manufacturer, Unique Mobility, and the power electronics modules were designed and fabricated at the University. The generator was installed on a 56-foot tower in the modified nacelle of a Grumman Windstream 33 wind turbine in early October 1995. For checkout it was immediately loaded directly into a three-phase resistive load, in which it produced 3.5 kilowatts of power. The ten-meter Grumman host wind machine is equipped with untwisted, untapered, NREL series S809 blades. The machine was instrumented to record both mechanical hub power and electrical power delivered to the utility. Initial tests are focusing on validating the calculated power surface; this mathematical surface shows the wind machine power as a function of both wind speed and turbine rotor speed. Upon the completion of this task, maximum effort will be directed toward filling a test matrix in which variable-speed operation will be contrasted with constant-speed operation by switching between the variable-speed control algorithm and the baseline constant-speed control algorithm at 10-minute intervals. Other quantities in the test matrix will be analyzed to detect variable-speed effects on structural loads and power quality.

  17. Could EBT Machines Increase Fruit and Vegetable Purchases at New York City Green Carts?

    PubMed

    Breck, Andrew; Kiszko, Kamila; Martinez, Olivia; Abrams, Courtney; Elbel, Brian

    2017-09-21

    Residents of some low-income neighborhoods have limited access to fresh fruits and vegetables. In 2008, New York City issued new mobile fruit and vegetable cart licenses for neighborhoods with inadequate availability of fresh produce. Some of these carts were equipped with electronic benefit transfer (EBT) machines, allowing them to accept Supplemental Nutrition Assistance Program (SNAP) benefits. This article examines the association between type and quantities of fruits and vegetables purchased from mobile fruit and vegetable vendors and consumer characteristics, including payment method. Customers at 4 produce carts in the Bronx, New York, were surveyed during 3 periods in 2013 and 2014. Survey data, including purchased fruit and vegetable quantities, were analyzed using multivariable negative binomial regressions, with payment method (cash only vs EBT or EBT and cash) as the primary independent variable. Covariates included availability of EBT, vendor, and customer sociodemographic characteristics. A total of 779 adults participated in this study. Shoppers who used SNAP benefits purchased an average of 5.4 more cup equivalents of fruits and vegetables than did shoppers who paid with cash. Approximately 80% of this difference was due to higher quantities of purchased fruits. Expanding access to EBT machines at mobile produce carts may increase purchases of fruits and vegetables from these vendors.

  18. Could EBT Machines Increase Fruit and Vegetable Purchases at New York City Green Carts?

    PubMed Central

    Breck, Andrew; Kiszko, Kamila; Martinez, Olivia; Abrams, Courtney

    2017-01-01

    Introduction: Residents of some low-income neighborhoods have limited access to fresh fruits and vegetables. In 2008, New York City issued new mobile fruit and vegetable cart licenses for neighborhoods with inadequate availability of fresh produce. Some of these carts were equipped with electronic benefit transfer (EBT) machines, allowing them to accept Supplemental Nutrition Assistance Program (SNAP) benefits. This article examines the association between type and quantities of fruits and vegetables purchased from mobile fruit and vegetable vendors and consumer characteristics, including payment method. Methods: Customers at 4 produce carts in the Bronx, New York, were surveyed during 3 periods in 2013 and 2014. Survey data, including purchased fruit and vegetable quantities, were analyzed using multivariable negative binomial regressions, with payment method (cash only vs EBT or EBT and cash) as the primary independent variable. Covariates included availability of EBT, vendor, and customer sociodemographic characteristics. Results: A total of 779 adults participated in this study. Shoppers who used SNAP benefits purchased an average of 5.4 more cup equivalents of fruits and vegetables than did shoppers who paid with cash. Approximately 80% of this difference was due to higher quantities of purchased fruits. Conclusion: Expanding access to EBT machines at mobile produce carts may increase purchases of fruits and vegetables from these vendors. PMID:28934080

  19. Enhanced way of securing automated teller machine to track the misusers using secure monitor tracking analysis

    NASA Astrophysics Data System (ADS)

    Sadhasivam, Jayakumar; Alamelu, M.; Radhika, R.; Ramya, S.; Dharani, K.; Jayavel, Senthil

    2017-11-01

    Nowadays people's use of Automated Teller Machines (ATMs) has been increasing, even in rural areas. At present, the only security measure provided by banks is the ATM PIN. Hackers can easily identify the PIN and withdraw money if they have stolen the ATM card; in other cases the machine itself is broken open and the money stolen. To overcome these disadvantages, we propose an approach, the "Automated Secure Tracking System," to secure the ATM and track tampering. In this approach, when a bank account is created, the bank scans the customer's iris (including its characteristic position and movement) and fingerprint, identified from minimal measurements. When the card is swiped, the ATM requests the PIN, scans the iris, and recognizes the fingerprint before allowing the customer to withdraw money. If somebody tries to break into the ATM, an alert message is sent to the nearest police station and the ATM shutter closes automatically. This helps in catching hackers who withdraw money with stolen ATM cards, and also helps the government identify criminals easily.
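
    A control-flow sketch of the proposed checks, assuming account creation enrolls both biometric templates. The account structure, matcher stub, and return codes below are illustrative inventions, not part of the paper.

        # Biometric matching is stubbed with exact comparison; a real ATM
        # would use certified iris/fingerprint SDKs and secure storage.
        from dataclasses import dataclass

        @dataclass
        class Account:
            pin: str
            iris_template: bytes
            finger_template: bytes

        def matches(sample: bytes, template: bytes) -> bool:
            return sample == template   # stand-in for a real matcher

        def authorize(acct: Account, pin: str, iris: bytes, finger: bytes) -> str:
            if pin != acct.pin:
                return "reject"
            if not (matches(iris, acct.iris_template) and
                    matches(finger, acct.finger_template)):
                # per the proposed scheme: alert police, close the shutter
                return "alert_and_close_shutter"
            return "dispense"

        acct = Account("1234", b"iris-enrolled", b"finger-enrolled")
        print(authorize(acct, "1234", b"iris-enrolled", b"finger-enrolled"))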

  20. 47 CFR 76.980 - Charges for customer changes.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....980 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... charge for customer changes in service tiers effected solely by coded entry on a computer terminal or by... involve more than coded entry on a computer or other similarly simple method shall be based on actual cost...

  1. 47 CFR 76.980 - Charges for customer changes.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ....980 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... charge for customer changes in service tiers effected solely by coded entry on a computer terminal or by... involve more than coded entry on a computer or other similarly simple method shall be based on actual cost...

  2. 47 CFR 76.980 - Charges for customer changes.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....980 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... charge for customer changes in service tiers effected solely by coded entry on a computer terminal or by... involve more than coded entry on a computer or other similarly simple method shall be based on actual cost...

  3. 47 CFR 76.980 - Charges for customer changes.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....980 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... charge for customer changes in service tiers effected solely by coded entry on a computer terminal or by... involve more than coded entry on a computer or other similarly simple method shall be based on actual cost...

  4. 47 CFR 76.980 - Charges for customer changes.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....980 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... charge for customer changes in service tiers effected solely by coded entry on a computer terminal or by... involve more than coded entry on a computer or other similarly simple method shall be based on actual cost...

  5. 19 CFR 4.99 - Forms; substitution.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... the instructions shall be followed. (c) The port director, in his discretion, may accept a computer printout instead of Customs Form 1302 for use at a specific port. However, to ensure that computer...

  6. 19 CFR 4.99 - Forms; substitution.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... the instructions shall be followed. (c) The port director, in his discretion, may accept a computer printout instead of Customs Form 1302 for use at a specific port. However, to ensure that computer...

  7. 19 CFR 4.99 - Forms; substitution.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... the instructions shall be followed. (c) The port director, in his discretion, may accept a computer printout instead of Customs Form 1302 for use at a specific port. However, to ensure that computer...

  8. 19 CFR 191.7 - General manufacturing drawback ruling.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Section 191.7 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... production under § 191.2(q) of this subpart. (2) Computer-generated number. With the letter of acknowledgment the drawback office shall include the unique computer-generated number assigned to the acknowledgment...

  9. 19 CFR 191.7 - General manufacturing drawback ruling.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Section 191.7 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... production under § 191.2(q) of this subpart. (2) Computer-generated number. With the letter of acknowledgment the drawback office shall include the unique computer-generated number assigned to the acknowledgment...

  10. 19 CFR 191.7 - General manufacturing drawback ruling.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Section 191.7 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... production under § 191.2(q) of this subpart. (2) Computer-generated number. With the letter of acknowledgment the drawback office shall include the unique computer-generated number assigned to the acknowledgment...

  11. 19 CFR 4.99 - Forms; substitution.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... the instructions shall be followed. (c) The port director, in his discretion, may accept a computer printout instead of Customs Form 1302 for use at a specific port. However, to ensure that computer...

  12. 19 CFR 4.99 - Forms; substitution.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... the instructions shall be followed. (c) The port director, in his discretion, may accept a computer printout instead of Customs Form 1302 for use at a specific port. However, to ensure that computer...

  13. Design and Construction of a Multi-wavelength, Micromirror Total Internal Reflectance Fluorescence Microscope

    PubMed Central

    Larson, Joshua; Kirk, Matt; Drier, Eric A.; O’Brien, William; MacKay, James F.; Friedman, Larry; Hoskins, Aaron

    2015-01-01

    Colocalization Single Molecule Spectroscopy (CoSMoS) has proven to be a useful method for studying the composition, kinetics, and mechanisms of complex cellular machines. Key to the technique is the ability to simultaneously monitor multiple proteins and/or nucleic acids as they interact with one another. Here we describe a protocol for constructing a CoSMoS micromirror Total Internal Reflection Fluorescence Microscope (mmTIRFM). Design and construction of a scientific microscope often requires a number of custom components and a significant time commitment. In our protocol, we have streamlined this process by implementation of a commercially available microscopy platform designed to accommodate the optical components necessary for a mmTIRFM. The mmTIRF system eliminates the need for machining custom parts by the end-user and facilitates optical alignment. Depending on the experience-level of the microscope builder, these time-savings and the following protocol can enable mmTIRF construction to be completed within two months. PMID:25188633

  14. Design and construction of a multiwavelength, micromirror total internal reflectance fluorescence microscope.

    PubMed

    Larson, Joshua; Kirk, Matt; Drier, Eric A; O'Brien, William; MacKay, James F; Friedman, Larry J; Hoskins, Aaron A

    2014-10-01

    Colocalization single-molecule spectroscopy (CoSMoS) has proven to be a useful method for studying the composition, kinetics and mechanisms of complex cellular machines. Key to the technique is the ability to simultaneously monitor multiple proteins and/or nucleic acids as they interact with one another. Here we describe a protocol for constructing a CoSMoS micromirror total internal reflection fluorescence microscope (mmTIRFM). Design and construction of a scientific microscope often requires a number of custom components and a substantial time commitment. In our protocol, we have streamlined this process by implementation of a commercially available microscopy platform designed to accommodate the optical components necessary for an mmTIRFM. The mmTIRF system eliminates the need for machining custom parts by the end user and facilitates optical alignment. Depending on the experience level of the microscope builder, these time savings and the following protocol can enable mmTIRF construction to be completed within 2 months.

  15. The Molecular Industrial Revolution: Automated Synthesis of Small Molecules.

    PubMed

    Trobe, Melanie; Burke, Martin D

    2018-04-09

    Today we are poised for a transition from the highly customized crafting of specific molecular targets by hand to the increasingly general and automated assembly of different types of molecules with the push of a button. Creating machines that are capable of making many different types of small molecules on demand, akin to that which has been achieved on the macroscale with 3D printers, is challenging. Yet important progress is being made toward this objective with two complementary approaches: 1) Automation of customized synthesis routes to different targets by machines that enable the use of many reactions and starting materials, and 2) automation of generalized platforms that make many different targets using common coupling chemistry and building blocks. Continued progress in these directions has the potential to shift the bottleneck in molecular innovation from synthesis to imagination, and thereby help drive a new industrial revolution on the molecular scale. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Operating System For Numerically Controlled Milling Machine

    NASA Technical Reports Server (NTRS)

    Ray, R. B.

    1992-01-01

    OPMILL program is operating system for Kearney and Trecker milling machine providing fast easy way to program manufacture of machine parts with IBM-compatible personal computer. Gives machinist "equation plotter" feature, which plots equations that define movements and converts equations to milling-machine-controlling program moving cutter along defined path. System includes tool-manager software handling up to 25 tools and automatically adjusts to account for each tool. Developed on IBM PS/2 computer running DOS 3.3 with 1 MB of random-access memory.
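
    The "equation plotter" idea (sampling a curve and converting it to cutter moves) can be sketched as follows. The sketch emits standard G-code mnemonics (G00/G01); it illustrates the concept only and is not OPMILL's actual output format.

        import math

        def equation_to_gcode(f, x0, x1, steps=50, feed=200.0):
            lines = ["G90", "G21"]   # absolute positioning, millimeters
            lines.append(f"G00 X{x0:.3f} Y{f(x0):.3f}")   # rapid to start point
            dx = (x1 - x0) / steps
            for i in range(1, steps + 1):
                x = x0 + i * dx
                lines.append(f"G01 X{x:.3f} Y{f(x):.3f} F{feed:.0f}")
            return "\n".join(lines)

        # A sine-wave cutter path sampled with six linear segments:
        print(equation_to_gcode(lambda x: 10 * math.sin(x / 10), 0.0, 60.0, steps=6))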

  17. Observing System Simulation Experiment (OSSE) for the HyspIRI Spectrometer Mission

    NASA Technical Reports Server (NTRS)

    Turmon, Michael J.; Block, Gary L.; Green, Robert O.; Hua, Hook; Jacob, Joseph C.; Sobel, Harold R.; Springer, Paul L.; Zhang, Qingyuan

    2010-01-01

    The OSSE software provides an integrated end-to-end environment to simulate an Earth observing system by iteratively running a distributed modeling workflow based on the HyspIRI Mission, including atmospheric radiative transfer, surface albedo effects, detection, and retrieval for agile exploration of the mission design space. The software enables an Observing System Simulation Experiment (OSSE) and can be used for design trade space exploration of science return for proposed instruments by modeling the whole ground truth, sensing, and retrieval chain and to assess retrieval accuracy for a particular instrument and algorithm design. The OSSE infrastructure is extensible to future National Research Council (NRC) Decadal Survey concept missions where integrated modeling can improve the fidelity of coupled science and engineering analyses for systematic analysis and science return studies. This software has a distributed architecture that gives it a distinct advantage over other similar efforts. The workflow modeling components are typically legacy computer programs implemented in a variety of programming languages, including MATLAB, Excel, and FORTRAN. Integration of these diverse components is difficult and time-consuming. In order to hide this complexity, each modeling component is wrapped as a Web Service, and each component is able to pass analysis parameterizations, such as reflectance or radiance spectra, on to the next component downstream in the service workflow chain. In this way, the interface to each modeling component becomes uniform and the entire end-to-end workflow can be run using any existing or custom workflow processing engine. The architecture lets users extend workflows as new modeling components become available, chain together the components using any existing or custom workflow processing engine, and distribute them across any Internet-accessible Web Service endpoints. The workflow components can be hosted on any Internet-accessible machine. This has the advantages that the computations can be distributed to make best use of the available computing resources, and each workflow component can be hosted and maintained by their respective domain experts.
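
    The uniform-interface idea can be sketched as a chain of JSON-over-HTTP calls; the endpoint URLs and payload fields below are hypothetical, and any workflow engine could drive the same chain.

        import json
        from urllib import request

        def call_component(url, payload):
            req = request.Request(url, data=json.dumps(payload).encode(),
                                  headers={"Content-Type": "application/json"})
            with request.urlopen(req) as resp:
                return json.loads(resp.read())

        def run_chain(spectra, endpoints):
            result = {"spectra": spectra}
            for url in endpoints:   # each stage consumes the previous output
                result = call_component(url, result)
            return result

        # Example (hypothetical endpoints):
        # run_chain(reflectance, ["http://osse/rt", "http://osse/sensor",
        #                         "http://osse/retrieval"])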

  18. Introduction to the theory of machines and languages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weidhaas, P. P.

    1976-04-01

    This text is intended to be an elementary "guided tour" through some basic concepts of modern computer science. Various models of computing machines and formal languages are studied in detail. Discussions center around questions such as, "What is the scope of problems that can or cannot be solved by computers?"

  19. Optimization of large matrix calculations for execution on the Cray X-MP vector supercomputer

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1988-01-01

    A considerable volume of large computational codes was developed for NASA over the past twenty-five years. This code represents algorithms developed for machines of an earlier generation. With the emergence of the vector supercomputer as a viable, commercially available machine, an opportunity exists to evaluate optimization strategies to improve the efficiency of existing software, primarily because of architectural differences between the latest generation of large-scale machines and the earlier, mostly uniprocessor, machines. A software package being used by NASA to perform computations on large matrices is described, and a strategy for conversion to the Cray X-MP vector supercomputer is also described.
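
    The strategy can be illustrated in a modern setting: the NumPy sketch below contrasts a serial element-by-element loop with a single whole-array (vectorized) operation. It is an analogue of the conversion strategy, not the NASA matrix package itself.

        import numpy as np

        def matvec_scalar(a, x):
            y = np.zeros(a.shape[0])
            for i in range(a.shape[0]):        # serial: one element at a time
                for j in range(a.shape[1]):
                    y[i] += a[i, j] * x[j]
            return y

        def matvec_vector(a, x):
            return a @ x                       # one vectorized operation

        a = np.random.rand(200, 200)
        x = np.random.rand(200)
        assert np.allclose(matvec_scalar(a, x), matvec_vector(a, x))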

  20. Recursive computer architecture for VLSI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Treleaven, P.C.; Hopkins, R.P.

    1982-01-01

    A general-purpose computer architecture based on the concept of recursion and suitable for VLSI computer systems built from replicated (lego-like) computing elements is presented. The recursive computer architecture is defined by presenting a program organisation, a machine organisation and an experimental machine implementation oriented to VLSI. The experimental implementation is being restricted to simple, identical microcomputers, each containing a memory, a processor and a communications capability. This future generation of lego-like computer systems is termed fifth-generation computers by the Japanese. 30 references.

  1. Drill user's manual. [drilling machine automation

    NASA Technical Reports Server (NTRS)

    Pitts, E. A.

    1976-01-01

    Instructions are given for using the DRILL computer program which converts data contained in an Interactive Computer Graphics System (IGDS) design file to production of a paper tape for driving a numerically controlled drilling machine.

  2. Application of high-performance computing to numerical simulation of human movement

    NASA Technical Reports Server (NTRS)

    Anderson, F. C.; Ziegler, J. M.; Pandy, M. G.; Whalen, R. T.

    1995-01-01

    We have examined the feasibility of using massively-parallel and vector-processing supercomputers to solve large-scale optimization problems for human movement. Specifically, we compared the computational expense of determining the optimal controls for the single support phase of gait using a conventional serial machine (SGI Iris 4D25), a MIMD parallel machine (Intel iPSC/860), and a parallel-vector-processing machine (Cray Y-MP 8/864). With the human body modeled as a 14 degree-of-freedom linkage actuated by 46 musculotendinous units, computation of the optimal controls for gait could take up to 3 months of CPU time on the Iris. Both the Cray and the Intel are able to reduce this time to practical levels. The optimal solution for gait can be found with about 77 hours of CPU on the Cray and with about 88 hours of CPU on the Intel. Although the overall speeds of the Cray and the Intel were found to be similar, the unique capabilities of each machine are better suited to different portions of the computational algorithm used. The Intel was best suited to computing the derivatives of the performance criterion and the constraints whereas the Cray was best suited to parameter optimization of the controls. These results suggest that the ideal computer architecture for solving very large-scale optimal control problems is a hybrid system in which a vector-processing machine is integrated into the communication network of a MIMD parallel machine.

  3. 14 CFR § 1214.107 - Postponement.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Provisions Regarding Space Shuttle Flights of Payloads for Non-U.S. Government, Reimbursable Customers § 1214... customer. (b) A customer postponing the flight of a payload will pay a postponement fee to NASA. The fee will be computed as a percentage of the customer's Shuttle standard flight price and will be based on...

  4. Systematics for checking geometric errors in CNC lathes

    NASA Astrophysics Data System (ADS)

    Araújo, R. P.; Rolim, T. L.

    2015-10-01

    Non-idealities present in machine tools directly compromise both the geometry and the dimensions of machined parts, generating distortions in the project. Given the competitive scenario among different companies, it is necessary to know the geometric behavior of these machines in order to establish their processing capability, avoiding waste of time and materials and satisfying customer requirements. Although geometric tests are important and necessary for the correct use of the machine, and can prevent future damage, most users do not apply such tests, whether for lack of knowledge or lack of motivation, basically due to two factors: the long testing time and its high cost. This work proposes a systematics for checking straightness and perpendicularity errors in CNC lathes demanding little time and cost with high metrological reliability, to be used on the factory floors of small and medium-size businesses to ensure the quality of their products and make them competitive.
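
    As an illustration of the kind of computation such a systematics involves, the sketch below evaluates straightness as the peak-to-valley deviation of probe readings from a least-squares reference line; the sample data are invented.

        import numpy as np

        def straightness_error(positions, readings):
            slope, intercept = np.polyfit(positions, readings, 1)
            residuals = readings - (slope * positions + intercept)
            return residuals.max() - residuals.min()   # peak-to-valley, same units

        z = np.linspace(0, 300, 7)    # mm along the lathe axis
        r = np.array([0.000, 0.004, 0.007, 0.006, 0.009, 0.012, 0.011])   # mm
        print(f"straightness error: {straightness_error(z, r):.3f} mm")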

  5. USSR Report, Kommunist, No. 13, September 1986.

    DTIC Science & Technology

    1987-01-07

    all-union) program for specialization of NPO and industrial enterprises and their scientific research institutes and design bureaus could play a major...machine tools with numerical programming (ChPU), processing centers, automatic machines and groups of automatic machines controlled by computers, and...automatic lines, computer- controlled groups of equipment, comprehensively automated shops and sections) is the most important feature of high technical

  6. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.

  7. 19 CFR 201.14 - Computation of time, additional hearings, postponements, continuances, and extensions of time.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 19 Customs Duties 3 2014-04-01 2014-04-01 false Computation of time, additional hearings, postponements, continuances, and extensions of time. 201.14 Section 201.14 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Initiation and Conduct of Investigations...

  8. 19 CFR 201.14 - Computation of time, additional hearings, postponements, continuances, and extensions of time.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 19 Customs Duties 3 2013-04-01 2013-04-01 false Computation of time, additional hearings, postponements, continuances, and extensions of time. 201.14 Section 201.14 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Initiation and Conduct of Investigations...

  9. 19 CFR 210.6 - Computation of time, additional hearings, postponements, continuances, and extensions of time.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Computation of time, additional hearings, postponements, continuances, and extensions of time. 210.6 Section 210.6 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE ADJUDICATION AND ENFORCEMENT...

  10. 19 CFR 10.14 - Fabricated components subject to the exemption.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ....14 Section 10.14 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... assembly for a computer is assembled in the United States by soldering American-made and foreign-made... electronic function and is ready for incorporation into the computer. The foreign-made components have...

  11. 19 CFR 10.14 - Fabricated components subject to the exemption.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ....14 Section 10.14 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... assembly for a computer is assembled in the United States by soldering American-made and foreign-made... electronic function and is ready for incorporation into the computer. The foreign-made components have...

  12. 19 CFR 10.14 - Fabricated components subject to the exemption.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ....14 Section 10.14 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... assembly for a computer is assembled in the United States by soldering American-made and foreign-made... electronic function and is ready for incorporation into the computer. The foreign-made components have...

  13. 19 CFR 10.14 - Fabricated components subject to the exemption.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ....14 Section 10.14 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... assembly for a computer is assembled in the United States by soldering American-made and foreign-made... electronic function and is ready for incorporation into the computer. The foreign-made components have...

  14. Customized Geological Map Patterns for the Macintosh Computer.

    ERIC Educational Resources Information Center

    Boyer, Paul Slayton

    1986-01-01

    Describes how the graphics capabilities of the Apple Macintosh computer can be used in geological teaching by customizing fill patterns with lithologic symbols. Presents two methods for doing this: creating a dummy document, or changing the pattern resource resident in the operating system. Special symbols can also replace fonts. (TW)

  15. A custom hardware classifier for bruised apple detection in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Cárdenas, Javier; Figueroa, Miguel; Pezoa, Jorge E.

    2015-09-01

    We present a custom digital architecture for bruised apple classification using hyperspectral images in the near infrared (NIR) spectrum. The algorithm classifies each pixel in an image into one of three classes: bruised, non-bruised, and background. We extract two 5-element feature vectors for each pixel using only 10 out of the 236 spectral bands provided by the hyperspectral camera, thereby greatly reducing both the requirements of the imager and the computational complexity of the algorithm. We then use two linear-kernel support vector machines (SVMs) to classify each pixel. Each SVM was trained, for each class, with 504 windows of 17×17 pixels taken from 14 hyperspectral images of 320×320 pixels each. The architecture then computes the percentage of bruised pixels in each apple in order to adequately classify the fruit. We implemented the architecture on a Xilinx Zynq Z-7010 field-programmable gate array (FPGA) and tested it in laboratory conditions on images from a NIR N17E push-broom camera with a frame rate of 25 fps, a band-pixel rate of 1.888 MHz, and 236 spectral bands between 900 and 1700 nanometers. Using 28-bit fixed-point arithmetic, the circuit accurately discriminates 95.2% of the pixels corresponding to an apple, 81% of the pixels corresponding to a bruised apple, and 96.4% of the background. With the default threshold settings, the highest false-positive (FP) rate for a bruised apple is 18.7%. The circuit operates at the native frame rate of the camera, consumes 67 mW of dynamic power, and uses less than 10% of the logic resources on the FPGA.
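
    A software analogue of the per-pixel classifier, sketched with scikit-learn's linear SVM on synthetic 5-band feature vectors. (The paper's FPGA implementation uses fixed-point arithmetic and one SVM per decision; the data below are invented stand-ins for the labeled hyperspectral windows.)

        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        n = 500
        bruised = rng.normal(0.3, 0.05, (n, 5))   # fake 5-band NIR features
        sound = rng.normal(0.5, 0.05, (n, 5))
        X = np.vstack([bruised, sound])
        y = np.array([1] * n + [0] * n)           # 1 = bruised pixel

        clf = LinearSVC(C=1.0).fit(X, y)
        pixels = rng.normal(0.32, 0.05, (10, 5))
        frac_bruised = clf.predict(pixels).mean()  # fraction of bruised pixels
        print(f"bruised fraction: {frac_bruised:.0%}")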

  16. Design of a high-speed digital processing element for parallel simulation

    NASA Technical Reports Server (NTRS)

    Milner, E. J.; Cwynar, D. S.

    1983-01-01

    A prototype of a custom designed computer to be used as a processing element in a multiprocessor based jet engine simulator is described. The purpose of the custom design was to give the computer the speed and versatility required to simulate a jet engine in real time. Real time simulations are needed for closed loop testing of digital electronic engine controls. The prototype computer has a microcycle time of 133 nanoseconds. This speed was achieved by: prefetching the next instruction while the current one is executing, transporting data using high speed data busses, and using state of the art components such as a very large scale integration (VLSI) multiplier. Included are discussions of processing element requirements, design philosophy, the architecture of the custom designed processing element, the comprehensive instruction set, the diagnostic support software, and the development status of the custom design.

  17. Stochastic subset selection for learning with kernel machines.

    PubMed

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales super linearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.
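
    The core computation can be sketched as follows: a kernel expansion f(x) = sum_i alpha_i * K(x, sv_i) + b evaluated over a randomly indexed subset of the stored support vectors. This illustrates the idea of stochastic subset selection, not the authors' exact selection rule.

        import numpy as np

        def rbf(x, s, gamma=1.0):
            return np.exp(-gamma * np.sum((x - s) ** 2, axis=-1))

        def subset_prediction(x, svs, alphas, b, subset_size, rng):
            idx = rng.choice(len(svs), size=subset_size, replace=False)
            return alphas[idx] @ rbf(x, svs[idx]) + b   # partial expansion

        rng = np.random.default_rng(1)
        svs = rng.normal(size=(200, 3))     # stored support vectors
        alphas = rng.normal(size=200)       # expansion coefficients
        print(subset_prediction(rng.normal(size=3), svs, alphas, 0.0, 50, rng))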

  18. Nonlinear programming for classification problems in machine learning

    NASA Astrophysics Data System (ADS)

    Astorino, Annabella; Fuduli, Antonio; Gaudioso, Manlio

    2016-10-01

    We survey some nonlinear models for classification problems arising in machine learning. In recent years this field has become increasingly relevant owing to many practical applications, such as text and web classification, object recognition in machine vision, gene expression profile analysis, DNA and protein analysis, medical diagnosis, and customer profiling. Classification deals with the separation of sets by means of appropriate separation surfaces, which are generally obtained by solving a numerical optimization model. While linear separability is the basis of the most popular approach to classification, the Support Vector Machine (SVM), in recent years the use of nonlinear separating surfaces has received some attention. The objective of this work is to recall some of these proposals, mainly in terms of the numerical optimization models. In particular we tackle the polyhedral, ellipsoidal, spherical and conical separation approaches and, for some of them, we also consider the semisupervised versions.

  19. A new method for getting the three-dimensional curve of the groove of a spectacle frame by optical measuring

    NASA Astrophysics Data System (ADS)

    Rückwardt, M.; Göpfert, A.; Schnellhorn, M.; Correns, M.; Rosenberger, M.; Linß, G.

    2010-07-01

    Precise measurement of spectacle frames is an important field of quality assurance for opticians and their customers. Different suppliers and a number of measuring methods are available, but all of them are tactile. In this paper the possible use of optical coordinate measuring machines for detecting the groove of a spectacle frame is discussed. The ambient conditions, such as deviation and measuring time, are as multifaceted as the quality characteristics and the measuring objects themselves, and have to be tested. The main challenge for an optical coordinate measuring machine, however, is the blocked optical path, because the device under test is located behind an undercut. In this case it is necessary to deflect the beam of the machine, for example with a rotating plane mirror. In the next step the difficulties of applying machine vision to the spectacle frame are explained. Finally, first results are given.

  20. 76 FR 35007 - Notice of Issuance of Final Determination Concerning the Country of Origin of Certain Office Chairs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-15

    ... learning environments. The merchandise at issue is the Herman Miller SAYL task chair and the SAYL side... exact size and shape requested by Herman Miller. The TPU mesh is placed in a custom-made machine, which...

  1. Design and optimize of 3-axis filament winding machine

    NASA Astrophysics Data System (ADS)

    Quanjin, Ma; Rejab, M. R. M.; Idris, M. S.; Bachtiar, B.; Siregar, J. P.; Harith, M. N.

    2017-10-01

    The filament winding technique was developed as the primary process for fabricating composite cylindrical structures at low cost. Fibres are wound on a rotating mandrel by a filament winding machine, where resin-impregnated fibres pass through a pay-out eye. This paper aims to develop and optimize a 3-axis, lightweight, practical, efficient, portable filament winding machine to satisfy customer demand, capable of fabricating pipes and round cylinders with resins. The machine has 3 main units: the rotary unit, the delivery unit, and the control system unit. Compared with the existing filament winding machines in the factory, it has 3 degrees of freedom and can fabricate more complex specimens, depending on the mandrel shape and the control system. The machine has been designed and fabricated with movement on 3 axes under a control system: the x-axis for movement of the carriage, the y-axis for rotation of the mandrel, and the z-axis for movement of the pay-out eye. Cylindrical specimens with different dimensions and winding angles were produced. A 3-axis automated filament winding machine has thus been successfully designed with a simple control system.
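
    For helical winding, the carriage feed and mandrel rotation are linked by the standard relation tan(theta) = omega * r / v, where theta is the winding angle measured from the mandrel axis. A sketch with illustrative numbers (not taken from the machine in the paper):

        import math

        def carriage_speed(mandrel_rpm, mandrel_radius_mm, winding_angle_deg):
            omega = mandrel_rpm * 2 * math.pi / 60      # rad/s
            surface_speed = omega * mandrel_radius_mm   # mm/s, circumferential
            return surface_speed / math.tan(math.radians(winding_angle_deg))

        # 60 RPM mandrel of 50 mm radius at a 55-degree winding angle:
        print(f"{carriage_speed(60, 50, 55):.1f} mm/s axial carriage feed")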

  2. Quantum Computing: Solving Complex Problems

    ScienceCinema

    DiVincenzo, David

    2018-05-22

    One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.

  3. When technology became language: the origins of the linguistic conception of computer programming, 1950-1960.

    PubMed

    Nofre, David; Priestley, Mark; Alberts, Gerard

    2014-01-01

    Language is one of the central metaphors around which the discipline of computer science has been built. The language metaphor entered modern computing as part of a cybernetic discourse, but during the second half of the 1950s acquired a more abstract meaning, closely related to the formal languages of logic and linguistics. The article argues that this transformation was related to the appearance of the commercial computer in the mid-1950s. Managers of computing installations and specialists on computer programming in academic computer centers, confronted with an increasing variety of machines, called for the creation of "common" or "universal languages" to enable the migration of computer code from machine to machine. Finally, the article shows how the idea of a universal language was a decisive step in the emergence of programming languages, in the recognition of computer programming as a proper field of knowledge, and eventually in the way we think of the computer.

  4. Development of sacrificial support fixture using deflection analysis

    NASA Astrophysics Data System (ADS)

    Ramteke, Ashwini M.; Ashtankar, Kishor M.

    2018-04-01

    Sacrificial support fixtures are structures used to hold a part during machining while it is rotated about the fourth axis of a CNC machine. In four-axis CNC machining the part is held in an indexer that rotates it about the fourth axis, so using traditional fixturing devices such as jigs, V-blocks, and clamping plates requires several setups and manufacturing time, which increases cost: because the part rotates, a traditional fixture would have to be reoriented for each orientation of the part. Our proposed fixture-design methodology eliminates the cost associated with complicated fixture designs for customized parts, which in turn reduces the time to manufacture the fixtures. In designing the fixture layout, however, it was found that the accuracy of the part machined on a four-axis CNC machine depends directly on the deflection produced in the part: to machine an accurate part, the deflection must be minimized. We assume that the deflection produced in the part results from the deflection of the sacrificial support fixture during machining, so this paper studies the deflection of a part machined with a sacrificial support fixture using FEA analysis.
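
    The dependence on deflection can be made concrete with a textbook estimate: modeling the supported stock as a cantilever, the tip deflection under a cutting force is delta = F * L^3 / (3 * E * I). The numbers below are illustrative and are not taken from the paper's FEA.

        def cantilever_deflection(force_n, length_mm, e_mpa, i_mm4):
            # F in N, L in mm, E in MPa (N/mm^2), I in mm^4  ->  delta in mm
            return force_n * length_mm ** 3 / (3 * e_mpa * i_mm4)

        # 100 N cutting force, 80 mm overhang, aluminum (E ~ 69e3 MPa),
        # 20 x 20 mm square section: I = b * h^3 / 12.
        i_sq = 20 * 20 ** 3 / 12
        print(f"tip deflection: {cantilever_deflection(100, 80, 69e3, i_sq):.4f} mm")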

  5. Redesigning the Human-Machine Interface for Computer-Mediated Visual Technologies.

    ERIC Educational Resources Information Center

    Acker, Stephen R.

    1986-01-01

    This study examined an application of a human machine interface which relies on the use of optical bar codes incorporated in a computer-based module to teach radio production. The sequencing procedure used establishes the user rather than the computer as the locus of control for the mediated instruction. (Author/MBR)

  6. Teaching Machines to Think Fuzzy

    ERIC Educational Resources Information Center

    Technology Teacher, 2004

    2004-01-01

    Fuzzy logic programs for computers make them more human. Computers can then think through messy situations and make smart decisions. It makes computers able to control things the way people do. Fuzzy logic has been used to control subway trains, elevators, washing machines, microwave ovens, and cars. Pretty much all the human has to do is push one…

  7. Progress in computational toxicology.

    PubMed

    Ekins, Sean

    2014-01-01

    Computational methods have been widely applied to toxicology across pharmaceutical, consumer product and environmental fields over the past decade. Progress in computational toxicology is now reviewed. A literature review was performed on computational models for hepatotoxicity (e.g. for drug-induced liver injury (DILI)), cardiotoxicity, renal toxicity and genotoxicity. In addition various publications have been highlighted that use machine learning methods. Several computational toxicology model datasets from past publications were used to compare Bayesian and Support Vector Machine (SVM) learning methods. The increasing amounts of data for defined toxicology endpoints have enabled machine learning models that have been increasingly used for predictions. It is shown that across many different models Bayesian and SVM perform similarly based on cross validation data. Considerable progress has been made in computational toxicology in a decade in both model development and availability of larger scale or 'big data' models. The future efforts in toxicology data generation will likely provide us with hundreds of thousands of compounds that are readily accessible for machine learning models. These models will cover relevant chemistry space for pharmaceutical, consumer product and environmental applications. Copyright © 2013 Elsevier Inc. All rights reserved.
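
    The Bayesian-versus-SVM comparison can be sketched with scikit-learn; the descriptors and endpoint below are synthetic stand-ins for a real toxicology dataset.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 20))    # fake molecular descriptors
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300)) > 0

        for name, model in [("Bayesian (NB)", GaussianNB()),
                            ("SVM (RBF)", SVC(kernel="rbf"))]:
            auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
            print(f"{name}: AUC = {auc.mean():.2f} +/- {auc.std():.2f}")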

  8. Prediction of lung cancer patient survival via supervised machine learning classification techniques.

    PubMed

    Lynch, Chip M; Abdollahi, Behnaz; Fuqua, Joshua D; de Carlo, Alexandra R; Bartholomai, James A; Balgemann, Rayeanne N; van Berkel, Victor H; Frieboes, Hermann B

    2017-12-01

    Outcomes for cancer patients have been previously estimated by applying various machine learning techniques to large datasets such as the Surveillance, Epidemiology, and End Results (SEER) program database. In particular for lung cancer, it is not well understood which types of techniques would yield more predictive information, and which data attributes should be used in order to determine this information. In this study, a number of supervised learning techniques are applied to the SEER database to classify lung cancer patients in terms of survival, including linear regression, Decision Trees, Gradient Boosting Machines (GBM), Support Vector Machines (SVM), and a custom ensemble. Key data attributes in applying these methods include tumor grade, tumor size, gender, age, stage, and number of primaries, with the goal to enable comparison of predictive power between the various methods. The prediction is treated like a continuous target, rather than a classification into categories, as a first step towards improving survival prediction. The results show that the predicted values agree with actual values for low to moderate survival times, which constitute the majority of the data. The best performing technique was the custom ensemble with a Root Mean Square Error (RMSE) value of 15.05. The most influential model within the custom ensemble was GBM, while Decision Trees may be inapplicable as they produced too few discrete outputs. The results further show that among the five individual models generated, the most accurate was GBM with an RMSE value of 15.32. Although SVM underperformed with an RMSE value of 15.82, statistical analysis singles out the SVM as the only model that generated a distinctive output. The results of the models are consistent with a classical Cox proportional hazards model used as a reference technique. We conclude that application of these supervised learning techniques to lung cancer data in the SEER database may be of use to estimate patient survival time, with the ultimate goal to inform patient care decisions, and that the performance of these techniques with this particular dataset may be on par with that of classical methods. Copyright © 2017 Elsevier B.V. All rights reserved.
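
    A sketch of the regression setup with scikit-learn's gradient boosting: survival time treated as a continuous target and scored by RMSE. The features and data are synthetic stand-ins for the SEER attributes named above.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 6))    # stand-ins for grade, size, age, ...
        y = 20 + 5 * X[:, 0] - 3 * X[:, 2] + rng.normal(scale=4, size=1000)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
        rmse = mean_squared_error(y_te, gbm.predict(X_te)) ** 0.5
        print(f"RMSE: {rmse:.2f} (survival-time units)")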

  9. Experimental Realization of a Quantum Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Li, Zhaokai; Liu, Xiaomei; Xu, Nanyang; Du, Jiangfeng

    2015-04-01

    The fundamental principle of artificial intelligence is the ability of machines to learn from previous experience and do future work accordingly. In the age of big data, classical learning machines often require huge computational resources in many practical cases. Quantum machine learning algorithms, on the other hand, could be exponentially faster than their classical counterparts by utilizing quantum parallelism. Here, we demonstrate a quantum machine learning algorithm to implement handwriting recognition on a four-qubit NMR test bench. The quantum machine learns standard character fonts and then recognizes handwritten characters from a set with two candidates. Because of the wide spread importance of artificial intelligence and its tremendous consumption of computational resources, quantum speedup would be extremely attractive against the challenges of big data.

  10. Measurement framework for product service system performance of generator set distributors

    NASA Astrophysics Data System (ADS)

    Sofianti, Tanika D.

    2017-11-01

    Selling generator sets (gensets) in the B2B market, distributors assist manufacturers in selling products, because manufacturers have limited resources for adding the service elements needed to enhance the competitiveness of the generator sets. Some genset distributors therefore sell products together with support for their customers. An industrial distributor develops services to meet the needs of the customer: generator set distributors support the machines and equipment produced by the manufacturer, and the services they deliver can enhance the value customers obtain from the equipment. Services are provided to customers in the bidding process, the process of ordering equipment from the manufacturer, equipment delivery, installation, and the after-sales stage. This paper proposes a framework to measure the Product Service System (PSS) performance of generator set distributors in delivering their products and services, adopting the perspectives of both providers and customers and taking into account tangible and intangible products. This research leads to ideas for improving the current Product Service System of a genset distributor; further studies are needed on more detailed measures and on the implementation of measurement tools.

  11. Nozzles for Focusing Aerosol Particles

    DTIC Science & Technology

    2009-10-01

    Fabrication of the nozzle with the desired shape was accomplished using EDM technology. First, a copper tungsten electrode was turned on a CNC lathe. The ... small (0.9-mm diameter). The external portions of the nozzles were machined in a more conventional manner using computer numerical control (CNC) lathes and milling machines running programs written by computer-aided machining (CAM) software. The close tolerance of concentricity of the two

  12. Purchasing a Computer System for the Small Construction Company,

    DTIC Science & Technology

    1983-06-08

    October 1982. ... IBM - International Business Machines Corporation, "Small Systems Solutions: An Introduction to Business Computing," Pamphlet SC21-5205-0, Atlanta, Georgia, January 1979. International Business Machines Corporation, "IBM System/34 Introduction," Pamphlet GC21-5153-5, File No. S34-00, Atlanta, Georgia, January 1979. International Business Machines

  13. Cloud-based opportunities in scientific computing: insights from processing Suomi National Polar-Orbiting Partnership (S-NPP) Direct Broadcast data

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Hao, W.; Chettri, S.

    2013-12-01

    The cloud is proving to be a uniquely promising platform for scientific computing. Our experience with processing satellite data using Amazon Web Services highlights several opportunities for enhanced performance, flexibility, and cost effectiveness in the cloud relative to traditional computing. For example:

    - Direct readout from a polar-orbiting satellite such as the Suomi National Polar-Orbiting Partnership (S-NPP) requires bursts of processing a few times a day, separated by quiet periods when the satellite is out of receiving range. In the cloud, by starting and stopping virtual machines in minutes, we can marshal significant computing resources quickly when needed, but not pay for them when not needed. To take advantage of this capability, we are automating a data-driven approach to the management of cloud computing resources, in which new data availability triggers the creation of new virtual machines (of variable size and processing power) which last only until the processing workflow is complete (see the sketch after this list).

    - 'Spot instances' are virtual machines that run as long as one's asking price is higher than the provider's variable spot price. Spot instances can greatly reduce the cost of computing for software systems that are engineered to withstand unpredictable interruptions in service (as occurs when a spot price exceeds the asking price). We are implementing an approach to workflow management that allows data processing workflows to resume with minimal delays after temporary spot price spikes. This will allow systems to take full advantage of variably priced 'utility computing.'

    - Thanks to virtual machine images, we can easily launch multiple, identical machines differentiated only by 'user data' containing individualized instructions (e.g., to fetch particular datasets or to perform certain workflows or algorithms). This is particularly useful when (as is the case with S-NPP data) we need to launch many very similar machines to process an unpredictable number of data files concurrently. Our experience shows the viability and flexibility of this approach to workflow management for scientific data processing.

    - Finally, cloud computing is a promising platform for distributed volunteer ('interstitial') computing, via mechanisms such as the Berkeley Open Infrastructure for Network Computing (BOINC) popularized by the SETI@Home project and others such as ClimatePrediction.net and NASA's Climate@Home. Interstitial computing faces significant challenges as commodity computing shifts from (always-on) desktop computers towards smartphones and tablets (untethered and running on scarce battery power), but cloud computing offers significant slack capacity: virtual machines with unused RAM or underused CPUs, virtual storage volumes allocated (and paid for) but not full, and virtual machines that are paid up for the current hour but whose work is complete. We are devising ways to facilitate the reuse of these resources (i.e., cloud-based interstitial computing) for satellite data processing and related analyses. We will present our findings and research directions on these and related topics.
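
    The data-driven launch pattern can be sketched with boto3 (the AWS SDK for Python); the AMI ID, instance type, and processing script path are placeholders.

        import boto3

        def launch_processor(granule_url):
            # Start a spot instance whose user data tells it which granule
            # to process; the VM terminates itself when the job is done.
            ec2 = boto3.client("ec2", region_name="us-east-1")
            user_data = f"#!/bin/bash\n/opt/pipeline/process.sh {granule_url}\n"
            ec2.run_instances(
                ImageId="ami-0123456789abcdef0",    # placeholder AMI
                InstanceType="c5.4xlarge",
                MinCount=1, MaxCount=1,
                UserData=user_data,
                InstanceMarketOptions={"MarketType": "spot"},   # interruptible, cheaper
                InstanceInitiatedShutdownBehavior="terminate",  # VM ends with the job
            )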

  14. Computer-aided design/computer-aided manufacturing skull base drill.

    PubMed

    Couldwell, William T; MacDonald, Joel D; Thomas, Charles L; Hansen, Bradley C; Lapalikar, Aniruddha; Thakkar, Bharat; Balaji, Alagar K

    2017-05-01

    The authors have developed a simple device for computer-aided design/computer-aided manufacturing (CAD-CAM) that uses an image-guided system to define a cutting tool path that is shared with a surgical machining system for drilling bone. Information from 2D images (obtained via CT and MRI) is transmitted to a processor that produces a 3D image. The processor generates code defining an optimized cutting tool path, which is sent to a surgical machining system that can drill the desired portion of bone. This tool has applications for bone removal in both cranial and spine neurosurgical approaches. Such applications have the potential to reduce surgical time and associated complications such as infection or blood loss. The device enables rapid removal of bone within 1 mm of vital structures. The validity of such a machining tool is exemplified in the rapid (< 3 minutes machining time) and accurate removal of bone for transtemporal (for example, translabyrinthine) approaches.

  15. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1991-01-01

    The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  16. A Linux Workstation for High Performance Graphics

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  17. Visualization Techniques for Computer Network Defense

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaver, Justin M; Steed, Chad A; Patton, Robert M

    2011-01-01

    Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.

  18. The Virtual Climate Data Server (vCDS): An iRODS-Based Data Management Software Appliance Supporting Climate Data Services and Virtualization-as-a-Service in the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Schnase, John L.; Tamkin, Glenn S.; Ripley, W. David III; Stong, Savannah; Gill, Roger; Duffy, Daniel Q.

    2012-01-01

    Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of a Virtual Climate Data Server (vCDS), repetitive provisioning, image-based deployment and distribution, and virtualization-as-a-service. The vCDS is an iRODS-based data server specialized to the needs of a particular data-centric application. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into one or more of these virtualized resource classes, vCDSs can use iRODS's federation capabilities to create an integrated ecosystem of managed collections that is scalable and adaptable to changing resource requirements. This approach enables platform- or software-as-a-service deployment of vCDS and allows the NCCS to offer virtualization-as-a-service: a capacity to respond in an agile way to new customer requests for data services.

  19. An introduction to quantum machine learning

    NASA Astrophysics Data System (ADS)

    Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco

    2015-04-01

    Machine learning algorithms learn a desired input-output relation from examples in order to interpret new inputs. This is important for tasks such as image and speech recognition or strategy optimisation, with growing applications in the IT industry. In the last couple of years, researchers investigated if quantum computing can help to improve classical machine learning algorithms. Ideas range from running computationally costly algorithms or their subroutines efficiently on a quantum computer to the translation of stochastic methods into the language of quantum theory. This contribution gives a systematic overview of the emerging field of quantum machine learning. It presents the approaches as well as technical details in an accessible way, and discusses the potential of a future theory of quantum learning.

  20. Accurate Identification of Cancerlectins through Hybrid Machine Learning Technology.

    PubMed

    Zhang, Jieru; Ju, Ying; Lu, Huijuan; Xuan, Ping; Zou, Quan

    2016-01-01

    Cancerlectins are cancer-related proteins that function as lectins. They have been identified through computational identification techniques, but these techniques have sometimes failed to identify proteins because of sequence diversity among the cancerlectins. Advanced machine learning identification methods, such as support vector machine and basic sequence features (n-gram), have also been used to identify cancerlectins. In this study, various protein fingerprint features and advanced classifiers, including ensemble learning techniques, were utilized to identify this group of proteins. We improved the prediction accuracy of the original feature extraction methods and classification algorithms by more than 10% on average. Our work provides a basis for the computational identification of cancerlectins and reveals the power of hybrid machine learning techniques in computational proteomics.
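
    A hedged sketch of the general recipe, not the authors' exact features or classifiers: represent each protein sequence by character n-gram (k-mer) counts and combine a support vector machine with an ensemble learner by soft voting, using scikit-learn. The sequences and labels below are placeholders.

      from sklearn.ensemble import RandomForestClassifier, VotingClassifier
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      sequences = ["MKTAYIAKQR", "MLSDEDFKAV"]        # placeholder protein sequences
      labels = [1, 0]                                 # 1 = cancerlectin, 0 = other lectin

      model = make_pipeline(
          CountVectorizer(analyzer="char", ngram_range=(1, 3)),   # k-mer count features
          VotingClassifier(
              estimators=[("svm", SVC(probability=True)),
                          ("rf", RandomForestClassifier(n_estimators=200))],
              voting="soft"))                         # average predicted probabilities

      # model.fit(sequences, labels)                  # needs a real labeled dataset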

  1. Three dimensional magnetic fields in extra high speed modified Lundell alternators computed by a combined vector-scalar magnetic potential finite element method

    NASA Technical Reports Server (NTRS)

    Demerdash, N. A.; Wang, R.; Secunde, R.

    1992-01-01

    A 3D finite element (FE) approach was developed and implemented for computation of global magnetic fields in a 14.3 kVA modified Lundell alternator. The essence of the new method is the combined use of magnetic vector and scalar potential formulations in 3D FEs. This approach makes it practical, using state of the art supercomputer resources, to globally analyze magnetic fields and operating performances of rotating machines which have truly 3D magnetic flux patterns. The 3D FE-computed fields and machine inductances as well as various machine performance simulations of the 14.3 kVA machine are presented in this paper and its two companion papers.

  2. Machine learning: Trends, perspectives, and prospects.

    PubMed

    Jordan, M I; Mitchell, T M

    2015-07-17

    Machine learning addresses the question of how to build computers that improve automatically through experience. It is one of today's most rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science. Recent progress in machine learning has been driven both by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation. The adoption of data-intensive machine-learning methods can be found throughout science, technology and commerce, leading to more evidence-based decision-making across many walks of life, including health care, manufacturing, education, financial modeling, policing, and marketing. Copyright © 2015, American Association for the Advancement of Science.

  3. Toward Usable Interactive Analytics: Coupling Cognition and Computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; North, Chris; Chang, Remco

    Interactive analytics provide users a myriad of computational means to aid in extracting meaningful information from large and complex datasets. Much prior work focuses either on advancing the capabilities of machine-centric approaches by the data mining and machine learning communities, or human-driven methods by the visualization and CHI communities. However, these methods do not yet support a true human-machine symbiotic relationship where users and machines work together collaboratively and adapt to each other to advance an interactive analytic process. In this paper we discuss some of the inherent issues, outlining what we believe are the steps toward usable interactive analytics that will ultimately increase the effectiveness for both humans and computers to produce insights.

  4. Computer Associates International, CA-ACF2/VM Release 3.1

    DTIC Science & Technology

    1987-09-09

    Associates CA-ACF2/VM Bibliography: International Business Machines Corporation, IBM Virtual Machine/Directory Maintenance Program Logic Manual ... publication number LY20-0889. International Business Machines Corporation, IBM System/370 Principles of Operation ... publication number GA22-7000. International Business Machines Corporation, IBM Virtual Machine/Directory Maintenance Installation and System Administrator's

  5. Assessment of reliability of CAD-CAM tooth-colored implant custom abutments.

    PubMed

    Guilherme, Nuno Marques; Chung, Kwok-Hung; Flinn, Brian D; Zheng, Cheng; Raigrodski, Ariel J

    2016-08-01

    Information is lacking about the fatigue resistance of computer-aided design and computer-aided manufacturing (CAD-CAM) tooth-colored implant custom abutment materials. The purpose of this in vitro study was to investigate the reliability of different types of CAD-CAM tooth-colored implant custom abutments. Zirconia (Lava Plus), lithium disilicate (IPS e.max CAD), and resin-based composite (Lava Ultimate) abutments were fabricated using CAD-CAM technology and bonded to machined titanium-6 aluminum-4 vanadium (Ti-6Al-4V) alloy inserts for conical connection implants (NobelReplace Conical Connection RP 4.3×10 mm; Nobel Biocare). Three groups (n=19) were assessed: group ZR, CAD-CAM zirconia/Ti-6Al-4V bonded abutments; group RC, CAD-CAM resin-based composite/Ti-6Al-4V bonded abutments; and group LD, CAD-CAM lithium disilicate/Ti-6Al-4V bonded abutments. Fifty-seven implant abutments were secured to implants and embedded in autopolymerizing acrylic resin according to ISO standard 14801. Static failure load (n=5) and fatigue failure load (n=14) were tested. Weibull cumulative damage analysis was used to calculate step-stress reliability at 150-N and 200-N loads with 2-sided 90% confidence limits. Representative fractured specimens were examined using stereomicroscopy and scanning electron microscopy to observe fracture patterns. Weibull plots revealed β values of 2.59 for group ZR, 0.30 for group RC, and 0.58 for group LD, indicating a wear-out or cumulative fatigue pattern for group ZR and load as the failure accelerating factor for groups RC and LD. Fractographic observation disclosed that failures initiated in the interproximal area where the lingual tensile stresses meet the compressive facial stresses for the early failure specimens. Plastic deformation of titanium inserts with fracture was observed for zirconia abutments in fatigue resistance testing. Significantly higher reliability was found in group ZR, and no significant differences in reliability were determined between groups RC and LD. Differences were found in the failure characteristics of group ZR between static and fatigue loading. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  6. 17 CFR 190.07 - Calculation of allowed net equity.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... computing, with respect to such account, the sum of: (i) The ledger balance; (ii) The open trade balance... purposes of this paragraph (b)(1), the open trade balance of a customer's account shall be computed by... ledger balance or open trade balance of any customer, exclude any security futures products, any gains or...

  7. 17 CFR 190.07 - Calculation of allowed net equity.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... computing, with respect to such account, the sum of: (i) The ledger balance; (ii) The open trade balance... purposes of this paragraph (b)(1), the open trade balance of a customer's account shall be computed by... ledger balance or open trade balance of any customer, exclude any security futures products, any gains or...

  8. Air Bearings Machined On Ultra Precision, Hydrostatic CNC-Lathe

    NASA Astrophysics Data System (ADS)

    Knol, Pierre H.; Szepesi, Denis; Deurwaarder, Jan M.

    1987-01-01

    Micromachining of precision elements requires an adequate machine concept to meet the high demands of surface finish and dimensional and shape accuracy. The Hembrug ultra precision lathes have been designed exclusively with hydrostatic principles for the main spindle and guideways. This concept is explained along with some major advantages of hydrostatics compared with aerostatics in universal micromachining applications. Hembrug originally developed the conventional Mikroturn ultra precision facing lathes for diamond turning of computer memory discs. This first generation of machines was followed by the advanced computer numerically controlled types for machining of complex precision workpieces. One of these parts, an aerostatic bearing component, has been successfully machined on the Super-Mikroturn CNC. A case study of air bearing machining confirms that a good micromachining result does not depend on machine performance alone, but also on the technology applied.

  9. Additive Manufacturing in Offsite Repair of Consumer Electronics

    NASA Astrophysics Data System (ADS)

    Chekurov, Sergei; Salmi, Mika

    Spare parts for products that are at the end of their life cycles, but still under warranty, are logistically difficult because they are commonly not stored in the central warehouse. These uncommon spare parts occupy valuable space in smaller inventories and take a long time to be transported to the point of need, thus delaying the repair process. This paper proposes that storing the spare parts on a server and producing them with additive manufacturing (AM) on demand can shorten the repair cycle by simplifying the logistics. Introducing AM in the repair supply chain lowers the number of products that need to be reimbursed to the customer due to lengthy repairs, improves the repair statistics of the repair shops, and reduces the number of items that are held in stock. For this paper, the functionality of the concept was verified by reverse engineering a memory cover of a portable computer and laser sintering it from polyamide 12. The additively manufactured component fit well and the computer operated normally after the replacement. The current spare part supply chain model and models with AM machinery located at the repair shop, the centralized spare part provider, and the original equipment manufacturer were provided. The durations of the repair process in the models were compared by simulating two scenarios with the Monte Carlo method. As the biggest improvement, the model with the AM machine in the repair shop reduced the duration of the repair process from 14 days to three days. The result points to the conclusion that placing the machine as close to the need as possible is the best option, if there is enough demand. The spare parts currently compatible with AM are plastic components without strict surface roughness requirements, but more spare parts will become compatible with the development of AM.
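
    A toy Monte Carlo comparison in the spirit of the two simulated scenarios; all distributions and durations below are invented for illustration and are not the paper's parameters.

      import random
      import statistics

      random.seed(7)

      def current_chain_days():
          # rare spare ordered from a central warehouse: shipping plus repair work
          return random.triangular(8, 20, 13) + random.uniform(0.5, 1.5)

      def am_at_shop_days():
          # fetch digital model, laser-sinter the part overnight, post-process, repair
          return random.triangular(1.5, 4.0, 2.5)

      N = 100_000
      print("current chain, mean days:", statistics.fmean(current_chain_days() for _ in range(N)))
      print("AM at repair shop, mean days:", statistics.fmean(am_at_shop_days() for _ in range(N)))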

  10. Patient specific ankle-foot orthoses using rapid prototyping

    PubMed Central

    2011-01-01

    Background Prefabricated orthotic devices are currently designed to fit a range of patients and therefore they do not provide individualized comfort and function. Custom-fit orthoses are superior to prefabricated orthotic devices from both of the above-mentioned standpoints. However, creating a custom-fit orthosis is a laborious and time-intensive manual process performed by skilled orthotists. Besides, adjustments made to both prefabricated and custom-fit orthoses are carried out in a qualitative manner. So both comfort and function can potentially suffer considerably. A computerized technique for fabricating patient-specific orthotic devices has the potential to provide excellent comfort and allow for changes in the standard design to meet the specific needs of each patient. Methods In this paper, 3D laser scanning is combined with rapid prototyping to create patient-specific orthoses. A novel process was engineered to utilize patient-specific surface data of the patient anatomy as a digital input, manipulate the surface data to an optimal form using Computer Aided Design (CAD) software, and then download the digital output from the CAD software to a rapid prototyping machine for fabrication. Results Two AFOs were rapidly prototyped to demonstrate the proposed process. Gait analysis data of a subject wearing the AFOs indicated that the rapid prototyped AFOs performed comparably to the prefabricated polypropylene design. Conclusions The rapidly prototyped orthoses fabricated in this study provided good fit of the subject's anatomy compared to a prefabricated AFO while delivering comparable function (i.e. mechanical effect on the biomechanics of gait). The rapid fabrication capability is of interest because it has potential for decreasing fabrication time and cost especially when a replacement of the orthosis is required. PMID:21226898

  11. Patient-Customized Drug Combination Prediction and Testing for T-cell Prolymphocytic Leukemia Patients.

    PubMed

    He, Liye; Tang, Jing; Andersson, Emma I; Timonen, Sanna; Koschmieder, Steffen; Wennerberg, Krister; Mustjoki, Satu; Aittokallio, Tero

    2018-05-01

    The molecular pathways that drive cancer progression and treatment resistance are highly redundant and variable between individual patients with the same cancer type. To tackle this complex rewiring of pathway cross-talk, personalized combination treatments targeting multiple cancer growth and survival pathways are required. Here we implemented a computational-experimental drug combination prediction and testing (DCPT) platform for efficient in silico prioritization and ex vivo testing in patient-derived samples to identify customized synergistic combinations for individual cancer patients. DCPT used drug-target interaction networks to traverse the massive combinatorial search spaces among 218 compounds (a total of 23,653 pairwise combinations) and identified cancer-selective synergies by using differential single-compound sensitivity profiles between patient cells and healthy controls, hence reducing the likelihood of toxic combination effects. A polypharmacology-based machine learning modeling and network visualization made use of baseline genomic and molecular profiles to guide patient-specific combination testing and clinical translation phases. Using T-cell prolymphocytic leukemia (T-PLL) as a first case study, we show how the DCPT platform successfully predicted distinct synergistic combinations for each of the three T-PLL patients, each presenting with different resistance patterns and synergy mechanisms. In total, 10 of 24 (42%) selective combination predictions were experimentally confirmed to show synergy in patient-derived samples ex vivo. The identified selective synergies among approved drugs, including tacrolimus and temsirolimus combined with BCL-2 inhibitor venetoclax, may offer novel drug repurposing opportunities for treating T-PLL. Significance: An integrated use of functional drug screening combined with genomic and molecular profiling enables patient-customized prediction and testing of drug combination synergies for T-PLL patients. Cancer Res; 78(9); 2407-18. ©2018 American Association for Cancer Research.

  12. 78 FR 54796 - Research Expenditures

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-06

    ... the design of the product, or even redesign components of the product, after production of the product... is engaged in the manufacture and sale of custom machines. U contracts to design and produce a... design. See paragraph (a)(2) of this section (relating to production costs). Example 4. Assume the same...

  13. RIP-REMOTE INTERACTIVE PARTICLE-TRACER

    NASA Technical Reports Server (NTRS)

    Rogers, S. E.

    1994-01-01

    Remote Interactive Particle-tracing (RIP) is a distributed-graphics program which computes particle traces for computational fluid dynamics (CFD) solution data sets. A particle trace is a line which shows the path a massless particle in a fluid will take; it is a visual image of where the fluid is going. The program is able to compute and display particle traces at a speed of about one trace per second because it runs on two machines concurrently. The data used by the program is contained in two files. The solution file contains data on density, momentum and energy quantities of a flow field at discrete points in three-dimensional space, while the grid file contains the physical coordinates of each of the discrete points. RIP requires two computers. A local graphics workstation interfaces with the user for program control and graphics manipulation, and a remote machine interfaces with the solution data set and performs time-intensive computations. The program utilizes two machines in a distributed mode for two reasons. First, the data to be used by the program is usually generated on the supercomputer. RIP avoids having to convert and transfer the data, eliminating any memory limitations of the local machine. Second, as computing the particle traces can be computationally expensive, RIP utilizes the power of the supercomputer for this task. Although the remote site code was developed on a CRAY, it is possible to port it to any supercomputer class machine with a UNIX-like operating system. Integration of a velocity field from a starting physical location produces the particle trace. The remote machine computes the particle traces using the particle-tracing subroutines from PLOT3D/AMES, a CFD post-processing graphics program available from COSMIC (ARC-12779). These routines use a second-order predictor-corrector method to integrate the velocity field. Then the remote program sends graphics tokens to the local machine via a remote-graphics library. The local machine interprets the graphics tokens and draws the particle traces. The program is menu driven. RIP is implemented on the Silicon Graphics IRIS 3000 (local workstation) with the IRIX operating system and on the CRAY2 (remote station) with a UNICOS 1.0 or 2.0 operating system. The IRIS 4D can be used in place of the IRIS 3000. The program is written in C (67%) and FORTRAN 77 (43%) and has an IRIS memory requirement of 4 MB. The remote and local stations must use the same user ID. PLOT3D/AMES unformatted data sets are required for the remote machine. The program was developed in 1988.
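
    A minimal sketch of the second-order predictor-corrector (Heun) integration used to trace particles, with an analytic velocity field standing in for the PLOT3D grid and solution interpolation performed by the real remote-site code.

      import numpy as np

      def velocity(x):
          # toy swirling flow; a real tracer interpolates the CFD solution here
          return np.array([-x[1], x[0], 0.1])

      def particle_trace(x0, dt=0.01, steps=1000):
          pts = [np.asarray(x0, dtype=float)]
          for _ in range(steps):
              x = pts[-1]
              v1 = velocity(x)                 # predictor: Euler step
              v2 = velocity(x + dt * v1)       # corrector: velocity at predicted point
              pts.append(x + 0.5 * dt * (v1 + v2))
          return np.array(pts)

      trace = particle_trace([1.0, 0.0, 0.0])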

  14. Some Uses of a Computer in Teaching, Training, and Testing: Student/Learner Self-Assessment Responding.

    ERIC Educational Resources Information Center

    Hunt, Darwin P.

    The use of systems theory as a conceptual framework is proposed as useful when considering computers as a machine component in teaching. Skinner's proposal that the label "computer" is inaccurate and counterproductive when used to refer to a machine being used for teaching is discussed. It is suggested that the alternative label…

  15. Computed Tomography Measuring Inside Machines

    NASA Technical Reports Server (NTRS)

    Wozniak, James F.; Scudder, Henry J.; Anders, Jeffrey E.

    1995-01-01

    Computed tomography was applied to obtain approximate measurements of the radial distances from the centerline of a turbopump to the leading edges of its diffuser vanes. The use of computed tomography has significance beyond the turbopump application: it is an example of the general concept of measuring the internal dimensions of an assembly of parts without having to perform the time-consuming task of taking the assembly apart and measuring internal parts on a coordinate-measuring machine.

  16. Quantum Computing

    DTIC Science & Technology

    1998-04-01

    information representation and processing technology, although faster than the wheels and gears of the Charles Babbage computation machine, is still in...the same computational complexity class as the Babbage machine, with bits of information represented by entities which obey classical (non-quantum...nuclear double resonances Charles M Bowden and Jonathan P. Dowling Weapons Sciences Directorate, AMSMI-RD-WS-ST Missile Research, Development, and

  17. Man-Machine Interface System for Neuromuscular Training and Evaluation Based on EMG and MMG Signals

    PubMed Central

    de la Rosa, Ramon; Alonso, Alonso; Carrera, Albano; Durán, Ramon; Fernández, Patricia

    2010-01-01

    This paper presents the UVa-NTS (University of Valladolid Neuromuscular Training System), a multifunction and portable Neuromuscular Training System. The UVa-NTS is designed to analyze the voluntary control of severe neuromotor handicapped patients, their interactive response, and their adaptation to neuromuscular interface systems, such as neural prostheses or domotic applications. Thus, it is an excellent tool to evaluate the residual muscle capabilities in the handicapped. The UVa-NTS is composed of a custom signal conditioning front-end and a computer. The front-end electronics is described thoroughly as well as the overall features of the custom software implementation. The software system is composed of a set of graphical training tools and a processing core. The UVa-NTS works with two classes of neuromuscular signals: the classic myoelectric signals (MES) and, as a novelty, the myomechanic signals (MMS). In order to evaluate the performance of the processing core, a complete analysis has been done to characterize its efficiency and to check that it fulfils the real-time constraints. Tests were performed both with healthy and selected impaired subjects. The adaptation was achieved rapidly, applying a predefined protocol for the UVa-NTS set of training tools. Fine voluntary control was demonstrated with the myoelectric signals. And the UVa-NTS was demonstrated to provide satisfactory voluntary control when applying the myomechanic signals. PMID:22163515

  18. Man-machine interface system for neuromuscular training and evaluation based on EMG and MMG signals.

    PubMed

    de la Rosa, Ramon; Alonso, Alonso; Carrera, Albano; Durán, Ramon; Fernández, Patricia

    2010-01-01

    This paper presents the UVa-NTS (University of Valladolid Neuromuscular Training System), a multifunction and portable Neuromuscular Training System. The UVa-NTS is designed to analyze the voluntary control of severe neuromotor handicapped patients, their interactive response, and their adaptation to neuromuscular interface systems, such as neural prostheses or domotic applications. Thus, it is an excellent tool to evaluate the residual muscle capabilities in the handicapped. The UVa-NTS is composed of a custom signal conditioning front-end and a computer. The front-end electronics is described thoroughly as well as the overall features of the custom software implementation. The software system is composed of a set of graphical training tools and a processing core. The UVa-NTS works with two classes of neuromuscular signals: the classic myoelectric signals (MES) and, as a novelty, the myomechanic signals (MMS). In order to evaluate the performance of the processing core, a complete analysis has been done to characterize its efficiency and to check that it fulfils the real-time constraints. Tests were performed both with healthy and selected impaired subjects. The adaptation was achieved rapidly, applying a predefined protocol for the UVa-NTS set of training tools. Fine voluntary control was demonstrated with the myoelectric signals. And the UVa-NTS was demonstrated to provide satisfactory voluntary control when applying the myomechanic signals.

  19. Understanding Customer Dissatisfaction with Underutilized Distributed File Servers

    NASA Technical Reports Server (NTRS)

    Riedel, Erik; Gibson, Garth

    1996-01-01

    An important trend in the design of storage subsystems is a move toward direct network attachment. Network-attached storage offers the opportunity to off-load distributed file system functionality from dedicated file server machines and execute many requests directly at the storage devices. For this strategy to lead to better performance, as perceived by users, the response time of distributed operations must improve. In this paper we analyze measurements of an Andrew file system (AFS) server that we recently upgraded in an effort to improve client performance in our laboratory. While the original server's overall utilization was only about 3%, we show how burst loads were sufficiently intense to lead to periods of poor response time significant enough to trigger customer dissatisfaction. In particular, we show how, after adjusting for network load and traffic to non-project servers, 50% of the variation in client response time was explained by variation in server central processing unit (CPU) use. That is, clients saw long response times in large part because the server was often over-utilized when it was used at all. Using these measures, we see that off-loading file server work in a network-attached storage architecture has the potential to benefit user response time. Computational power in such a system scales directly with storage capacity, so the slowdown during burst periods should be reduced.

  20. An innovative method of ocular prosthesis fabrication by bio-CAD and rapid 3-D printing technology: A pilot study.

    PubMed

    Alam, Md Shahid; Sugavaneswaran, M; Arumaikkannu, G; Mukherjee, Bipasha

    2017-08-01

    An ocular prosthesis is either a readymade stock shell or a custom made prosthesis (CMP). Presently, there is no other technology available that is either superior or even comparable to the conventional CMP. The present study was designed to fabricate an ocular prosthesis using computer aided design (CAD) and rapid manufacturing (RM) technology and to compare it with the CMP. The ocular prosthesis prepared by CAD was compared with the conventional CMP in terms of time taken for fabrication, weight, cosmesis, comfort, and motility. Two eyes of two patients were included. A computerized tomography scan of a wax model of the socket was converted into a three dimensional format using Materialize Interactive Medical Image Control System (MIMICS) software and further refined. This was given as an input to a rapid manufacturing machine (Polyjet 3-D printer). The final painting on the prototype was done by an ocularist. The average effective fabrication time for the CAD prosthesis was 2.5 hours and its weight was 2.9 grams; the corresponding values for the CMP were 10 hours and 4.4 grams. The CAD prosthesis was more comfortable for both patients. The study demonstrates the first ever attempt at fabricating a complete ocular prosthesis using CAD and rapid manufacturing and comparing it with the conventional CMP. This prosthesis takes less time to fabricate and is more comfortable. Studies with a larger sample size will be required to further validate this technique.

  1. Walking robot: A design project for undergraduate students

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The design and construction of the University of Maryland walking machine was completed during the 1989 to 1990 academic year. It was required that the machine be capable of completing a number of tasks including walking a straight line, turning to change direction, and maneuvering over an obstacle such as a set of stairs. The machine consists of two sets of four telescoping legs that alternately support the entire structure. A gear box and crank arm assembly is connected to the leg sets to provide the power required for the translational motion of the machine. By retracting all eight legs, the robot comes to rest on a central Bigfoot support. Turning is accomplished by rotating the machine about this support. The machine can be controlled by using either a user-operated remote tether or the onboard computer for the execution of control commands. Absolute encoders are attached to all motors to provide the control computer with information regarding the status of the motors. Long and short range infrared sensors provide the computer with feedback information regarding the machine's position relative to a series of stripes and reflectors. These infrared sensors simulate how the robot might sense and gain information about the environment of Mars.

  2. ICAM (Integrated Computer Aided Manufacturing) Conceptual Design for Computer-Integrated Manufacturing. Volume 1. Project Overview and Technical Summary

    DTIC Science & Technology

    1984-06-29

    sheet metal, machined and composite parts and assembling the components into final products; planning, evaluating, testing, inspecting and ... Research showed that current programs were pursuing the design and demonstration of integrated centers for sheet metal, machining and composite ... determine any metal parts required and to schedule these requirements from the machining center. Figure 3-33, Planned Composite Production, shows

  3. Efficient forced vibration reanalysis method for rotating electric machines

    NASA Astrophysics Data System (ADS)

    Saito, Akira; Suzuki, Hiromitsu; Kuroishi, Masakatsu; Nakai, Hideo

    2015-01-01

    Rotating electric machines are subject to forced vibration by magnetic force excitation with a wide-band frequency spectrum that depends on the operating conditions. Therefore, when designing electric machines, it is essential to compute the vibration response of the machines at various operating conditions efficiently and accurately. This paper presents an efficient frequency-domain vibration analysis method for electric machines. The method enables the efficient re-analysis of the vibration response of electric machines at various operating conditions without the necessity to re-compute the harmonic response by finite element analyses. Theoretical background of the proposed method is provided, which is based on the modal reduction of the magnetic force excitation by a set of amplitude-modulated standing waves. The method is applied to the forced response vibration of an interior permanent magnet motor at a fixed operating condition. The results computed by the proposed method agree very well with those computed by the conventional harmonic response analysis by the FEA. The proposed method is then applied to the spin-up test condition to demonstrate its applicability to various operating conditions. It is observed that the proposed method can successfully be applied to the spin-up test conditions, and the measured dominant frequency peaks in the frequency response can be well captured by the proposed approach.
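
    For context, a sketch of the generic modal-superposition step that such a frequency-domain reanalysis rests on; the paper's actual contribution, reducing the magnetic force to amplitude-modulated standing waves, is not reproduced here, and the mode shapes are assumed mass-normalized.

      import numpy as np

      def modal_forced_response(phi, wn, zeta, force_spectrum, omegas):
          """Frequency response by modal superposition.
          phi: (n_dof, n_modes) mass-normalized mode shapes
          wn: (n_modes,) natural angular frequencies
          zeta: (n_modes,) modal damping ratios
          force_spectrum: (n_dof, n_freq) complex excitation spectrum
          omegas: (n_freq,) excitation angular frequencies
          """
          q = phi.T @ force_spectrum                      # project forces onto the modes
          denom = (wn[:, None] ** 2 - omegas[None, :] ** 2
                   + 2j * zeta[:, None] * wn[:, None] * omegas[None, :])
          return phi @ (q / denom)                        # displacement spectrum, physical dofs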

  4. Design synthesis and optimization of permanent magnet synchronous machines based on computationally-efficient finite element analysis

    NASA Astrophysics Data System (ADS)

    Sizov, Gennadi Y.

    In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as, winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.
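
    A hedged sketch of the optimization layer only: SciPy's differential evolution minimizing a scalarized two-objective cost over two geometry variables, with a toy analytic surrogate standing in for the dissertation's finite element solver. All coefficients and bounds are invented.

      import numpy as np
      from scipy.optimize import differential_evolution

      def machine_cost(x):
          magnet_width, slot_opening = x
          ripple = (np.sin(8 * magnet_width) * np.cos(5 * slot_opening)) ** 2  # toy surrogate
          losses = 0.3 * magnet_width**2 + 0.5 / (slot_opening + 0.1)          # toy surrogate
          return 0.6 * ripple + 0.4 * losses     # scalarized multi-objective cost

      result = differential_evolution(machine_cost,
                                      bounds=[(0.01, 0.05), (0.001, 0.01)],
                                      seed=1)
      print("best design:", result.x, "cost:", result.fun)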

  5. Application of XML to Journal Table Archiving

    NASA Astrophysics Data System (ADS)

    Shaya, E. J.; Blackwell, J. H.; Gass, J. E.; Kargatis, V. E.; Schneider, G. L.; Weiland, J. L.; Borne, K. D.; White, R. A.; Cheung, C. Y.

    1998-12-01

    The Astronomical Data Center (ADC) at the NASA Goddard Space Flight Center is a major archive for machine-readable astronomical data tables. Many ADC tables are derived from published journal articles. Article tables are reformatted to be machine-readable and documentation is crafted to facilitate proper reuse by researchers. The recent switch of journals to web based electronic format has resulted in the generation of large amounts of tabular data that could be captured into machine-readable archive format at fairly low cost. The large data flow of the tables from all major North American astronomical journals (a factor of 100 greater than the present rate at the ADC) necessitates the development of rigorous standards for the exchange of data between researchers, publishers, and the archives. We have selected a suitable markup language that can fully describe the large variety of astronomical information contained in ADC tables. The eXtensible Markup Language (XML) is a powerful internet-ready documentation format for data. It provides a precise and clear data description language that is both machine- and human-readable. It is rapidly becoming the standard format for business and information transactions on the internet and it is an ideal common metadata exchange format. By labelling, or "marking up", all elements of the information content, documents are created that computers can easily parse. An XML archive can easily and automatically be maintained, ingested into standard databases or custom software, and even totally restructured whenever necessary. Structuring astronomical data into XML format will enable efficient and focused search capabilities via off-the-shelf software. The ADC is investigating XML's expanded hyperlinking power to enhance connectivity within the ADC data/metadata and developing XSL display scripts to enhance display of astronomical data. The ADC XML Document Type Definition (DTD) can be viewed at http://messier.gsfc.nasa.gov/dtdhtml/DTD-TREE.html
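
    A minimal illustration of what such markup buys: the element names below are invented for this example (they are not the ADC DTD), but any off-the-shelf XML parser can walk the document and recover values and units mechanically.

      import xml.etree.ElementTree as ET

      doc = """<table name="bright_stars">
        <field name="ra" unit="deg"/>
        <field name="dec" unit="deg"/>
        <field name="vmag" unit="mag"/>
        <row><ra>88.79</ra><dec>7.41</dec><vmag>0.42</vmag></row>
        <row><ra>279.23</ra><dec>38.78</dec><vmag>0.03</vmag></row>
      </table>"""

      root = ET.fromstring(doc)
      units = {f.get("name"): f.get("unit") for f in root.findall("field")}
      print("column units:", units)
      for row in root.findall("row"):
          print({cell.tag: float(cell.text) for cell in row})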

  6. In Vitro Comparative Evaluation of Different Types of Impression Trays and Impression Materials on the Accuracy of Open Tray Implant Impressions: A Pilot Study

    PubMed Central

    Gupta, Sonam; Balakrishnan, Dhanasekar

    2017-01-01

    Purpose. For a precise fit of multiple implant framework, having an accurate definitive cast is imperative. The present study evaluated dimensional accuracy of master casts obtained using different impression trays and materials with open tray impression technique. Materials and Methods. A machined aluminum reference model with four parallel implant analogues was fabricated. Forty implant level impressions were made. Eight groups (n = 5) were tested using impression materials (polyether and vinylsiloxanether) and four types of impression trays, two being custom (self-cure acrylic and light cure acrylic) and two being stock (plastic and metal). The interimplant distances were measured on master casts using a coordinate measuring machine. The collected data was compared with a standard reference model and was statistically analyzed using two-way ANOVA. Results. Statistically significant difference (p < 0.05) was found between the two impression materials. However, the difference seen was small (36 μm) irrespective of the tray type used. No significant difference (p > 0.05) was observed between varied stock and custom trays. Conclusions. The polyether impression material proved to be more accurate than vinylsiloxanether impression material. The rigid nonperforated stock trays, both plastic and metal, could be an alternative for custom trays for multi-implant impressions when used with medium viscosity impression materials. PMID:28348595

  7. Machine learning applications in genetics and genomics.

    PubMed

    Libbrecht, Maxwell W; Noble, William Stafford

    2015-06-01

    The field of machine learning, which aims to develop computer algorithms that improve with experience, holds promise to enable computers to assist humans in the analysis of large, complex data sets. Here, we provide an overview of machine learning applications for the analysis of genome sequencing data sets, including the annotation of sequence elements and epigenetic, proteomic or metabolomic data. We present considerations and recurrent challenges in the application of supervised, semi-supervised and unsupervised machine learning methods, as well as of generative and discriminative modelling approaches. We provide general guidelines to assist in the selection of these machine learning methods and their practical application for the analysis of genetic and genomic data sets.

  8. Quantum Machine Learning over Infinite Dimensions

    DOE PAGES

    Lau, Hoi-Kwan; Pooser, Raphael; Siopsis, George; ...

    2017-02-21

    Machine learning is a fascinating and exciting field within computer science. Recently, this excitement has been transferred to the quantum information realm. Currently, all proposals for the quantum version of machine learning utilize the finite-dimensional substrate of discrete variables. Here we generalize quantum machine learning to the more complex, but still remarkably practical, infinite-dimensional systems. We present the critical subroutines of quantum machine learning algorithms for an all-photonic continuous-variable quantum computer that achieve an exponential speedup compared to their equivalent classical counterparts. Finally, we also map out an experimental implementation which can be used as a blueprint for future photonic demonstrations.

  9. Quantum Machine Learning over Infinite Dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lau, Hoi-Kwan; Pooser, Raphael; Siopsis, George

    Machine learning is a fascinating and exciting field within computer science. Recently, this excitement has been transferred to the quantum information realm. Currently, all proposals for the quantum version of machine learning utilize the finite-dimensional substrate of discrete variables. Here we generalize quantum machine learning to the more complex, but still remarkably practical, infinite-dimensional systems. We present the critical subroutines of quantum machine learning algorithms for an all-photonic continuous-variable quantum computer that achieve an exponential speedup compared to their equivalent classical counterparts. Finally, we also map out an experimental implementation which can be used as a blueprint for future photonic demonstrations.

  10. The next scientific revolution.

    PubMed

    Hey, Tony

    2010-11-01

    For decades, computer scientists have tried to teach computers to think like human experts. Until recently, most of those efforts have failed to come close to generating the creative insights and solutions that seem to come naturally to the best researchers, doctors, and engineers. But now, Tony Hey, a VP of Microsoft Research, says we're witnessing the dawn of a new generation of powerful computer tools that can "mash up" vast quantities of data from many sources, analyze them, and help produce revolutionary scientific discoveries. Hey and his colleagues call this new method of scientific exploration "machine learning." At Microsoft, a team has already used it to innovate a method of predicting with impressive accuracy whether a patient with congestive heart failure who is released from the hospital will be readmitted within 30 days. It was developed by directing a computer program to pore through hundreds of thousands of data points on 300,000 patients and "learn" the profiles of patients most likely to be rehospitalized. The economic impact of this prediction tool could be huge: If a hospital understands the likelihood that a patient will "bounce back," it can design programs to keep him stable and save thousands of dollars in health care costs. Similar efforts to uncover important correlations that could lead to scientific breakthroughs are under way in oceanography, conservation, and AIDS research. And in business, deep data exploration has the potential to unearth critical insights about customers, supply chains, advertising effectiveness, and more.
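
    As an illustration only (this is not Microsoft's tool), the shape of such a readmission predictor can be sketched with a plain logistic regression over synthetic patient features:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      # toy features: age, prior admissions, ejection fraction, length of stay
      X = rng.normal(size=(300_000, 4))
      y = (X @ np.array([0.8, 1.2, -0.9, 0.5]) + rng.normal(size=300_000) > 1).astype(int)

      model = LogisticRegression().fit(X, y)           # "learn" the risk profile
      new_patient = rng.normal(size=(1, 4))
      print("readmission risk:", model.predict_proba(new_patient)[0, 1])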

  11. Position Paper: Applying Machine Learning to Software Analysis to Achieve Trusted, Repeatable Scientific Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prowell, Stacy J; Symons, Christopher T

    2015-01-01

    Producing trusted results from high-performance codes is essential for policy and has significant economic impact. We propose combining rigorous analytical methods with machine learning techniques to achieve the goal of repeatable, trustworthy scientific computing.

  12. Are human beings humean robots?

    NASA Astrophysics Data System (ADS)

    Génova, Gonzalo; Quintanilla Navarro, Ignacio

    2018-01-01

    David Hume, the Scottish philosopher, conceives reason as the slave of the passions, which implies that human reason has predetermined objectives it cannot question. An essential element of an algorithm running on a computational machine (or Logical Computing Machine, as Alan Turing calls it) is its having a predetermined purpose: an algorithm cannot question its purpose, because it would cease to be an algorithm. Therefore, if self-determination is essential to human intelligence, then human beings are neither Humean beings, nor computational machines. We also examine some objections to the Turing Test as a model for understanding human intelligence.

  13. An imperialist competitive algorithm for virtual machine placement in cloud computing

    NASA Astrophysics Data System (ADS)

    Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza

    2017-05-01

    Cloud computing, the recently emerged revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, the user's applications run on virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement. It plays an important role in the resource utilisation and power efficiency of a cloud computing environment. In this paper, we propose an imperialist competitive algorithm for the virtual machine placement problem called ICA-VMPLC. ICA is chosen as the base optimisation algorithm because of its ease of neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates the search space in a unique manner to efficiently obtain an optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution performance is compared with several existing methods, such as grouping genetic and ant colony-based algorithms, as well as a bin packing heuristic. The simulation results show that the proposed method is superior to the other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.
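
    A much-simplified, ICA-flavored sketch of the placement search (imperialist competition between empires is omitted for brevity); the demands, capacities, and rates are invented, and this is not the paper's ICA-VMPLC implementation. The cost blends the number of active hosts (a power proxy) with a resource-wastage term.

      import random

      random.seed(3)

      VM_CPU = [2, 4, 8, 2, 4, 6, 2, 8]            # toy VM demands
      VM_MEM = [4, 8, 8, 2, 4, 8, 2, 16]
      HOSTS = 6
      CAP_CPU, CAP_MEM = 16, 32

      def feasible(p):
          for h in range(HOSTS):
              vms = [i for i, host in enumerate(p) if host == h]
              if sum(VM_CPU[i] for i in vms) > CAP_CPU or \
                 sum(VM_MEM[i] for i in vms) > CAP_MEM:
                  return False
          return True

      def cost(p):
          active, waste = 0, 0.0
          for h in range(HOSTS):
              vms = [i for i, host in enumerate(p) if host == h]
              if vms:
                  active += 1
                  cpu = sum(VM_CPU[i] for i in vms) / CAP_CPU
                  mem = sum(VM_MEM[i] for i in vms) / CAP_MEM
                  waste += abs(cpu - mem)          # unbalanced leftover resources
          return active + waste                    # power proxy + wastage

      def random_placement():
          while True:
              p = [random.randrange(HOSTS) for _ in VM_CPU]
              if feasible(p):
                  return p

      def assimilate(colony, imperialist):
          # colony copies a fraction of the imperialist's assignments
          child = [imp if random.random() < 0.4 else col
                   for col, imp in zip(colony, imperialist)]
          return child if feasible(child) else colony

      population = sorted((random_placement() for _ in range(30)), key=cost)
      imperialists, colonies = population[:3], population[3:]
      for _ in range(200):
          for k in range(len(colonies)):
              e = k % 3                              # empire this colony belongs to
              new = assimilate(colonies[k], imperialists[e])
              if random.random() < 0.1:              # revolution: random restart
                  new = random_placement()
              colonies[k] = new
              if cost(new) < cost(imperialists[e]):  # colony overthrows imperialist
                  imperialists[e] = new

      best = min(imperialists + colonies, key=cost)
      print("best placement:", best, "cost:", round(cost(best), 3))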

  14. A bio-inspired approach for the design of a multifunctional robotic end-effector customized for automated maintenance of a reconfigurable vibrating screen.

    PubMed

    Makinde, O A; Mpofu, K; Vrabic, R; Ramatsetse, B I

    2017-01-01

    The development of a robotic-driven maintenance solution capable of automatically maintaining the reconfigurable vibrating screen (RVS) machine when utilized in dangerous and hazardous underground mining environments has called for the design of a multifunctional robotic end-effector capable of carrying out all the maintenance tasks on the RVS machine. In view of this, the paper presents a bio-inspired approach which unfolds the design of a novel multifunctional robotic end-effector embedded with mechanical and control mechanisms capable of automatically maintaining the RVS machine. To achieve this, therblig and morphological methodologies (which classify the motions as well as the actions required by the robotic end-effector in carrying out RVS machine maintenance tasks), obtained from a detailed analogy of how a human being (i.e., a machine maintenance manager) would carry out different maintenance tasks on the RVS machine, were used to obtain the maintenance objective functions or goals of the multifunctional robotic end-effector as well as the maintenance activity constraints of the RVS machine that must be adhered to by the multifunctional robotic end-effector during machine maintenance. The results of the therblig and morphological analyses of five (5) different maintenance tasks capture and classify one hundred and thirty-four (134) repetitive motions and fifty-four (54) functions required in automating the maintenance tasks of the RVS machine. Based on these findings, a worm-gear mechanism embedded with fingers with hexagonal-shaped heads capable of carrying out the "gripping and ungrasping" and "loosening and bolting" functions of the robotic end-effector and an electric cylinder actuator module capable of carrying out the "unpinning and hammering" functions of the robotic end-effector were integrated together to produce the customized multifunctional robotic end-effector capable of automatically maintaining the RVS machine. The axial forces ([Formula: see text] and [Formula: see text]), normal forces ([Formula: see text]) and total load [Formula: see text] acting on the teeth of the worm-gear module of the multifunctional robotic end-effector during the gripping of worn-out or new RVS machine subsystems, which are 978.547, 1245.06 and 1016.406 N, respectively, were satisfactory. The nominal bending and torsional stresses acting on the shoulder of the socket module of the multifunctional robotic end-effector during the loosening and tightening of bolts, which are 1450.72 and 179.523 MPa, respectively, were satisfactory. The hammering and unpinning forces utilized by the electric cylinder actuator module of the multifunctional robotic end-effector during the unpinning and hammering of screen panel pins out of and into the screen panels were satisfactory.

  15. Handling imbalance data in churn prediction using combined SMOTE and RUS with bagging method

    NASA Astrophysics Data System (ADS)

    Pura Hartati, Eka; Adiwijaya; Arif Bijaksana, Moch

    2018-03-01

    Customer churn has become a significant problem and also a challenge for telecommunication companies such as PT. Telkom Indonesia. It is necessary to evaluate the severity of the customer churn problem so that the company's management can adopt appropriate strategies to minimize churn and retain customers. The churn data in this company, categorized as churn Atas Permintaan Sendiri (APS), are imbalanced, and this issue is one of the challenging tasks in machine learning. This study investigates handling class imbalance in churn prediction using combined Synthetic Minority Over-Sampling (SMOTE) and Random Under-Sampling (RUS) with the Bagging method for a better churn prediction performance. The dataset used is Broadband Internet data collected from Telkom Regional 6 Kalimantan. The research first applies data preprocessing to balance the imbalanced dataset and select features using the SMOTE and RUS sampling techniques, and then builds a churn prediction model using the Bagging method and C4.5.
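
    A sketch of the resampling-plus-bagging recipe using the imbalanced-learn library; the sampling ratios are placeholders rather than the study's settings, and an entropy-criterion decision tree stands in for C4.5.

      from imblearn.over_sampling import SMOTE
      from imblearn.pipeline import Pipeline
      from imblearn.under_sampling import RandomUnderSampler
      from sklearn.ensemble import BaggingClassifier
      from sklearn.tree import DecisionTreeClassifier

      model = Pipeline(steps=[
          ("smote", SMOTE(sampling_strategy=0.5)),             # oversample churners
          ("rus", RandomUnderSampler(sampling_strategy=0.8)),  # trim the majority class
          ("bagging", BaggingClassifier(
              DecisionTreeClassifier(criterion="entropy"),     # C4.5-like base learner
              n_estimators=50)),
      ])
      # model.fit(X_train, y_train); model.predict(X_test)     # with real churn data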

  16. Feasibility study of using statistical process control to customize quality assurance in proton therapy.

    PubMed

    Rah, Jeong-Eun; Shin, Dongho; Oh, Do Hoon; Kim, Tae Hyun; Kim, Gwe-Ya

    2014-09-01

    To evaluate and improve the reliability of proton quality assurance (QA) processes and to provide an optimal customized tolerance level using the statistical process control (SPC) methodology. The authors investigated the consistency check of dose per monitor unit (D/MU) and range in proton beams to see whether it was within the tolerance level of the daily QA process. This study analyzed the difference between the measured and calculated ranges along the central axis to improve the patient-specific QA process in proton beams by using process capability indices. The authors established a customized tolerance level of ±2% for D/MU and ±0.5 mm for beam range in the daily proton QA process. In their analysis of the process capability indices, the patient-specific range measurements were capable of a specification limit of ±2% in clinical plans. SPC methodology is a useful tool for customizing optimal QA tolerance levels and improving the quality of proton machine maintenance, treatment delivery, and ultimately patient safety.
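
    The process capability indices mentioned here reduce to a short calculation. The sketch below computes Cp and Cpk for daily D/MU consistency checks against the ±2% tolerance from the abstract; the sample deviation values themselves are made up for illustration.

    ```python
    # Hedged sketch of process capability indices for a daily QA check.
    import numpy as np

    def capability_indices(samples, lsl, usl):
        """Return (Cp, Cpk) for a process with lower/upper specification limits."""
        mu, sigma = np.mean(samples), np.std(samples, ddof=1)
        cp = (usl - lsl) / (6 * sigma)                 # potential capability
        cpk = min(usl - mu, mu - lsl) / (3 * sigma)    # capability given centering
        return cp, cpk

    # Daily D/MU consistency checks, as % deviation from baseline (illustrative).
    deviations = np.array([0.3, -0.5, 0.8, 0.1, -0.2, 0.6, -0.4, 0.2, 0.0, 0.5])
    cp, cpk = capability_indices(deviations, lsl=-2.0, usl=2.0)
    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")  # Cp, Cpk >= 1.33 is a common target
    ```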

  17. Assessing the use of an infrared spectrum hyperpixel array imager to measure temperature during additive and subtractive manufacturing

    NASA Astrophysics Data System (ADS)

    Whitenton, Eric; Heigel, Jarred; Lane, Brandon; Moylan, Shawn

    2016-05-01

    Accurate non-contact temperature measurement is important to optimize manufacturing processes. This applies to both additive (3D printing) and subtractive (material removal by machining) manufacturing. Accurate single-wavelength thermography faces numerous challenges. A potential alternative is hyperpixel array hyperspectral imaging. Focusing on metals, this paper discusses issues such as unknown or changing emissivity, inaccurate greybody assumptions, motion blur, and size-of-source effects. The algorithm which converts measured thermal spectra to emissivity and temperature uses a customized multistep non-linear equation solver to determine the best-fit emission curve. Emissivity dependence on wavelength may be assumed uniform or may follow a relationship typical for metals. The custom software displays residuals for intensity, temperature, and emissivity to gauge the correctness of the greybody assumption. Initial results are shown from a laser powder-bed fusion additive process, as well as a machining process. In addition, the effects of motion blur are analyzed, since they occur in both additive and subtractive manufacturing processes. In a laser powder-bed fusion additive process, the scanning laser causes the melt pool to move rapidly, producing a motion blur-like effect. In machining, measuring the temperature of the rapidly moving chip is a desirable goal for developing and validating simulations of the cutting process. A moving slit target is imaged to characterize how the measured temperature values are affected by motion of a measured target.

  18. Micro- to Macroroughness of Additively Manufactured Titanium Implants in Terms of Coagulation and Contact Activation.

    PubMed

    Klingvall Ek, Rebecca; Hong, Jaan; Thor, Andreas; Bäckström, Mikael; Rännar, Lars-Erik

    This study aimed to evaluate how as-built electron beam melting (EBM) surface properties affect the onset of blood coagulation. The properties of EBM-manufactured implant surfaces have, until now, remained largely unexplored in the literature. Implants with conventional designs and custom-made implants have been manufactured using EBM technology and later placed into the human body. Many of the conventional implants used today, such as dental implants, display modified surfaces to optimize bone ingrowth, whereas custom-made implants, by and large, have machined surfaces. However, titanium in itself demonstrates good material properties for the purpose of bone ingrowth. Specimens manufactured using EBM were selected according to their surface roughness and process parameters. EBM-produced specimens, conventional machined titanium surfaces, as well as PVC surfaces for control were evaluated using the slide chamber model. A significant increase in activation was found, in all factors evaluated, between the machined samples and EBM-manufactured samples. The results show that EBM-manufactured implants with as-built surfaces augment thrombogenic properties. EBM using Ti6Al4V powder appears to be a good manufacturing solution for load-bearing implants with bone anchorage. The as-built surfaces can be used "as is" for direct bone contact, although any surface treatment available for conventional implants can be performed on EBM-manufactured implants with a conventional design.

  19. 19 CFR 143.2 - Application.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 19 Customs Duties 2 2011-04-01 2011-04-01 false Application. 143.2 Section 143.2 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... description of the computer hardware, communications and entry processing systems to be used and the estimated...

  20. 19 CFR 143.2 - Application.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 19 Customs Duties 2 2014-04-01 2014-04-01 false Application. 143.2 Section 143.2 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... description of the computer hardware, communications and entry processing systems to be used and the estimated...

  1. 19 CFR 143.2 - Application.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 19 Customs Duties 2 2012-04-01 2012-04-01 false Application. 143.2 Section 143.2 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... description of the computer hardware, communications and entry processing systems to be used and the estimated...

  2. 19 CFR 143.2 - Application.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 19 Customs Duties 2 2013-04-01 2013-04-01 false Application. 143.2 Section 143.2 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... description of the computer hardware, communications and entry processing systems to be used and the estimated...

  3. 19 CFR 143.2 - Application.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 2 2010-04-01 2010-04-01 false Application. 143.2 Section 143.2 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY... description of the computer hardware, communications and entry processing systems to be used and the estimated...

  4. The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction.

    PubMed

    Casey, M

    1996-08-15

    Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attractor structure of such systems is given. This knowledge effectively predicts activation space dynamics, which allows one to understand RNN computation dynamics in spite of complexity in activation dynamics. This theory provides a theoretical framework for understanding finite state machine (FSM) extraction techniques and can be used to improve training methods for RNNs performing FSM computations. This provides an example of a successful approach to understanding a general class of complex systems that has not been explicitly designed, e.g., systems that have evolved or learned their internal structure.
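
    One way to see this claim in practice is the standard FSM-extraction recipe: quantize the RNN's visited states and read transitions off the quantized dynamics. The sketch below applies it to a hand-built one-unit "RNN" whose two attractors realize the parity automaton; the dynamics are a stand-in chosen for the sketch, not a trained network from the paper.

    ```python
    # Hedged sketch: extract an FSM from recurrent dynamics by clustering states.
    import numpy as np
    from sklearn.cluster import KMeans

    def rnn_step(h, x):
        # Two attractors near +/-0.96; input 1 swaps basins, input 0 keeps them,
        # mimicking the robust attractor structure the paper describes.
        return np.tanh(2.0 * h) if x == 0 else np.tanh(-2.0 * h)

    # Collect visited states over random input strings.
    rng = np.random.default_rng(0)
    states = []
    for _ in range(200):
        h = 0.9  # start state ("even parity")
        for x in rng.integers(0, 2, size=20):
            h = rnn_step(h, x)
            states.append(h)

    # Cluster states; each cluster becomes one FSM state.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(np.array(states)[:, None])
    label = lambda h: int(km.predict([[h]])[0])

    # Read transitions off the cluster centers -> a deterministic FSM (parity).
    for s, center in enumerate(km.cluster_centers_.ravel()):
        for x in (0, 1):
            print(f"state {s} --{x}--> state {label(rnn_step(center, x))}")
    ```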

  5. Integrating robotic action with biologic perception: A brain-machine symbiosis theory

    NASA Astrophysics Data System (ADS)

    Mahmoudi, Babak

    In patients with motor disability, the natural cyclic flow of information between the brain and the external environment is disrupted by their limb impairment. Brain-Machine Interfaces (BMIs) aim to provide new communication channels between the brain and environment by direct translation of the brain's internal states into actions. For enabling the user in a wide range of daily life activities, the challenge is designing neural decoders that autonomously adapt to different tasks, environments, and changes in the pattern of neural activity. In this dissertation, a novel decoding framework for BMIs is developed in which a computational agent autonomously learns how to translate neural states into action based on maximization of a measure of the goal shared between user and agent. Since the agent and the brain share the same goal, a symbiotic relationship between them evolves; this decoding paradigm is therefore called a Brain-Machine Symbiosis (BMS) framework. A decoding agent was implemented within the BMS framework based on the Actor-Critic method of Reinforcement Learning. The role of the Actor as a neural decoder was to find a mapping between the neural representation of motor states in the primary motor cortex (MI) and robot actions in order to solve reaching tasks. The Actor learned the optimal control policy using an evaluative feedback that was estimated by the Critic directly from the user's neural activity in the Nucleus Accumbens (NAcc). Through a series of computational neuroscience studies in a cohort of rats, it was demonstrated that NAcc could provide a useful evaluative feedback by predicting the increase or decrease in the probability of earning reward based on the environmental conditions. Using a closed-loop BMI simulator, it was demonstrated that the Actor-Critic decoding architecture was able to adapt to different tasks as well as to changes in the pattern of neural activity. The custom design of a dual micro-wire array enabled simultaneous implantation in MI and NAcc for the development of a full closed-loop system. The Actor-Critic decoding architecture was able to solve the brain-controlled reaching task using a robotic arm by capturing the interdependency between the simultaneous action representation in MI and reward expectation in NAcc.

  6. Building machines that adapt and compute like brains.

    PubMed

    Kriegeskorte, Nikolaus; Mok, Robert M

    2017-01-01

    Building machines that learn and think like humans is essential not only for cognitive science, but also for computational neuroscience, whose ultimate goal is to understand how cognition is implemented in biological brains. A new cognitive computational neuroscience should build cognitive-level and neural-level models, understand their relationships, and test both types of models with both brain and behavioral data.

  7. Hybrid EEG-EOG brain-computer interface system for practical machine control.

    PubMed

    Punsawad, Yunyong; Wongsawat, Yodchanan; Parnichkun, Manukid

    2010-01-01

    Practical issues such as accuracy across subjects, number of sensors, and training time are important problems for existing brain-computer interface (BCI) systems. In this paper, we propose a hybrid framework for the BCI system that can make machine control more practical. The electrooculogram (EOG) is employed to control the machine in the left and right directions, while the electroencephalogram (EEG) is employed to control the forward, no-action, and complete-stop motions of the machine. By using only 2-channel biosignals, an average classification accuracy of more than 95% can be achieved.

  8. Fullrmc, a rigid body Reverse Monte Carlo modeling package enabled with machine learning and artificial intelligence.

    PubMed

    Aoun, Bachir

    2016-05-05

    A new Reverse Monte Carlo (RMC) package "fullrmc" for atomic or rigid body and molecular, amorphous, or crystalline materials is presented. fullrmc's main purpose is to provide fully modular, fast and flexible software, thoroughly documented, complex-molecule enabled, written in a modern programming language (python, cython, C and C++ when performance is needed) and complying with modern programming practices. fullrmc's approach to solving an atomic or molecular structure is different from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves endorsed with reinforcement machine learning to groups of atoms. While fullrmc allows running traditional RMC modeling, the uniqueness of this approach resides in its ability to customize grouping atoms in any convenient way with no additional programming efforts and to apply smart and more physically meaningful moves to the defined groups of atoms. In addition, fullrmc provides a unique way, with almost no additional computational cost, to recur a group's selection, allowing the system to escape local minima by refining a group's position or by exploring the unrestricted three-dimensional space around a group, through and beyond disallowed positions and energy barriers. © 2016 Wiley Periodicals, Inc.
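
    For contrast with fullrmc's smart group moves, the sketch below shows the "traditional RMC" baseline the abstract describes: perturb one atom at a time and accept a move only if agreement with pair-distance data improves. It is a generic illustration with synthetic target data, not fullrmc's actual API.

    ```python
    # Hedged sketch of a traditional single-atom RMC loop (generic, not fullrmc).
    import numpy as np

    rng = np.random.default_rng(1)
    N, L = 64, 10.0                      # atoms in a cubic box of side L
    pos = rng.uniform(0, L, size=(N, 3))
    bins = np.linspace(0.0, L / 2, 40)

    def histogram(p):
        # Pair-distance histogram, a crude stand-in for experimental data.
        d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
        return np.histogram(d[np.triu_indices(N, k=1)], bins=bins)[0].astype(float)

    target = histogram(rng.uniform(0, L, size=(N, 3)))  # synthetic "experiment"
    chi2 = lambda p: np.sum((histogram(p) - target) ** 2)

    cost = chi2(pos)
    for step in range(2000):
        i = rng.integers(N)
        trial = pos.copy()
        trial[i] = (trial[i] + rng.normal(scale=0.3, size=3)) % L  # random move
        new_cost = chi2(trial)
        if new_cost <= cost:             # greedy RMC acceptance
            pos, cost = trial, new_cost
    print("final chi^2:", cost)
    ```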

  9. Fullrmc, a rigid body reverse monte carlo modeling package enabled with machine learning and artificial intelligence

    DOE PAGES

    Aoun, Bachir

    2016-01-22

    Here, a new Reverse Monte Carlo (RMC) package ‘fullrmc’ for atomic or rigid body and molecular, amorphous or crystalline materials is presented. fullrmc's main purpose is to provide fully modular, fast and flexible software, thoroughly documented, complex-molecule enabled, written in a modern programming language (python, cython, C and C++ when performance is needed) and complying with modern programming practices. fullrmc's approach to solving an atomic or molecular structure is different from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves endorsed with reinforcement machine learning to groups of atoms. While fullrmc allows running traditional RMC modeling, the uniqueness of this approach resides in its ability to customize grouping atoms in any convenient way with no additional programming efforts and to apply smart and more physically meaningful moves to the defined groups of atoms. Also, fullrmc provides a unique way, with almost no additional computational cost, to recur a group's selection, allowing the system to escape local minima by refining a group's position or by exploring the unrestricted three-dimensional space around a group, through and beyond disallowed positions and energy barriers.

  10. Fullrmc, a rigid body reverse monte carlo modeling package enabled with machine learning and artificial intelligence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aoun, Bachir

    Here, a new Reverse Monte Carlo (RMC) package ‘fullrmc’ for atomic or rigid body and molecular, amorphous or crystalline materials is presented. fullrmc's main purpose is to provide fully modular, fast and flexible software, thoroughly documented, complex-molecule enabled, written in a modern programming language (python, cython, C and C++ when performance is needed) and complying with modern programming practices. fullrmc's approach to solving an atomic or molecular structure is different from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves endorsed with reinforcement machine learning to groups of atoms. While fullrmc allows running traditional RMC modeling, the uniqueness of this approach resides in its ability to customize grouping atoms in any convenient way with no additional programming efforts and to apply smart and more physically meaningful moves to the defined groups of atoms. Also, fullrmc provides a unique way, with almost no additional computational cost, to recur a group's selection, allowing the system to escape local minima by refining a group's position or by exploring the unrestricted three-dimensional space around a group, through and beyond disallowed positions and energy barriers.

  11. Study of the scan uniformity from an i-CAT cone beam computed tomography dental imaging system.

    PubMed

    Bryant, J A; Drage, N A; Richmond, S

    2008-10-01

    As part of an ongoing programme to improve diagnosis and treatment planning relevant to implant placement, orthodontic treatment and dentomaxillofacial surgery, a study has been made of the spatial accuracy and density response of an i-CAT, a cone beam CT (CBCT) dental imaging system supplied by Imaging Sciences International Inc. Custom-made phantoms using acrylic sheet and water were used for measurements on spatial accuracy, density response and noise. The measurements were made over a period of several months on a clinical machine rather than on a machine dedicated to research. Measurements on a precision grid showed the spatial accuracy to be universally within the tolerance of +/-1 pixel. The density response and the noise in the data were found to depend strongly on the mass in the slice being scanned. The density response was subject to two effects. The first effect changes the whole slice uniformly and linearly depends on the total mass in the slice. The second effect exists when there is mass outside the field of view, dubbed the "exo-mass" effect. This effect lowers the measured CT number rapidly at the scan edge furthest from the exo-mass and raises it on the adjacent edge. The noise also depended quasi-linearly on the mass in the slice. Some general performance rules were drafted to describe these effects and a preliminary correction algorithm was constructed.

  12. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  13. Economical Emission-Line Mapping: ISM Properties of Nearby Protogalaxy Analogs

    NASA Astrophysics Data System (ADS)

    Monkiewicz, Jacqueline A.

    2017-01-01

    Optical emission line imaging can produce a wealth of information about the conditions of the interstellar medium, but a full set of custom emission-line filters for a professional-grade telescope camera can cost many thousands of dollars. A cheaper alternative is to use commercially produced 2-inch narrow-band astrophotography filters. In order to use these standardized filters with professional-grade telescope cameras, custom filter mounts must be manufactured for each individual filter wheel. These custom filter adaptors are produced by 3-D printing rather than standard machining, which further lowers the total cost. I demonstrate the feasibility of this technique with H-alpha, H-beta, and [OIII] emission line mapping of the low-metallicity star-forming galaxies IC10 and NGC 1569, taken with my astrophotography filter set on three different 2-meter class telescopes in Southern Arizona.

  14. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead, with the exception of the communications required for a global summation across the processors, which has sub-linear runtime growth on the order of O(log(number of processors)). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor communications. This paper considers the simulation of only feed-forward neural networks, although the method is extendable to recurrent networks.
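
    The data-parallel mapping described here boils down to a local gradient computation per processor plus one global summation. The sketch below simulates that pattern in plain numpy with a one-layer logistic "network"; the shard count, model, and data are illustrative assumptions, not the compiler's actual mapping.

    ```python
    # Hedged sketch: per-processor gradient shards combined by a global sum.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1024, 8))                  # inputs
    Y = (X.sum(axis=1, keepdims=True) > 0) * 1.0    # toy targets
    W = rng.normal(scale=0.1, size=(8, 1))          # one-layer "network" weights

    def local_gradient(Xs, Ys, W):
        # Forward pass and gradient for one processor's shard (logistic output).
        p = 1.0 / (1.0 + np.exp(-Xs @ W))
        return Xs.T @ (p - Ys) / len(Xs)

    P = 8  # number of simulated SIMD processors
    for epoch in range(100):
        shards = zip(np.array_split(X, P), np.array_split(Y, P))
        grads = [local_gradient(Xs, Ys, W) for Xs, Ys in shards]
        W -= 0.5 * np.mean(grads, axis=0)   # the O(log P) global summation step
    print("training accuracy:",
          np.mean(((1 / (1 + np.exp(-X @ W))) > 0.5) == (Y > 0.5)))
    ```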

  15. 14 CFR 1214.107 - Postponement.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Space Shuttle Flights of Payloads for Non-U.S. Government, Reimbursable Customers § 1214.107...) A customer postponing the flight of a payload will pay a postponement fee to NASA. The fee will be computed as a percentage of the customer's Shuttle standard flight price and will be based on the table...

  16. 14 CFR 1214.107 - Postponement.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Space Shuttle Flights of Payloads for Non-U.S. Government, Reimbursable Customers § 1214.107...) A customer postponing the flight of a payload will pay a postponement fee to NASA. The fee will be computed as a percentage of the customer's Shuttle standard flight price and will be based on the table...

  17. 14 CFR 1214.107 - Postponement.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Space Shuttle Flights of Payloads for Non-U.S. Government, Reimbursable Customers § 1214.107...) A customer postponing the flight of a payload will pay a postponement fee to NASA. The fee will be computed as a percentage of the customer's Shuttle standard flight price and will be based on the table...

  18. 14 CFR 1214.107 - Postponement.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Space Shuttle Flights of Payloads for Non-U.S. Government, Reimbursable Customers § 1214.107...) A customer postponing the flight of a payload will pay a postponement fee to NASA. The fee will be computed as a percentage of the customer's Shuttle standard flight price and will be based on the table...

  19. 19 CFR 181.12 - Maintenance and availability of records.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 19 Customs Duties 2 2011-04-01 2011-04-01 false Maintenance and availability of records. 181.12 Section 181.12 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... or in automated record storage devices (for example, magnetic discs and tapes) if associated computer...

  20. 19 CFR 181.12 - Maintenance and availability of records.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 19 Customs Duties 2 2013-04-01 2013-04-01 false Maintenance and availability of records. 181.12 Section 181.12 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... or in automated record storage devices (for example, magnetic discs and tapes) if associated computer...

  1. 19 CFR 181.12 - Maintenance and availability of records.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 19 Customs Duties 2 2012-04-01 2012-04-01 false Maintenance and availability of records. 181.12 Section 181.12 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... or in automated record storage devices (for example, magnetic discs and tapes) if associated computer...

  2. 19 CFR 181.12 - Maintenance and availability of records.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 2 2010-04-01 2010-04-01 false Maintenance and availability of records. 181.12 Section 181.12 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... or in automated record storage devices (for example, magnetic discs and tapes) if associated computer...

  3. Technologies for developing an advanced intelligent ATM with self-defence capabilities

    NASA Astrophysics Data System (ADS)

    Sako, Hiroshi

    2010-01-01

    We have developed several technologies for protecting automated teller machines. These technologies are based mainly on pattern recognition and are used to implement various self-defence functions. They include (i) banknote recognition and information retrieval for preventing machines from accepting counterfeit and damaged banknotes and for retrieving information about detected counterfeits from a relational database, (ii) form processing and character recognition for preventing machines from accepting remittance forms without due dates and/or with insufficient payment, (iii) person identification to prevent machines from transacting with non-customers, and (iv) object recognition to guard machines against foreign objects such as spy cams that might be surreptitiously attached to them and to protect users against someone attempting to peek at their user information such as their personal identification number. The person identification technology has been implemented in most ATMs in Japan, and field tests have demonstrated that the banknote recognition technology can recognise more than 200 types of banknote from 30 different countries. We are developing an "advanced intelligent ATM" that incorporates all of these technologies.

  4. Computerized Machine for Cutting Space Shuttle Thermal Tiles

    NASA Technical Reports Server (NTRS)

    Ramirez, Luis E.; Reuter, Lisa A.

    2009-01-01

    A report presents the concept of a machine aboard the space shuttle that would cut oversized thermal-tile blanks to precise sizes and shapes needed to replace tiles that were damaged or lost during ascent to orbit. The machine would include a computer-controlled jigsaw enclosed in a clear acrylic shell that would prevent escape of cutting debris. A vacuum motor would collect the debris into a reservoir and would hold a tile blank securely in place. A database stored in the computer would contain the unique shape and dimensions of every tile. Once a broken or missing tile was identified, its identification number would be entered into the computer, wherein the cutting pattern associated with that number would be retrieved from the database. A tile blank would be locked into a crib in the machine, the shell would be closed (proximity sensors would prevent activation of the machine while the shell was open), and a "cut" command would be sent from the computer. A blade would be moved around the crib like a plotter, cutting the tile to the required size and shape. Once the tile was cut, an astronaut would take a space walk for installation.

  5. Design and application of the falling vertical sorting machine

    NASA Astrophysics Data System (ADS)

    Zuo, Ping; Peng, Tao; Yang, Hai

    2018-04-01

    In tobacco production, cigarettes must be packed according to the needs of different customers, and sorting machines are used to pick the products. At present there are launch-channel machines and percussive vertical machines. In the sorting process, however, the rolling-channel machine behaves differently depending on pack quality and friction, making it difficult to guarantee the position and posture of packs on the belt sorting line, so the manipulator fails to grasp them; the percussive vertical machine, in turn, struggles to keep the packs parallel. This team has therefore developed a falling vertical sorting machine, which solves the drop of the pack onto the transmission belt: no missing codes occur, most types of cigarette packs can be sorted, and the packs are not damaged. Dynamic characteristics such as the angular error of the opening and closing mechanism were analyzed with ADAMS software; the simulation shows a maximum angular error of 0.016 rad. Tests of the device give a drop rate of 7031 packs per hour with a drop-position error within 2 mm, meeting the grasp-accuracy requirements of the palletizing robot.

  6. Tomography and generative training with quantum Boltzmann machines

    NASA Astrophysics Data System (ADS)

    Kieferová, Mária; Wiebe, Nathan

    2017-12-01

    The promise of quantum neural nets, which utilize quantum effects to model complex data sets, has made their development an aspirational goal for quantum machine learning and quantum computing in general. Here we provide methods of training quantum Boltzmann machines. Our work generalizes existing methods and provides additional approaches for training quantum neural networks that compare favorably to existing methods. We further demonstrate that quantum Boltzmann machines enable a form of partial quantum state tomography that further provides a generative model for the input quantum state. Classical Boltzmann machines are incapable of this. This verifies the long-conjectured connection between tomography and quantum machine learning. Finally, we prove that classical computers cannot simulate our training process in general unless BQP=BPP, provide lower bounds on the complexity of the training procedures and numerically investigate training for small nonstoquastic Hamiltonians.

  7. 75 FR 28616 - Agilent Technologies, Inc.; Analysis of the Agreement Containing Consent Order to Aid Public Comment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-21

    ... equipment used to test cell phones and communications equipment, machines that determine the contents of... employ various analytical techniques to test samples of many types, are used by academic researchers... require the sensitivity provided by ICP-MS, and because many customers perform tests pursuant to...

  8. 16 CFR 423.6 - Textile wearing apparel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... easily found when the product is offered for sale to consumers. If the product is packaged, displayed, or folded so that customers cannot see or easily find the label, the care information must also appear on... washed by hand or machine. The label must also state a water temperature—in terms such as cold, warm, or...

  9. 19 CFR 163.5 - Methods for storage of records.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... standard business practice for storage of records include, but are not limited to, machine readable data... 19 Customs Duties 2 2012-04-01 2012-04-01 false Methods for storage of records. 163.5 Section 163... THE TREASURY (CONTINUED) RECORDKEEPING § 163.5 Methods for storage of records. (a) Original records...

  10. 19 CFR 163.5 - Methods for storage of records.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... standard business practice for storage of records include, but are not limited to, machine readable data... 19 Customs Duties 2 2011-04-01 2011-04-01 false Methods for storage of records. 163.5 Section 163... THE TREASURY (CONTINUED) RECORDKEEPING § 163.5 Methods for storage of records. (a) Original records...

  11. Custom blending of lamp phosphors

    NASA Technical Reports Server (NTRS)

    Klemm, R. E.

    1978-01-01

    Spectral output of fluorescent lamps can be precisely adjusted by using computer-assisted analysis for custom blending lamp phosphors. With technique, spectrum of main bank of lamps is measured and stored in computer memory along with emission characteristics of commonly available phosphors. Computer then calculates ratio of green and blue intensities for each phosphor according to manufacturer's specifications and plots them as coordinates on graph. Same ratios are calculated for measured spectrum. Once proper mix is determined, it is applied as coating to fluorescent tubing.

  12. Teaching Machines, Programming, Computers, and Instructional Technology: The Roots of Performance Technology.

    ERIC Educational Resources Information Center

    Deutsch, William

    1992-01-01

    Reviews the history of the development of the field of performance technology. Highlights include early teaching machines, instructional technology, learning theory, programed instruction, the systems approach, needs assessment, branching versus linear program formats, programing languages, and computer-assisted instruction. (LRW)

  13. Using Support Vector Machine Ensembles for Target Audience Classification on Twitter

    PubMed Central

    Lo, Siaw Ling; Chiong, Raymond; Cornforth, David

    2015-01-01

    The vast amount and diversity of the content shared on social media can pose a challenge for any business wanting to use it to identify potential customers. In this paper, our aim is to investigate the use of both unsupervised and supervised learning methods for target audience classification on Twitter with minimal annotation efforts. Topic domains were automatically discovered from contents shared by followers of an account owner using Twitter Latent Dirichlet Allocation (LDA). A Support Vector Machine (SVM) ensemble was then trained using contents from different account owners of the various topic domains identified by Twitter LDA. Experimental results show that the methods presented are able to successfully identify a target audience with high accuracy. In addition, we show that using a statistical inference approach such as bootstrapping in over-sampling, instead of using random sampling, to construct training datasets can achieve a better classifier in an SVM ensemble. We conclude that such an ensemble system can take advantage of data diversity, which enables real-world applications for differentiating prospective customers from the general audience, leading to business advantage in the crowded social media space. PMID:25874768

  14. Using support vector machine ensembles for target audience classification on Twitter.

    PubMed

    Lo, Siaw Ling; Chiong, Raymond; Cornforth, David

    2015-01-01

    The vast amount and diversity of the content shared on social media can pose a challenge for any business wanting to use it to identify potential customers. In this paper, our aim is to investigate the use of both unsupervised and supervised learning methods for target audience classification on Twitter with minimal annotation efforts. Topic domains were automatically discovered from contents shared by followers of an account owner using Twitter Latent Dirichlet Allocation (LDA). A Support Vector Machine (SVM) ensemble was then trained using contents from different account owners of the various topic domains identified by Twitter LDA. Experimental results show that the methods presented are able to successfully identify a target audience with high accuracy. In addition, we show that using a statistical inference approach such as bootstrapping in over-sampling, instead of using random sampling, to construct training datasets can achieve a better classifier in an SVM ensemble. We conclude that such an ensemble system can take advantage of data diversity, which enables real-world applications for differentiating prospective customers from the general audience, leading to business advantage in the crowded social media space.
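
    The core ensemble construction can be sketched compactly: train several SVMs on bootstrap-resampled training sets and combine them by majority vote. The sketch below assumes scikit-learn and replaces the Twitter LDA topic features with a synthetic feature matrix for illustration; it is not the authors' exact pipeline.

    ```python
    # Hedged sketch: an SVM ensemble over bootstrap (over-sampled) training sets.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=2000, n_features=50, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    rng = np.random.default_rng(0)
    models = []
    for _ in range(9):  # odd count so the majority vote cannot tie
        idx = rng.integers(0, len(X_tr), size=len(X_tr))  # bootstrap over-sampling
        models.append(SVC(kernel="linear").fit(X_tr[idx], y_tr[idx]))

    votes = np.mean([m.predict(X_te) for m in models], axis=0)
    print("ensemble accuracy:", np.mean((votes > 0.5) == y_te))
    ```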

  15. The paradigm compiler: Mapping a functional language for the connection machine

    NASA Technical Reports Server (NTRS)

    Dennis, Jack B.

    1989-01-01

    The Paradigm Compiler implements a new approach to compiling programs written in high level languages for execution on highly parallel computers. The general approach is to identify the principal data structures constructed by the program and to map these structures onto the processing elements of the target machine. The mapping is chosen to maximize performance as determined through compile time global analysis of the source program. The source language is Sisal, a functional language designed for scientific computations, and the target language is Paris, the published low level interface to the Connection Machine. The data structures considered are multidimensional arrays whose dimensions are known at compile time. Computations that build such arrays usually offer opportunities for highly parallel execution; they are data parallel. The Connection Machine is an attractive target for these computations, and the parallel for construct of the Sisal language is a convenient high level notation for data parallel algorithms. The principles and organization of the Paradigm Compiler are discussed.

  16. Hardware Acceleration of Adaptive Neural Algorithms.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James, Conrad D.

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  17. Prediction based proactive thermal virtual machine scheduling in green clouds.

    PubMed

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but research on Cloud computing is still at an early stage. Cloud computing provides many advanced features but still has some shortcomings, such as relatively high operating cost and environmental hazards like increasing carbon footprints. These hazards can be reduced to some extent through efficient scheduling of Cloud resources. The working temperature at which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions, with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing VM scheduling systems, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated.
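
    The proactive rule in this abstract can be expressed in a few lines: predict each server machine's temperature after placing a VM and schedule onto a machine whose prediction stays below its threshold. The linear temperature predictor, thresholds, and loads below are illustrative assumptions, not the paper's model.

    ```python
    # Hedged sketch of threshold-aware, temperature-predictive VM placement.
    from dataclasses import dataclass

    @dataclass
    class ServerMachine:
        name: str
        current_temp: float      # degrees C
        max_threshold: float     # temperature that must never be reached
        temp_per_load: float     # predicted heating per unit of added load

    def predicted_temp(sm: ServerMachine, vm_load: float) -> float:
        # Toy linear predictor standing in for the paper's temperature predictor.
        return sm.current_temp + sm.temp_per_load * vm_load

    def schedule(vm_load, machines):
        # Keep only machines predicted to stay strictly below threshold,
        # then prefer the coolest predicted outcome.
        safe = [m for m in machines if predicted_temp(m, vm_load) < m.max_threshold]
        return min(safe, key=lambda m: predicted_temp(m, vm_load)) if safe else None

    cluster = [ServerMachine("sm1", 62.0, 70.0, 1.5),
               ServerMachine("sm2", 55.0, 70.0, 2.0),
               ServerMachine("sm3", 68.0, 70.0, 1.0)]
    choice = schedule(vm_load=4.0, machines=cluster)
    print("placing VM on:", choice.name if choice else "none (all too hot)")
    ```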

  18. Vacation model for Markov machine repair problem with two heterogeneous unreliable servers and threshold recovery

    NASA Astrophysics Data System (ADS)

    Jain, Madhu; Meena, Rakesh Kumar

    2018-03-01

    A Markov model of a multi-component machining system comprising two unreliable heterogeneous servers and mixed standby support has been studied. The repair of broken-down machines is carried out according to a bi-level threshold policy for the activation of the servers. A server resumes repair work once a pre-specified workload of failed machines has built up: the first (second) repairman turns on only when a workload of N1 (N2) failed machines has accumulated in the system. Both servers may go on vacation when all the machines are in good condition and there are no pending repair jobs for the repairmen. The Runge-Kutta method is implemented to solve the set of governing equations of the Markov model. Various system metrics, including the mean queue length, machine availability, and throughput, are derived to determine the performance of the machining system. To establish the computational tractability of the present investigation, a numerical illustration is provided. A cost function is also constructed to determine the optimal repair rate of the servers by minimizing the expected cost incurred by the system. A hybrid soft-computing method is used to develop an adaptive neuro-fuzzy inference system (ANFIS), and the numerical results obtained by the Runge-Kutta approach are validated against the results generated by ANFIS.
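
    The Runge-Kutta treatment of such a model can be sketched by integrating the Kolmogorov forward equations of a threshold-activated machine-repair chain, as below, using scipy's RK45 solver. The simplified birth-death structure (no vacations) and all rates and thresholds are illustrative assumptions, not the paper's parameters.

    ```python
    # Hedged sketch: RK45 integration of a threshold-activated repair model.
    import numpy as np
    from scipy.integrate import solve_ivp

    M, lam, mu1, mu2, N1, N2 = 10, 0.1, 1.0, 0.7, 1, 3

    def rates(k):
        # k = number of failed machines.
        birth = (M - k) * lam                                   # another failure
        death = (mu1 if k >= N1 else 0.0) + (mu2 if k >= N2 else 0.0)  # repairs
        return birth, death

    def kolmogorov(t, p):
        # Forward equations dp_k/dt for the birth-death chain.
        dp = np.zeros_like(p)
        for k in range(M + 1):
            b, d = rates(k)
            dp[k] -= (b + d) * p[k]
            if k > 0:
                dp[k] += rates(k - 1)[0] * p[k - 1]   # inflow via a failure
            if k < M:
                dp[k] += rates(k + 1)[1] * p[k + 1]   # inflow via a repair
        return dp

    p0 = np.eye(M + 1)[0]  # start with all machines up
    sol = solve_ivp(kolmogorov, (0, 100), p0, method="RK45")
    p = sol.y[:, -1]
    print("mean number of failed machines:", np.dot(np.arange(M + 1), p))
    ```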

  19. Personal Decision Factors Considered by Information Technology Executives: Their Impacts on Business Intentions and Consequent Cloud Computing Services Adoption Rates

    ERIC Educational Resources Information Center

    Smith, Marcus L., Jr.

    2016-01-01

    During its infancy, the cloud computing industry was the province largely of small and medium-sized business customers. Despite their size, these companies required a professionally run, yet economical information technology (IT) operation. These customers used a total value strategy whereby they avoided paying for essential, yet underutilized,…

  20. Customer Service: What I Learned When I Bought My New Computer

    ERIC Educational Resources Information Center

    Neugebauer, Roger

    2009-01-01

    In this article, the author relates that similar to the time he bought his new computer, he had the opportunity to experience poor customer service when he and his wife signed their child up for a preschool program. They learned that the staff at the preschool didn't want parents looking over their shoulders and questioning their techniques. He…

  1. Exploring Customization in Higher Education: An Experiment in Leveraging Computer Spreadsheet Technology to Deliver Highly Individualized Online Instruction to Undergraduate Business Students

    ERIC Educational Resources Information Center

    Kunzler, Jayson S.

    2012-01-01

    This dissertation describes a research study designed to explore whether customization of online instruction results in improved learning in a college business statistics course. The study involved utilizing computer spreadsheet technology to develop an intelligent tutoring system (ITS) designed to: a) collect and monitor individual real-time…

  2. Motion camera based on a custom vision sensor and an FPGA architecture

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel

    1998-09-01

    A digital camera for custom focal plane arrays was developed. The camera allows the testing and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the event-address protocol associated with a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC, which is used for high-level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing such as spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation and the FPGA architecture used in the motion camera system.

  3. Measurement value analysis overall equipment effectiveness (OEE) packaging process in line 2 (Case Study of PT. MBI Tbk)

    NASA Astrophysics Data System (ADS)

    Rimawan, Erry; Kholil, Muhammad; Hendri

    2018-03-01

    PT. MBI Tbk is engaged in the beverage manufacturing industry, where production is driven by the magnitude of customer demand from marketing offices scattered across various regions of Indonesia. The packaging process at PT. MBI runs through three lines: racking, canning, and bottling. In the canning process on Line 2 (the canning line), several machines run continuously, among them the depalletizer, filler, can seamer, pasteurizer, FLD machine, wrap-around packer, and shrink wrapper. Because customer demand is large and relentless, a calculation of the overall equipment effectiveness (OEE) of Line 2 as a whole is needed in order to support continuous improvement on the line. This study aims to determine the OEE value and to identify the most influential of the six big OEE losses, treating the equipment on Line 2 as a single unit, and then to trace the root causes of the losses observed in the field. The OEE calculation shows two ratios that are still poor and below world-class standards: availability is 88.85% against the 90% world-class standard and performance is 78.51% against the 95% standard, whereas the quality ratio, at 99.90%, meets the world-class standard. The OEE of Line 2 is thus below world-class standards. Only five losses could be identified in this study; the most influential, reduced-speed loss, accounted for the largest share at a rate of 19.12%. These losses arose from a poor supervision system, which allows employees or operators to deviate from the predetermined way of working.
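
    The OEE arithmetic behind these figures is simply the product of the three ratios, as the short calculation below shows using the reported values and the commonly cited world-class benchmarks.

    ```python
    # OEE = availability x performance x quality, with the abstract's figures.
    availability, performance, quality = 0.8885, 0.7851, 0.9990
    oee = availability * performance * quality
    print(f"OEE = {oee:.1%}")            # ~69.7%, below the 85% world-class level
    for name, value, benchmark in [("availability", availability, 0.90),
                                   ("performance", performance, 0.95),
                                   ("quality", quality, 0.999)]:
        status = "meets" if value >= benchmark else "below"
        print(f"{name}: {value:.2%} ({status} world-class {benchmark:.1%})")
    ```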

  4. A Field Programmable Gate Array-Based Reconfigurable Smart-Sensor Network for Wireless Monitoring of New Generation Computer Numerically Controlled Machines

    PubMed Central

    Moreno-Tapia, Sandra Veronica; Vera-Salas, Luis Alberto; Osornio-Rios, Roque Alfredo; Dominguez-Gonzalez, Aurelio; Stiharu, Ion; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Computer numerically controlled (CNC) machines have evolved to adapt to increasing technological and industrial requirements. To cover these needs, new generation machines have to perform monitoring strategies by incorporating multiple sensors. Since in most applications the online processing of the variables is essential, the use of smart sensors is necessary. The contribution of this work is the development of a wireless network platform of reconfigurable smart sensors for CNC machine applications complying with the measurement requirements of new generation CNC machines. Four different smart sensors are put under test in the network and their corresponding signal processing techniques are implemented in a Field Programmable Gate Array (FPGA)-based sensor node. PMID:22163602

  5. A field programmable gate array-based reconfigurable smart-sensor network for wireless monitoring of new generation computer numerically controlled machines.

    PubMed

    Moreno-Tapia, Sandra Veronica; Vera-Salas, Luis Alberto; Osornio-Rios, Roque Alfredo; Dominguez-Gonzalez, Aurelio; Stiharu, Ion; Romero-Troncoso, Rene de Jesus

    2010-01-01

    Computer numerically controlled (CNC) machines have evolved to adapt to increasing technological and industrial requirements. To cover these needs, new generation machines have to perform monitoring strategies by incorporating multiple sensors. Since in most applications the online processing of the variables is essential, the use of smart sensors is necessary. The contribution of this work is the development of a wireless network platform of reconfigurable smart sensors for CNC machine applications complying with the measurement requirements of new generation CNC machines. Four different smart sensors are put under test in the network and their corresponding signal processing techniques are implemented in a Field Programmable Gate Array (FPGA)-based sensor node.

  6. A Machine Learning Framework to Forecast Wave Conditions

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; James, S. C.; O'Donncha, F.

    2017-12-01

    Recently, significant effort has been undertaken to quantify and extract wave energy because it is renewable, environmentally friendly, abundant, and often close to population centers. However, a major challenge is the ability to accurately and quickly predict energy production, especially across a 48-hour cycle. Accurate forecasting of wave conditions is a challenging undertaking that typically involves solving the spectral action-balance equation on a discretized grid with high spatial resolution. The nature of the computations typically demands high-performance computing infrastructure. Using a case-study site at Monterey Bay, California, a machine learning framework was trained to replicate numerically simulated wave conditions at a fraction of the typical computational cost. Specifically, the physics-based Simulating WAves Nearshore (SWAN) model, driven by measured wave conditions, nowcast ocean currents, and wind data, was used to generate training data for machine learning algorithms. The model was run between April 1st, 2013 and May 31st, 2017, generating forecasts at three-hour intervals and yielding 11,078 distinct model outputs. SWAN-generated fields of 3,104 wave heights and a characteristic period could be replicated through simple matrix multiplications using the mapping matrices from machine learning algorithms. In fact, wave-height RMSEs from the machine learning algorithms (9 cm) were less than those for the SWAN model-verification exercise, where those simulations were compared to buoy wave data within the model domain (>40 cm). The validated machine learning approach, which acts as an accurate surrogate for the SWAN model, can now be used to perform real-time forecasts of wave conditions for the next 48 hours using available forecasted boundary wave conditions, ocean currents, and winds. This solution has obvious applications to wave-energy generation, as accurate wave conditions can be forecasted with over a three-order-of-magnitude reduction in computational expense. The low computational cost (and by association low computer-power requirement) means that the machine learning algorithms could be installed on a wave-energy converter as a form of "edge computing", where a device could forecast its own 48-hour energy production.
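
    The "simple matrix multiplications" surrogate can be sketched as one least-squares fit from forcing features to the simulated wave-height field. The sketch below uses synthetic stand-in data (only the 3,104-point field size follows the abstract; everything else is illustrative), not the authors' trained model.

    ```python
    # Hedged sketch: a linear mapping-matrix surrogate for a wave model.
    import numpy as np

    rng = np.random.default_rng(0)
    n_runs, n_feat, n_grid = 1000, 12, 3104       # 3,104 heights per the abstract
    X = rng.normal(size=(n_runs, n_feat))         # forcing features per model run
    truth = rng.normal(size=(n_feat, n_grid))     # unknown "physics" (synthetic)
    H = X @ truth + 0.01 * rng.normal(size=(n_runs, n_grid))  # SWAN-style outputs

    # Training: one least-squares solve yields the mapping matrix.
    W, *_ = np.linalg.lstsq(X, H, rcond=None)

    # Forecasting: a single matrix multiplication per new forcing vector.
    x_new = rng.normal(size=(1, n_feat))
    h_forecast = x_new @ W                        # 3,104 wave heights at once
    print("RMSE vs synthetic truth:",
          np.sqrt(np.mean((h_forecast - x_new @ truth) ** 2)))
    ```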

  7. Configurable product design considering the transition of multi-hierarchical models

    NASA Astrophysics Data System (ADS)

    Ren, Bin; Qiu, Lemiao; Zhang, Shuyou; Tan, Jianrong; Cheng, Jin

    2013-03-01

    The current research on configurable product design mainly focuses on how to convert a predefined set of components into a valid set of product structures. As the scale and complexity of configurable products increase, the interdependencies between customer demands and product structures grow as well. The result is that existing product structures fail to satisfy individual customer requirements, and hence product variants are needed. This paper aims to build a bridge between customer demands and product structures in order to make demand-driven, fast-response design feasible. First of all, multi-hierarchical models of configurable product design are established: a customer demand model, a technical requirement model, and a product structure model. Then, the transition among the customer demand, technical requirement, and product structure models is solved with the fuzzy analytic hierarchy process (FAHP) and a multi-level matching algorithm. Finally, the optimal structure for given customer demands is obtained by calculating the Euclidean distance and similarity over a set of cases. In practice, the configuration design of a clamping unit of an injection molding machine successfully performs an optimal search for the product variants with reasonable satisfaction of individual customer demands. The proposed method can automatically generate a configuration design with better alternatives for each product structure, and shortens the time needed to find the configuration of a product.
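
    The final matching step reduces to a distance computation, sketched below: score stored configuration cases against a weighted customer-demand vector using a weighted Euclidean distance and a derived similarity, then pick the closest case. The weights (standing in for FAHP-derived importances) and the case data are hypothetical.

    ```python
    # Hedged sketch: Euclidean-distance case matching for configuration design.
    import numpy as np

    demand = np.array([0.8, 0.6, 0.9, 0.4])        # normalized customer requirements
    weights = np.array([0.4, 0.2, 0.3, 0.1])       # e.g., FAHP-derived importances
    cases = {
        "clamping_unit_A": np.array([0.7, 0.5, 0.8, 0.6]),
        "clamping_unit_B": np.array([0.9, 0.7, 0.9, 0.3]),
        "clamping_unit_C": np.array([0.5, 0.9, 0.4, 0.8]),
    }

    def similarity(a, b, w):
        dist = np.sqrt(np.sum(w * (a - b) ** 2))   # weighted Euclidean distance
        return 1.0 / (1.0 + dist)                  # map distance into (0, 1]

    scores = {name: similarity(demand, vec, weights) for name, vec in cases.items()}
    best = max(scores, key=scores.get)
    print(f"best matching configuration: {best} (similarity {scores[best]:.3f})")
    ```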

  8. Bond strengths of custom cast and prefabricated posts luted with two cements.

    PubMed

    Aleisa, Khalil Ibrahim

    2011-02-01

    This in vitro study evaluated the bond strength of custom cast and prefabricated posts luted with resin or zinc phosphate cement into unobturated canals of extracted teeth. Forty-eight custom cast and prefabricated posts were placed into extracted single-rooted human teeth. The post-cavity preparation was 1.5 mm in diameter and 10 mm in depth. Specimens were randomly divided into four groups of 12 each. Two of the groups were luted with resin cement, while the other two were luted with zinc phosphate cement. Pull-out bond strength was evaluated using a universal testing machine. The Kolmogorov-Smirnov test was used to verify normal distribution. Data were statistically analyzed using two-way ANOVA and the Student t test (alpha = .05). For both luting agents, the prefabricated post groups exhibited significantly lower bond strength than the custom cast post groups (P = .0001). There were statistically significant differences in mean bond strength between the prefabricated posts luted with resin cement and those cemented with zinc phosphate cement (P = .002). There was no significant difference between the mean bond strength values of custom cast posts luted with resin cement or zinc phosphate cement. Custom cast posts showed significantly greater bond strength than prefabricated posts when luted with either resin or zinc phosphate cement. The type of cement had less influence on the retention of custom cast posts.

  9. Design and implementation of a UNIX based distributed computing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Love, J.S.; Michael, M.W.

    1994-12-31

    We have designed, implemented, and are running a corporate-wide distributed-processing batch queue on a large number of networked workstations using the UNIX® operating system. Atlas Wireline researchers and scientists have used the system for over a year. The large increase in available computer power has greatly reduced the time required for nuclear and electromagnetic tool modeling. Use of remote distributed computing has simultaneously reduced computation costs and increased usable computer time. The system integrates equipment from different manufacturers, using various CPU architectures, distinct operating system revisions, and even multiple processors per machine. Various differences between the machines have to be accounted for in the master scheduler, including shells, command sets, swap spaces, memory sizes, CPU sizes, and OS revision levels. Remote processing across a network must be performed in a manner that is seamless from the users' perspective. The system currently uses IBM RISC System/6000®, SPARCstation™, HP9000s700, HP9000s800, and DEC Alpha AXP™ machines. Each CPU in the network has its own speed rating, allowed working hours, and workload parameters. The system is designed so that all of the computers in the network can be optimally scheduled without adversely impacting the primary users of the machines. The increase in total usable computational capacity through distributed batch computing can change corporate computing strategy. The integration of disparate computer platforms eliminates the need to buy one type of computer for computations, another for graphics, and yet another for day-to-day operations. It might be possible, for example, to meet all research and engineering computing needs with existing networked computers.

  10. Computer-Assisted Virtual Planning for Surgical Guide Manufacturing and Internal Distractor Adaptation in the Management of Midface Hypoplasia in Cleft Patients.

    PubMed

    Scolozzi, Paolo; Herzog, Georges

    2017-07-01

    We are reporting the treatment of severe maxillary hypoplasia in two patients with unilateral cleft lip and palate by using a specific approach combining the Le Fort I distraction osteogenesis technique coupled with computer-aided design/computer-aided manufacturing customized surgical guides and internal distractors based on virtual computational planning. This technology allows for the transfer of the virtually planned reconstruction to the operating room by using custom patient-specific implants, surgical splints, surgical cutting guides, and surgical guides for plate or distractor adaptation.

  11. Computed Tomography For Internal Inspection Of Castings

    NASA Technical Reports Server (NTRS)

    Hanna, Timothy L.

    1995-01-01

    Computed tomography used to detect internal flaws in metal castings before machining and otherwise processing them into finished parts. Saves time and money otherwise wasted on machining and other processing of castings eventually rejected because of internal defects. Knowledge of internal defects gained by use of computed tomography also provides guidance for changes in foundry techniques, procedures, and equipment to minimize defects and reduce costs.

  12. Expanding the Scope of High-Performance Computing Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uram, Thomas D.; Papka, Michael E.

    The high-performance computing centers of the future will expand their roles as service providers, and as the machines scale up, so should the sizes of the communities they serve. National facilities must cultivate their users as much as they focus on operating machines reliably. The authors present five interrelated topic areas that are essential to expanding the value provided to those performing computational science.

  13. The quality improvement customers didn't want.

    PubMed

    Iacobucci, D

    1996-01-01

    Is investing in new technology always the right choice for a company and its customers? Allan Moulter, the CEO of Quality Care, isn't sure he wants to invest in the computerized reception system that consultant Jack Zadow has outlined for him. But in this HBR case study, the argument Zadow makes is impossible to ignore. Quality Care's rivals have invested in similar systems or are planning to do so. The new system promises to take care of routine busywork, freeing staff up for other interactions with patients. It seems as if the competition hasn't even cut staff and is counting on increased customer retention to pay for the investment. And yet, Quality Care's surveys of its own customers show that they prefer the human touch when checking in. How would customers feel if the first "person" they met when they came in the door turned out to be a machine? Moulter prides himself on his responsiveness to customers. And with 86% of Quality Care's customers either satisfied or completely satisfied, aren't things fine as they are? Has Moulter considered all the facets of his predicament? How will Quality Care's staff be affected by a decision one way or another? What about the costs of upgrading the system? Can Quality Care maintain its standing without going high-tech? Would customers rebel when confronted with the proposed reception area or would they appreciate the increased efficiency? Six experts weigh the costs and benefits of technology in a service industry.

  14. Comparative Study of Vibration Condition Indicators for Detecting Cracks in Spur Gears

    NASA Technical Reports Server (NTRS)

    Nanadic, Nenad; Ardis, Paul; Hood, Adrian; Thurston, Michael; Ghoshal, Anindya; Lewicki, David

    2013-01-01

    This paper reports the results of an empirical study on the tooth breakage failure mode in spur gears. Of four dominant gear failure modes (breakage, wear, pitting, and scoring), tooth breakage is the most precipitous and often leads to catastrophic failures. The cracks were initiated using a fatigue tester and a custom-designed single-tooth bending fixture to simulate overload conditions, instead of traditional notching using wire electrical discharge machining (EDM). The cracks were then propagated on a dynamometer. The ground truth of the damage level during crack propagation was monitored with crack-propagation sensors. Ten crack propagations were performed to compare the existing condition indicators (CIs) with respect to their ability to detect a crack, their ability to assess the damage level, and their sensitivity to sensor placement. Of more than thirty computed CIs, this paper compares five commonly used ones: raw RMS, FM0, NA4, raw kurtosis, and NP4. The performance of combined CIs was also investigated, using feature fusion based on linear regression, logistic regression, and boosted regression trees.
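
    Two of the five indicators have compact, standard definitions; FM0, NA4, and NP4 additionally require gear-mesh bookkeeping (shaft speed, tooth counts, synchronous averaging) that the abstract does not supply. A minimal Python sketch of the two simpler ones:

        # Raw RMS and raw kurtosis of a vibration record; these are the
        # standard definitions, not code from the paper.
        import numpy as np

        def raw_rms(x: np.ndarray) -> float:
            return float(np.sqrt(np.mean(np.square(x))))

        def raw_kurtosis(x: np.ndarray) -> float:
            # normalized fourth moment: about 3.0 for a healthy, roughly
            # Gaussian signal, rising as crack-induced impulses appear
            x = x - x.mean()
            return float(np.mean(x ** 4) / np.mean(x ** 2) ** 2)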

  15. A review of rapid prototyping techniques for tissue engineering purposes.

    PubMed

    Peltola, Sanna M; Melchels, Ferry P W; Grijpma, Dirk W; Kellomäki, Minna

    2008-01-01

    Rapid prototyping (RP) is a common name for several techniques, which read in data from computer-aided design (CAD) drawings and automatically manufacture three-dimensional objects layer-by-layer according to the virtual design. The utilization of RP in tissue engineering enables the production of three-dimensional scaffolds with complex geometries and very fine structures. Adding micro- and nanometer details into the scaffolds improves the mechanical properties of the scaffold and ensures better cell adhesion to the scaffold surface. Thus, tissue engineering constructs can be customized according to the data acquired from medical scans to match each patient's individual needs. In addition, RP enables control of the scaffold porosity, making it possible to fabricate applications with the desired structural integrity. Unfortunately, every RP process has its own unique disadvantages in building tissue engineering scaffolds. Hence, future research should focus on the development of RP machines designed specifically for the fabrication of tissue engineering scaffolds, although RP methods can already serve as a link between tissue and engineering.

  16. The New BaBar Data Reconstruction Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceseracciu, Antonio

    2003-06-02

    The BaBar experiment is characterized by extremely high luminosity, a complex detector, and a huge data volume, with increasing requirements each year. To fulfill these requirements a new control system has been designed and developed for the offline data reconstruction system. The new control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full advantage of OO design. The infrastructure is well isolated from the processing layer; it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is actively distributed, enforces the separation between different processing tiers by using different naming domains, and glues them together with dedicated brokers. It provides a powerful Finite State Machine framework to describe custom processing models in a simple regular language. This paper describes this new control system, currently in use at SLAC and Padova on ~450 CPUs organized in 12 farms.

  17. Process description language: an experiment in robust programming for manufacturing systems

    NASA Astrophysics Data System (ADS)

    Spooner, Natalie R.; Creak, G. Alan

    1998-10-01

    Maintaining stable, robust, and consistent software is difficult in the face of the increasing rate of change of customers' preferences, materials, manufacturing techniques, computer equipment, and other characteristic features of manufacturing systems. It is argued that software is commonly difficult to keep up to date because many of the implications of these changing features for software details are obscure. A possible solution is to use a software generation system in which the transformation of system properties into system software is made explicit. The proposed generation system stores the system properties, such as machine properties, product properties and information on manufacturing techniques, in databases. As a result this information, on which system control is based, can also be made available to other programs. In particular, artificial intelligence programs, such as fault diagnosis programs, can benefit from using the same information as the control system, rather than a separate database which must be developed and maintained separately to ensure consistency. Experience in developing a simplified model of such a system is presented.

  18. The Security of Machine Learning

    DTIC Science & Technology

    2008-04-24

    Machine learning has become a fundamental tool for computer security, since it can rapidly adapt to changing and complex situations. That adaptability is also a vulnerability: attackers can exploit machine learning systems. We present a taxonomy identifying and analyzing attacks against machine learning systems. We use our framework to survey and analyze the literature of attacks against machine learning systems. We also illustrate our taxonomy by showing ...

  19. [AERA. Dream machines and computing practices at the Mathematical Center].

    PubMed

    Alberts, Gerard; De Beer, Huub T

    2008-01-01

    Dream machines may be just as effective as the ones materialised. Their symbolic thrust can be quite powerful. The Amsterdam 'Mathematisch Centrum' (Mathematical Center), founded February 11, 1946, created a Computing Department in an effort to realise its goal of serving society. When Aad van Wijngaarden was appointed as head of the Computing Department, however, he claimed space for scientific research and computer construction, next to computing as a service. Still, the computing service following the five-stage style of Hartree's numerical analysis remained a dominant characteristic of the work of the Computing Department. The high level of ambition held by Aad van Wijngaarden led to ever-renewed projections of big automatic computers, symbolised by the never-built AERA. Even a machine that was actually constructed, the ARRA, which followed A.D. Booth's design of the ARC, never made it into real operation. It did serve Van Wijngaarden to bluff his way into the computer age by midsummer 1952. Not until January 1954 did the Computing Department have a working stored-program computer, which for reasons of policy went under the same name: ARRA. After just one other machine, the ARMAC, had been produced, a separate company, Electrologica, was set up for the manufacture of computers, which produced the rather successful X1 computer. The combination of ambition and absence of a working machine led to a high level of work on programming, way beyond the usual ideas of libraries of subroutines. Edsger W. Dijkstra in particular led the way to an emphasis on the duties of the programmer within the pattern of numerical analysis. Programs generating programs, known elsewhere as autocoding systems, were at the 'Mathematisch Centrum' called 'superprograms'. Practical examples were usually called a 'complex', in Dutch, where in English one might say 'system'. Historically, this is where software begins. Dekker's matrix complex, Dijkstra's interrupt system, Dijkstra and Zonneveld's ALGOL compiler--which for housekeeping contained 'the complex'--were actual examples of such superprograms. In 1960 this compiler gave the Mathematical Center a leading edge in the early development of software.

  20. Clock Agreement Among Parallel Supercomputer Nodes

    DOE Data Explorer

    Jones, Terry R.; Koenig, Gregory A.

    2014-04-30

    This dataset presents measurements that quantify the clock synchronization time-agreement characteristics among several high performance computers including the current world's most powerful machine for open science, the U.S. Department of Energy's Titan machine sited at Oak Ridge National Laboratory. These ultra-fast machines derive much of their computational capability from extreme node counts (over 18000 nodes in the case of the Titan machine). Time-agreement is commonly utilized by parallel programming applications and tools, distributed programming application and tools, and system software. Our time-agreement measurements detail the degree of time variance between nodes and how that variance changes over time. The dataset includes empirical measurements and the accompanying spreadsheets.
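
    The abstract does not detail how the measurements were collected, but the usual way to quantify time-agreement between two nodes is an NTP-style ping-pong exchange. The sketch below is an assumed illustration of that technique, not the dataset's actual collection code; it bounds the clock offset by half the round-trip time.

        import time

        def estimate_offset(remote_time_call):
            """remote_time_call() returns the peer node's clock (seconds)."""
            t0 = time.time()              # local send time
            t_remote = remote_time_call()
            t1 = time.time()              # local receive time
            rtt = t1 - t0
            # assuming symmetric network delay, the offset estimate is
            # accurate to +/- rtt/2
            offset = t_remote - (t0 + rtt / 2.0)
            return offset, rtt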

  1. Rosen's (M,R) system as an X-machine.

    PubMed

    Palmer, Michael L; Williams, Richard A; Gatherer, Derek

    2016-11-07

    Robert Rosen's (M,R) system is an abstract biological network architecture that is allegedly both irreducible to sub-models of its component states and non-computable on a Turing machine. (M,R) stands as an obstacle to both reductionist and mechanistic presentations of systems biology, principally due to its self-referential structure. If (M,R) has the properties claimed for it, computational systems biology will not be possible, or at best will be a science of approximate simulations rather than accurate models. Several attempts have been made, at both empirical and theoretical levels, to disprove this assertion by instantiating (M,R) in software architectures. So far, these efforts have been inconclusive. In this paper, we attempt to demonstrate why - by showing how both finite state machine and stream X-machine formal architectures fail to capture the self-referential requirements of (M,R). We then show that a solution may be found in communicating X-machines, which remove self-reference using parallel computation, and then synthesise such machine architectures with object-orientation to create a formal basis for future software instantiations of (M,R) systems. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. University of Maryland walking robot: A design project for undergraduate students

    NASA Technical Reports Server (NTRS)

    Olsen, Bob; Bielec, Jim; Hartsig, Dave; Oliva, Mani; Grotheer, Phil; Hekmat, Morad; Russell, David; Tavakoli, Hossein; Young, Gary; Nave, Tom

    1990-01-01

    The design and construction required that the walking robot machine be capable of completing a number of tasks including walking in a straight line, turning to change direction, and maneuvering over an obstacle such as a set of stairs. The machine consists of two sets of four telescoping legs that alternately support the entire structure. A gear-box and crank-arm assembly is connected to the leg sets to provide the power required for the translational motion of the machine. By retracting all eight legs, the robot comes to rest on a central Bigfoot support. Turning is accomplished by rotating the machine about this support. The machine can be controlled by using either a user operated remote tether or the on-board computer for the execution of control commands. Absolute encoders are attached to all motors (leg, main drive, and Bigfoot) to provide the control computer with information regarding the status of the motors (up-down motion, forward or reverse rotation). Long and short range infrared sensors provide the computer with feedback information regarding the machine's relative position to a series of stripes and reflectors. These infrared sensors simulate how the robot might sense and gain information about the environment of Mars.

  3. Advances in Machine Learning and Data Mining for Astronomy

    NASA Astrophysics Data System (ADS)

    Way, Michael J.; Scargle, Jeffrey D.; Ali, Kamal M.; Srivastava, Ashok N.

    2012-03-01

    Advances in Machine Learning and Data Mining for Astronomy documents numerous successful collaborations among computer scientists, statisticians, and astronomers who illustrate the application of state-of-the-art machine learning and data mining techniques in astronomy. Due to the massive amount and complexity of data in most scientific disciplines, the material discussed in this text transcends traditional boundaries between various areas in the sciences and computer science. The book's introductory part provides context to issues in the astronomical sciences that are also important to health, social, and physical sciences, particularly probabilistic and statistical aspects of classification and cluster analysis. The next part describes a number of astrophysics case studies that leverage a range of machine learning and data mining technologies. In the last part, developers of algorithms and practitioners of machine learning and data mining show how these tools and techniques are used in astronomical applications. With contributions from leading astronomers and computer scientists, this book is a practical guide to many of the most important developments in machine learning, data mining, and statistics. It explores how these advances can solve current and future problems in astronomy and looks at how they could lead to the creation of entirely new algorithms within the data mining community.

  4. Big Data: Next-Generation Machines for Big Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hack, James J.; Papka, Michael E.

    Addressing the scientific grand challenges identified by the US Department of Energy’s (DOE’s) Office of Science’s programs alone demands a total leadership-class computing capability of 150 to 400 Pflops by the end of this decade. The successors to three of the DOE’s most powerful leadership-class machines are set to arrive in 2017 and 2018: the products of the Collaboration of Oak Ridge, Argonne, and Livermore (CORAL) initiative, a national laboratory–industry design/build approach to engineering next-generation petascale computers for grand challenge science. These mission-critical machines will enable discoveries in key scientific fields such as energy, biotechnology, nanotechnology, materials science, and high-performance computing, and serve as a milestone on the path to deploying exascale computing capabilities.

  5. Machine learning based Intelligent cognitive network using fog computing

    NASA Astrophysics Data System (ADS)

    Lu, Jingyang; Li, Lun; Chen, Genshe; Shen, Dan; Pham, Khanh; Blasch, Erik

    2017-05-01

    In this paper, a Cognitive Radio Network (CRN) based on artificial intelligence is proposed to distribute the limited radio spectrum resources more efficiently. The CRN framework can analyze the time-sensitive signal data close to the signal source using fog computing with different types of machine learning techniques. Depending on the computational capabilities of the fog nodes, different features and machine learning techniques are chosen to optimize spectrum allocation. Also, the computing nodes send a periodic signal summary, which is much smaller than the original signal, to the cloud, so that the overall spectrum resource allocation strategies are dynamically updated. By applying fog computing, the system is more adaptive to the local environment and robust to spectrum changes. As most of the signal data is processed at the fog level, system security is further strengthened because the burden on the communications network is reduced.

  6. Multiaxis Computer Numerical Control Internship Report

    ERIC Educational Resources Information Center

    Rouse, Sharon M.

    2012-01-01

    (Purpose) The purpose of this paper was to examine the issues associated with bringing new technology into the classroom, in particular, the vocational/technical classroom. (Methodology) A new Haas 5 axis vertical Computer Numerical Control machining center was purchased to update the CNC machining curriculum at a community college and the process…

  7. Analysis in Motion Initiative – Human Machine Intelligence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaha, Leslie

    As computers and machines become more pervasive in our everyday lives, we are looking for ways for humans and machines to work more intelligently together. How can we help machines understand their users so the team can do smarter things together? The Analysis in Motion Initiative is advancing the science of human machine intelligence — creating human-machine teams that work better together to make correct, useful, and timely interpretations of data.

  8. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.

    PubMed

    Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei

    2017-09-21

    In order to utilize the distributed characteristic of sensors, distributed machine learning has become the mainstream approach, but the different computing capability of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a reasonable parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose the Dynamic Finite Fault Tolerance (DFFT). Based on the DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named Dynamic Synchronous Parallel Strategy (DSP), which uses the performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and avoids the situation that the model training is disturbed by any tasks unrelated to the sensors.
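
    The core idea, letting fast workers run ahead of slow ones only up to a bound, can be sketched compactly. The class below is an illustrative bounded-staleness gate in Python; the names and the fixed blocking rule are assumptions, not the paper's DSP algorithm, which adjusts synchronization dynamically from a performance monitoring model.

        import threading

        class StalenessGate:
            """Block a worker that runs too far ahead of the slowest peer."""
            def __init__(self, n_workers: int, max_staleness: int):
                self.clock = [0] * n_workers   # iterations finished per worker
                self.bound = max_staleness
                self.cv = threading.Condition()

            def finished_iteration(self, worker: int):
                with self.cv:
                    self.clock[worker] += 1
                    self.cv.notify_all()
                    # wait while this worker is more than `bound` steps ahead
                    while self.clock[worker] - min(self.clock) > self.bound:
                        self.cv.wait()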

  9. Cloud Computing

    DTIC Science & Technology

    2009-11-12

    Slide fragments recovered from this briefing outline the standard cloud service taxonomy: cloud computing types are grouped by type of capability and by access, as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). Software-as-a-Service (SaaS) provides application-specific capabilities, e.g., a service that provides customer management; it allows organizations ... a model of software deployment in which a provider licenses an application to customers for use as a service on ...

  10. Development of Semi-Automatic Lathe by using Intelligent Soft Computing Technique

    NASA Astrophysics Data System (ADS)

    Sakthi, S.; Niresh, J.; Vignesh, K.; Anand Raj, G.

    2018-03-01

    This paper discusses the enhancement of a conventional lathe machine into a semi-automated lathe machine by implementing a soft computing method. In the present scenario, the lathe plays a vital role in the engineering division of the manufacturing industry. While manual lathe machines are economical, their accuracy and efficiency are not up to the mark. On the other hand, CNC machines provide the desired accuracy and efficiency, but require a huge capital investment. In order to overcome this situation, a semi-automated approach to the conventional lathe machine is developed by fitting stepper motors to the horizontal and vertical drives, controlled by an Arduino UNO microcontroller. Based on the input parameters of the lathe operation, the Arduino code is generated and transferred to the UNO board. Thus, upgrading from manual to semi-automatic lathe machines can significantly increase accuracy and efficiency while keeping investment costs in check, and consequently provide a much-needed boost to the manufacturing industry.
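
    The PC-side control flow might look like the following Python sketch: convert cutting parameters into step counts and hand them to the Arduino UNO over a serial link. The command format, steps-per-mm figure, and port name are illustrative assumptions, not details taken from the paper.

        import serial  # pyserial

        STEPS_PER_MM = 200  # depends on the stepper, microstepping, and leadscrew

        def carriage_move(port: str, z_mm: float, x_mm: float) -> str:
            z_steps = int(z_mm * STEPS_PER_MM)  # horizontal (carriage) drive
            x_steps = int(x_mm * STEPS_PER_MM)  # vertical (cross-slide) drive
            with serial.Serial(port, 9600, timeout=2) as link:
                link.write(f"MOVE Z{z_steps} X{x_steps}\n".encode("ascii"))
                # read back a one-line acknowledgement from the UNO sketch
                return link.readline().decode("ascii").strip()

        # e.g. carriage_move("/dev/ttyACM0", z_mm=25.0, x_mm=-0.5)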

  11. Comparative adoption of cone beam computed tomography and panoramic radiography machines across Australia.

    PubMed

    Zhang, A; Critchley, S; Monsour, P A

    2016-12-01

    The aim of the present study was to assess the current adoption of cone beam computed tomography (CBCT) and panoramic radiography (PR) machines across Australia. Information regarding registered CBCT and PR machines was obtained from radiation regulators across Australia. The number of X-ray machines was correlated with the population size, the number of dentists, and the gross state product (GSP) per capita, to determine the best fitting regression model(s). In 2014, there were 232 CBCT and 1681 PR machines registered in Australia. Based on absolute counts, Queensland had the largest number of CBCT and PR machines whereas the Northern Territory had the smallest number. However, when based on accessibility in terms of the population size and the number of dentists, the Australian Capital Territory had the most CBCT machines and Western Australia had the most PR machines. The number of X-ray machines correlated strongly with both the population size and the number of dentists, but not with the GSP per capita. In 2014, the ratio of PR to CBCT machines was approximately 7:1. Projected increases in either the population size or the number of dentists could positively impact on the adoption of PR and CBCT machines in Australia. © 2016 Australian Dental Association.
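
    The reported correlations lend themselves to a simple regression check. The sketch below shows the shape of such an analysis in Python; the per-jurisdiction counts are illustrative placeholders, not the study's data.

        import numpy as np
        from scipy import stats

        # per-jurisdiction figures (placeholders, not the study's data)
        population_millions = np.array([7.5, 5.9, 4.8, 2.6, 1.7, 0.5, 0.4, 0.2])
        cbct_machines = np.array([70, 60, 45, 25, 15, 8, 6, 3])

        r, p = stats.pearsonr(population_millions, cbct_machines)
        slope, intercept = np.polyfit(population_millions, cbct_machines, 1)
        print(f"r = {r:.2f} (p = {p:.4f}); "
              f"about {slope:.1f} CBCT machines per million people")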

  12. Programming in HAL/S

    NASA Technical Reports Server (NTRS)

    Ryer, M. J.

    1978-01-01

    HAL/S is a computer programming language; it is a representation for algorithms which can be interpreted by either a person or a computer. HAL/S compilers transform blocks of HAL/S code into machine language which can then be directly executed by a computer. When the machine language is executed, the algorithm specified by the HAL/S code (source) is performed. This document describes how to read and write HAL/S source.

  13. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence, the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.

  14. Virtual Manufacturing Techniques Designed and Applied to Manufacturing Activities in the Manufacturing Integration and Technology Branch

    NASA Technical Reports Server (NTRS)

    Shearrow, Charles A.

    1999-01-01

    One of the identified goals of EM3 is to implement virtual manufacturing by the time the year 2000 has ended. To realize this goal of a true virtual manufacturing enterprise, the initial development of a machinability database and the infrastructure must be completed. This will consist of the containment of the existing EM-NET problems and the development of machine, tooling, and common materials databases. To integrate the virtual manufacturing enterprise with normal day-to-day operations, the development of a parallel virtual manufacturing machinability database, virtual manufacturing database, virtual manufacturing paradigm, implementation/integration procedure, and testable verification models must be completed. Common and virtual machinability databases will include the four distinct areas of machine tools, available tooling, common machine tool loads, and a materials database. The machine tools database will include the machine envelope, special machine attachments, tooling capacity, location within NASA-JSC or with a contractor, and availability/scheduling. The tooling database will include available standard tooling, custom in-house tooling, tool properties, and availability. The common materials database will include material thickness ranges, strengths, types, and their availability. The virtual manufacturing databases will consist of virtual machines and virtual tooling directly related to the common and machinability databases. The items to be completed are the design and construction of the machinability databases, the virtual manufacturing paradigm for NASA-JSC, an implementation timeline, a VNC model of one bridge mill, and the troubleshooting of existing software and hardware problems with EN4NET. The final step of this virtual manufacturing project will be to integrate other production sites into the databases, bringing JSC's EM3 into a position to become a clearinghouse for NASA's digital manufacturing needs and creating a true virtual manufacturing enterprise.

  15. Single instruction computer architecture and its application in image processing

    NASA Astrophysics Data System (ADS)

    Laplante, Phillip A.

    1992-03-01

    A single processing computer system using only half-adder circuits is described. In addition, it is shown that only a single hard-wired instruction is needed in the control unit to obtain a complete instruction set for this general purpose computer. Such a system has several advantages. First, it is intrinsically a RISC machine--in fact, the 'ultimate RISC' machine. Second, because only a single type of logic element is employed, the entire computer system can be easily realized on a single, highly integrated chip. Finally, due to the homogeneous nature of the computer's logic elements, the computer has possible implementations as an optical or chemical machine. This in turn suggests possible paradigms for neural computing and artificial intelligence. After showing how we can implement a full-adder, min, max and other operations using the half-adder, we use an array of such full-adders to implement the dilation operation for two black and white images. Next we implement the erosion operation of two black and white images using a relative complement function and the properties of erosion and dilation. This approach was inspired by papers by van der Poel, in which a single instruction is used to furnish a complete set of general purpose instructions, and by Bohm and Jacopini, where it is shown that any problem can be solved using a Turing machine with one entry and one exit.
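
    The erosion-via-complement construction mentioned above is easy to make concrete. The following sketch (plain NumPy rather than half-adder hardware) implements binary dilation directly and then derives erosion from the duality between the two operations:

        import numpy as np

        def dilate(img: np.ndarray, se: np.ndarray) -> np.ndarray:
            """Binary dilation of a 0/1 image by a 0/1 structuring element."""
            H, W = img.shape
            h, w = se.shape
            padded = np.pad(img, ((h // 2, h // 2), (w // 2, w // 2)))
            out = np.zeros_like(img)
            for i in range(H):
                for j in range(W):
                    out[i, j] = np.any(padded[i:i + h, j:j + w] & se)
            return out

        def erode(img: np.ndarray, se: np.ndarray) -> np.ndarray:
            # duality: erosion is the complement of dilating the complement
            # with the reflected structuring element
            return 1 - dilate(1 - img, se[::-1, ::-1])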

  16. Application of Metamorphic Testing to Supervised Classifiers

    PubMed Central

    Xie, Xiaoyuan; Ho, Joshua; Kaiser, Gail; Xu, Baowen; Chen, Tsong Yueh

    2010-01-01

    Many applications in the field of scientific computing - such as computational biology, computational linguistics, and others - depend on Machine Learning algorithms to provide important core functionality to support solutions in the particular problem domains. However, it is difficult to test such applications because often there is no “test oracle” to indicate what the correct output should be for arbitrary input. To help address the quality of such software, in this paper we present a technique for testing the implementations of supervised machine learning classification algorithms on which such scientific computing software depends. Our technique is based on an approach called “metamorphic testing”, which has been shown to be effective in such cases. More importantly, we demonstrate that our technique not only serves the purpose of verification, but also can be applied in validation. In addition to presenting our technique, we describe a case study we performed on a real-world machine learning application framework, and discuss how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. We also discuss how our findings can be of use to other areas outside scientific computing, as well. PMID:21243103
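
    A typical metamorphic relation from this line of work is that permuting the order of the training samples must not change a classifier's predictions; even without an oracle for the correct labels, a violated relation exposes a defect. A minimal sketch using scikit-learn as a stand-in for the framework studied in the paper:

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def mr_training_permutation(X, y, X_test, seed=0):
            """X, y, X_test are NumPy arrays; raises if the relation fails."""
            base = KNeighborsClassifier(n_neighbors=3).fit(X, y).predict(X_test)
            idx = np.random.default_rng(seed).permutation(len(y))
            follow = (KNeighborsClassifier(n_neighbors=3)
                      .fit(X[idx], y[idx]).predict(X_test))
            assert np.array_equal(base, follow), "metamorphic relation violated"

    Note that distance ties can legitimately break this relation for kNN; surfacing exactly that kind of subtle, order-dependent behavior is the point of the technique.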

  17. Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Lin, C. T.

    1989-01-01

    The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme on the mapping of these algorithms to a reconfigurable parallel architecture is presented. Based on the characteristics including type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirement, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well-suited to be implemented on a single-instruction-stream multiple-data-stream (SIMD) computer with reconfigurable interconnection network. The model of a reconfigurable dual network SIMD machine with internal direct feedback is introduced. A systematic procedure to map these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.

  18. Lubricant Coating Process

    NASA Technical Reports Server (NTRS)

    1989-01-01

    "Peen Plating," a NASA developed process for applying molybdenum disulfide, is the key element of Techniblast Co.'s SURFGUARD process for applying high strength solid lubricants. The process requires two machines -- one for cleaning and one for coating. The cleaning step allows the coating to be bonded directly to the substrate to provide a better "anchor." The coating machine applies a half a micron thick coating. Then, a blast gun, using various pressures to vary peening intensities for different applications, fires high velocity "media" -- peening hammers -- ranging from plastic pellets to steel shot. Techniblast was assisted by Rural Enterprises, Inc. Coating service can be performed at either Techniblast's or a customer's facility.

  19. Statistical Capability Study of a Helical Grinding Machine Producing Screw Rotors

    NASA Astrophysics Data System (ADS)

    Holmes, C. S.; Headley, M.; Hart, P. W.

    2017-08-01

    Screw compressors depend for their efficiency and reliability on the accuracy of the rotors, and therefore on the machinery used in their production. The machinery has evolved over more than half a century in response to customer demands for production accuracy, efficiency, and flexibility, and is now at a high level on all three criteria. Production equipment and processes must be capable of maintaining accuracy over a production run, and this must be assessed statistically under strictly controlled conditions. This paper gives numerical data from such a study of an innovative machine tool and shows that it is possible to meet the demanding statistical capability requirements.
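
    Capability in this setting is conventionally summarized by indices such as Cp and Cpk computed against the drawing tolerances. The abstract does not give the paper's metrics or limits, so the sketch below shows only the standard calculation, with hypothetical tolerances:

        import numpy as np

        def cp_cpk(measurements, lsl: float, usl: float):
            x = np.asarray(measurements, dtype=float)
            mu, sigma = x.mean(), x.std(ddof=1)
            cp = (usl - lsl) / (6 * sigma)               # potential capability
            cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # centered capability
            return cp, cpk

        # e.g. rotor lead errors in micrometres against +/-5 um drawing limits:
        # cp, cpk = cp_cpk(lead_errors_um, lsl=-5.0, usl=5.0)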

  20. Finite element computation on nearest neighbor connected machines

    NASA Technical Reports Server (NTRS)

    Mcaulay, A. D.

    1984-01-01

    Research aimed at faster, more cost effective parallel machines and algorithms for improving designer productivity with finite element computations is discussed. A set of 8 boards, containing 4 nearest neighbor connected arrays of commercially available floating point chips and substantial memory, are inserted into a commercially available machine. One-tenth Mflop (64 bit operation) processors provide an 89% efficiency when solving the equations arising in a finite element problem for a single variable regular grid of size 40 by 40 by 40. This is approximately 15 to 20 times faster than a much more expensive machine such as a VAX 11/780 used in double precision. The efficiency falls off as faster or more processors are envisaged because communication times become dominant. A novel successive overrelaxation algorithm which uses cyclic reduction in order to permit data transfer and computation to overlap in time is proposed.
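
    For reference, plain successive overrelaxation on a 2D grid looks like the sketch below; the paper's contribution, overlapping data transfer with computation via cyclic reduction on the nearest-neighbor arrays, is hardware-specific and not reproduced here.

        import numpy as np

        def sor(u: np.ndarray, f: np.ndarray, h: float,
                omega: float = 1.8, sweeps: int = 100) -> np.ndarray:
            """u holds fixed boundary values; f is the right-hand side."""
            n, m = u.shape
            for _ in range(sweeps):
                for i in range(1, n - 1):
                    for j in range(1, m - 1):
                        gs = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                     u[i, j - 1] + u[i, j + 1] -
                                     h * h * f[i, j])
                        u[i, j] += omega * (gs - u[i, j])
            return u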

  1. A mechanical Turing machine: blueprint for a biomolecular computer

    PubMed Central

    Shapiro, Ehud

    2012-01-01

    We describe a working mechanical device that embodies the theoretical computing machine of Alan Turing, and as such is a universal programmable computer. The device operates on three-dimensional building blocks by applying mechanical analogues of polymer elongation, cleavage and ligation, movement along a polymer, and control by molecular recognition unleashing allosteric conformational changes. Logically, the device is not more complicated than biomolecular machines of the living cell, and all its operations are part of the standard repertoire of these machines; hence, a biomolecular embodiment of the device is not infeasible. If implemented, such a biomolecular device may operate in vivo, interacting with its biochemical environment in a program-controlled manner. In particular, it may ‘compute’ synthetic biopolymers and release them into its environment in response to input from the environment, a capability that may have broad pharmaceutical and biological applications. PMID:22649583
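
    To make the notion of a universal programmable computer concrete, here is a minimal Turing machine interpreter in Python; the transition table (a unary incrementer) is only an illustration, not the machine described in the paper.

        def run_tm(tape, rules, state="S", blank="_", head=0, max_steps=10_000):
            cells = dict(enumerate(tape))
            for _ in range(max_steps):
                if state == "HALT":
                    break
                sym = cells.get(head, blank)
                write, move, state = rules[(state, sym)]
                cells[head] = write
                head += {"L": -1, "R": 1}[move]
            return "".join(cells[k] for k in sorted(cells))

        rules = {("S", "1"): ("1", "R", "S"),     # scan over the unary digits
                 ("S", "_"): ("1", "R", "HALT")}  # append one digit and halt
        print(run_tm("111", rules))               # -> "1111"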

  2. Implementation of an ADI method on parallel computers

    NASA Technical Reports Server (NTRS)

    Fatoohi, Raad A.; Grosch, Chester E.

    1987-01-01

    The implementation of an ADI method for solving the diffusion equation on three parallel/vector computers is discussed. The computers were chosen so as to encompass a variety of architectures. They are: the MPP, an SIMD machine with 16K bit-serial processors; FLEX/32, an MIMD machine with 20 processors; and CRAY/2, an MIMD machine with four vector processors. The Gaussian elimination algorithm is used to solve a set of tridiagonal systems on the FLEX/32 and CRAY/2 while the cyclic elimination algorithm is used to solve these systems on the MPP. The implementation of the method is discussed in relation to these architectures and measures of the performance on each machine are given. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.
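
    The tridiagonal solves at the core of each ADI sweep are standard. Below is a minimal Python sketch of the serial Gaussian elimination variant (the Thomas algorithm) of the kind used on the FLEX/32 and CRAY/2; the cyclic elimination algorithm used on the MPP trades extra arithmetic for full parallelism and is not shown.

        import numpy as np

        def thomas(a, b, c, d):
            """Solve Ax = d for tridiagonal A with sub-, main-, and
            super-diagonals a, b, c (a[0] and c[-1] are unused)."""
            n = len(b)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):
                denom = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / denom if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x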

  4. Machine learning methods in chemoinformatics

    PubMed Central

    Mitchell, John B O

    2014-01-01

    Machine learning algorithms are generally developed in computer science or adjacent disciplines and find their way into chemical modeling by a process of diffusion. Though particular machine learning methods are popular in chemoinformatics and quantitative structure–activity relationships (QSAR), many others exist in the technical literature. This discussion is methods-based and focused on some algorithms that chemoinformatics researchers frequently use. It makes no claim to be exhaustive. We concentrate on methods for supervised learning, predicting the unknown property values of a test set of instances, usually molecules, based on the known values for a training set. Particularly relevant approaches include Artificial Neural Networks, Random Forest, Support Vector Machine, k-Nearest Neighbors and naïve Bayes classifiers. How to cite this article: WIREs Comput Mol Sci 2014, 4:468–481. doi:10.1002/wcms.1183 PMID:25285160
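
    The five families named above map directly onto scikit-learn estimators, so a cross-validated comparison on a descriptor matrix takes only a few lines. The sketch below uses synthetic data as a stand-in for molecular descriptors:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC

        # synthetic stand-in for a fingerprint/descriptor matrix and labels
        X, y = make_classification(n_samples=300, n_features=20, random_state=0)
        models = {
            "ANN": MLPClassifier(max_iter=1000),
            "Random Forest": RandomForestClassifier(n_estimators=500),
            "SVM": SVC(kernel="rbf"),
            "kNN": KNeighborsClassifier(n_neighbors=5),
            "naive Bayes": GaussianNB(),
        }
        for name, model in models.items():
            scores = cross_val_score(model, X, y, cv=5)
            print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")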

  5. The Impact of Machine Translation and Computer-aided Translation on Translators

    NASA Astrophysics Data System (ADS)

    Peng, Hao

    2018-03-01

    Under the context of globalization, communication between countries and cultures is becoming increasingly frequent, making it imperative to use techniques that help translation. This paper explores the influence of machine translation (MT) and computer-aided translation (CAT) on translators. Following an introduction to the development of MT and CAT, it describes the technologies available to translators, analyzes the demands that translation practice places on the design of CAT tools, and considers how the design of such tools can be optimized and how workable they are in practice. The findings underline the advantages and disadvantages of MT and CAT tools, and the serviceability and future development of MT and CAT technologies. Finally, this thesis probes into the impact of these new technologies on translators, in the hope that more translators and translation researchers can learn to use such tools to improve their productivity.

  6. Comparison of custom to standard TKA instrumentation with computed tomography.

    PubMed

    Ng, Vincent Y; Arnott, Lindsay; Li, Jia; Hopkins, Ronald; Lewis, Jamie; Sutphen, Sean; Nicholson, Lisa; Reader, Douglas; McShane, Michael A

    2014-08-01

    There is conflicting evidence whether custom instrumentation (CI) for total knee arthroplasty (TKA) improves component position compared to standard instrumentation. Studies have relied on long-limb radiographs limited to two-dimensional (2D) analysis and subject to rotational inaccuracy. We used postoperative computed tomography (CT) to evaluate preoperative three-dimensional templating and CI to facilitate accurate and efficient implantation of TKA femoral and tibial components. We prospectively evaluated a single-surgeon cohort of 78 TKA patients (51 custom, 27 standard) with postoperative CT scans using 3D reconstruction and contour-matching technology to preoperative imaging. Component alignment was measured in coronal, sagittal and axial planes. Preoperative templating for custom instrumentation was 87 and 79 % accurate for femoral and tibial component size. All custom components were within 1 size except for the tibial component in one patient (2 sizes). Tourniquet time was 5 min longer for custom (30 min) than standard (25 min). In no case was custom instrumentation aborted in favour of standard instrumentation nor was original alignment of custom instrumentation required to be adjusted intraoperatively. There were more outliers greater than 2° from intended alignment with standard instrumentation than custom for both components in all three planes. Custom instrumentation was more accurate in component position for tibial coronal alignment (custom: 1.5° ± 1.2°; standard: 3° ± 1.9°; p = 0.0001) and both tibial (custom: 1.4° ± 1.1°; standard: 16.9° ± 6.8°; p < 0.0001) and femoral (custom: 1.2° ± 0.9°; standard: 3.1° ± 2.1°; p < 0.0001) rotational alignment, and was similar to standard instrumentation in other measurements. When evaluated with CT, custom instrumentation performs similarly to or better than standard instrumentation in component alignment and accurately templates component size. Tourniquet time was mildly increased for custom compared to standard.

  7. 26 CFR 52.4682-3 - Imported taxable products.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... is determined by reference to customs law. If the actual date is unknown, the importer may use any ... Table excerpt (product; HTS subheading; ODC used; ODC weight): digital automatic data processing machines w/cathode ray tube, not included in subheading 8471.20 ... 1913; digital processing units w/entry value not > $100K: 8471.91, CFC-113, 0.4980; digital processing units w/entry value > $100K: 8471.91, CFC-113, 27.6667 ...

  8. The CCRI Electric Boat Program: A Partnership for Progress in Economic Development.

    ERIC Educational Resources Information Center

    Liston, Edward J.

    The Community College of Rhode Island (CCRI) has made a strong commitment to building partnerships with business and industry. CCRI's first customized training program was developed in 1982 with the National Tooling and Machine Association (NTMA), and was designed to enable apprentice machinists to receive the classroom training required to earn a…

  9. 77 FR 51067 - Investigations Regarding Eligibility To Apply for Worker Adjustment Assistance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-23

    ...: Brockport, PA, 08/06/12, 08/03/12 (Union). 81863: Industrial Machine & Industrial Sales (Workers), Farmington, MO, 08/07/12, 08/07/... 81865: Sihi Pumps (Workers), Grand Island, NY, 08/07/12, 07/31/12. 81866: Acme... Custom Technology, Inc., Engineering Design and Drafting Department, Windsor, CT, 08/09/12, 08/09/12.

  10. Education and the Role of the Educator in the Future

    ERIC Educational Resources Information Center

    Jukes, Ian; McCain, Ted; Crockett, Lee

    2011-01-01

    Exponential change is making our education system obsolete. New inventions will replace not only textbooks, but everything we currently think of as school. Teachers will need to rethink their roles as advances in teaching machines allow lessons to be customized to every child wherever that child might be. What is taught also will change because…

  11. Review: Polymeric-Based 3D Printing for Tissue Engineering.

    PubMed

    Wu, Geng-Hsi; Hsu, Shan-Hui

    Three-dimensional (3D) printing, also referred to as additive manufacturing, is a technology that allows for customized fabrication through computer-aided design. 3D printing has many advantages in the fabrication of tissue engineering scaffolds, including fast fabrication, high precision, and customized production. Suitable scaffolds can be designed and custom-made based on medical images such as those obtained from computed tomography. Many 3D printing methods have been employed for tissue engineering. There are advantages and limitations for each method. Future areas of interest and progress are the development of new 3D printing platforms, scaffold design software, and materials for tissue engineering applications.

  12. Application programs written by using customizing tools of a computer-aided design system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, X.; Huang, R.; Juricic, D.

    1995-12-31

    Customizing tools of Computer-Aided Design Systems have been developed to such a degree as to become equivalent to powerful higher-level programming languages that are especially suitable for graphics applications. Two examples of application programs written by using AutoCAD's customizing tools are given in some detail to illustrate their power. One tool uses AutoLISP list-processing language to develop an application program that produces four views of a given solid model. The other uses AutoCAD Developmental System, based on program modules written in C, to produce an application program that renders a freehand sketch from a given CAD drawing.

  13. Machining fixture layout optimization using particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Dou, Jianping; Wang, Xingsong; Wang, Lei

    2011-05-01

    Optimization of fixture layout (locator and clamp locations) is critical to reducing geometric error of the workpiece during the machining process. In this paper, the application of the particle swarm optimization (PSO) algorithm is presented to minimize the workpiece deformation in the machining region. A PSO-based approach is developed to optimize the fixture layout by integrating the ANSYS parametric design language (APDL) of finite element analysis to compute the objective function for a given fixture layout. A particle library approach is used to decrease the total computation time. A computational experiment on a 2D case shows that the number of function evaluations is decreased by about 96%. A case study illustrates the effectiveness and efficiency of the PSO-based optimization approach.
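
    A bare-bones PSO loop of the kind applied here is shown below; in the paper each objective evaluation is an APDL finite element run returning workpiece deformation, which the sketch replaces with a stand-in function.

        import numpy as np

        def pso(objective, dim, n_particles=20, iters=100, bounds=(0.0, 1.0)):
            rng = np.random.default_rng(1)
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n_particles, dim))   # candidate layouts
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_f = np.array([objective(p) for p in x])
            g = pbest[pbest_f.argmin()].copy()            # global best layout
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([objective(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                g = pbest[pbest_f.argmin()].copy()
            return g, pbest_f.min()

        def surrogate_deformation(layout):
            # stand-in for an APDL run returning maximum deformation
            return float(np.sum((layout - 0.3) ** 2))

        best_layout, best_f = pso(surrogate_deformation, dim=4)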

  14. Fifty years of computer analysis in chest imaging: rule-based, machine learning, deep learning.

    PubMed

    van Ginneken, Bram

    2017-03-01

    Half a century ago, the term "computer-aided diagnosis" (CAD) was introduced in the scientific literature. Pulmonary imaging, with chest radiography and computed tomography, has always been one of the focus areas in this field. In this study, I describe how machine learning became the dominant technology for tackling CAD in the lungs, generally producing better results than do classical rule-based approaches, and how the field is now rapidly changing: in the last few years, we have seen how even better results can be obtained with deep learning. The key differences among rule-based processing, machine learning, and deep learning are summarized and illustrated for various applications of CAD in the chest.

  15. Computational dynamics of soft machines

    NASA Astrophysics Data System (ADS)

    Hu, Haiyan; Tian, Qiang; Liu, Cheng

    2017-06-01

    Soft machine refers to a kind of mechanical system made of soft materials to complete sophisticated missions, such as handling a fragile object and crawling along a narrow tunnel corner, under low cost control and actuation. Hence, soft machines have raised great challenges to computational dynamics. In this review article, recent studies of the authors on the dynamic modeling, numerical simulation, and experimental validation of soft machines are summarized in the framework of multibody system dynamics. The dynamic modeling approaches are presented first for the geometric nonlinearities of coupled overall motions and large deformations of a soft component, the physical nonlinearities of a soft component made of hyperelastic or elastoplastic materials, and the frictional contacts/impacts of soft components, respectively. Then the computation approach is outlined for the dynamic simulation of soft machines governed by a set of differential-algebraic equations of very high dimensions, with an emphasis on the efficient computations of the nonlinear elastic force vector of finite elements. The validations of the proposed approaches are given via three case studies, including the locomotion of a soft quadrupedal robot, the spinning deployment of a solar sail of a spacecraft, and the deployment of a mesh reflector of a satellite antenna, as well as the corresponding experimental studies. Finally, some remarks are made for future studies.

  16. Evaluation of a patient specific femoral alignment guide for hip resurfacing.

    PubMed

    Olsen, Michael; Naudie, Douglas D; Edwards, Max R; Sellan, Michael E; McCalden, Richard W; Schemitsch, Emil H

    2014-03-01

    A novel alternative to conventional instrumentation for femoral component insertion in hip resurfacing is a patient specific, computed tomography based femoral alignment guide. A benchside study using cadaveric femora was performed comparing a custom alignment guide to conventional instrumentation and computer navigation. A clinical series of twenty-five hip resurfacings utilizing a custom alignment guide was conducted by three surgeons experienced in hip resurfacing. Using cadaveric femora, the custom guide was comparable to conventional instrumentation with computer navigation proving superior to both. Clinical femoral component alignment accuracy was 3.7° and measured within ± 5° of plan in 20 of 24 cases. Patient specific femoral alignment guides provide a satisfactory level of accuracy and may be a better alternative to conventional instrumentation for initial femoral guidewire placement in hip resurfacing. Crown Copyright © 2014. All rights reserved.

  17. Automated Reporting of DXA Studies Using a Custom-Built Computer Program.

    PubMed

    England, Joseph R; Colletti, Patrick M

    2018-06-01

    Dual-energy x-ray absorptiometry (DXA) scans are a critical population health tool and relatively simple to interpret, but they can be time-consuming to report, often requiring manual transfer of bone mineral density and associated statistics into commercially available dictation systems. We describe here a custom-built computer program for automated reporting of DXA scans using Pydicom, an open-source package built in the Python computer language, and regular expressions to mine DICOM tags for patient information and bone mineral density statistics. This program, easy to emulate by any novice computer programmer, has doubled our efficiency at reporting DXA scans and has eliminated dictation errors.
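
    In the spirit of the program described, the sketch below reads a DXA DICOM file with pydicom and regex-mines the tag dump for bone mineral density statistics. The file name, the regular expressions, and the assumption that the vendor's results appear in the textual tag dump are all illustrative; real DXA scanners store results in vendor-specific tags.

        import re
        import pydicom

        ds = pydicom.dcmread("dxa_study.dcm")  # hypothetical file name
        patient = f"{ds.PatientName}, ID {ds.PatientID}"

        text = str(ds)  # full tag dump as text, mined with regular expressions
        bmd = re.search(r"Total\s+BMD[:\s]+([\d.]+)", text, re.I)
        tsc = re.search(r"T-?score[:\s]+(-?[\d.]+)", text, re.I)

        report = (f"DXA for {patient}: total BMD "
                  f"{bmd.group(1) if bmd else 'not found'} g/cm2, "
                  f"T-score {tsc.group(1) if tsc else 'not found'}.")
        print(report)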

  18. Custom Sky-Image Mosaics from NASA's Information Power Grid

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Collier, James; Craymer, Loring; Curkendall, David

    2005-01-01

    yourSkyG is the second generation of the software described in yourSky: Custom Sky-Image Mosaics via the Internet (NPO-30556), NASA Tech Briefs, Vol. 27, No. 6 (June 2003), page 45. Like its predecessor, yourSkyG supplies custom astronomical image mosaics of sky regions specified by requesters using client computers connected to the Internet. Whereas yourSky constructs mosaics on a local multiprocessor system, yourSkyG performs the computations on NASA's Information Power Grid (IPG), which is capable of performing much larger mosaicking tasks. (The IPG is a high-performance computation and data grid that integrates geographically distributed computers, databases, and instruments.) A user of yourSkyG can specify parameters describing a mosaic to be constructed. yourSkyG then constructs the mosaic on the IPG and makes it available for downloading by the user. The complexities of determining which input images are required to construct a mosaic, retrieving the required input images from remote sky-survey archives, uploading the images to the computers on the IPG, performing the computations remotely on the Grid, and downloading the resulting mosaic from the Grid are all transparent to the user.

  19. Intelligence-Augmented Rat Cyborgs in Maze Solving.

    PubMed

    Yu, Yipeng; Pan, Gang; Gong, Yongyue; Xu, Kedi; Zheng, Nenggan; Hua, Weidong; Zheng, Xiaoxiang; Wu, Zhaohui

    2016-01-01

    Cyborg intelligence is an emerging kind of intelligence paradigm. It aims to deeply integrate machine intelligence with biological intelligence by connecting machines and living beings via neural interfaces, enhancing strength by combining the biological cognition capability with the machine computational capability. Cyborg intelligence is considered to be a new way to augment living beings with machine intelligence. In this paper, we build rat cyborgs to demonstrate how they can expedite the maze escape task with integration of machine intelligence. We compare the performance of maze solving by computer, by individual rats, and by computer-aided rats (i.e. rat cyborgs). They were asked to find their way from a constant entrance to a constant exit in fourteen diverse mazes. Performance of maze solving was measured by steps, coverage rates, and time spent. The experimental results with six rats and their intelligence-augmented rat cyborgs show that rat cyborgs have the best performance in escaping from mazes. These results provide a proof-of-principle demonstration for cyborg intelligence. In addition, our novel cyborg intelligent system (rat cyborg) has great potential in various applications, such as search and rescue in complex terrains.
