Submicron Systems Architecture Project
1981-11-01
This project is concerned with the architecture, design, and testing of VLSI systems. The principal activities in this report period include: The Tree Machine; COPE, The Homogeneous Machine; Computational Arrays; Switch-Level Model for MOS Logic Design; Testing; Local Network and Designer Workstations; Self-timed Systems; Characterization of Deadlock-Free Resource Contention; Concurrency Algebra; Language Design and Logic for Program Verification.
Servo scanning 3D micro EDM for array micro cavities using on-machine fabricated tool electrodes
NASA Astrophysics Data System (ADS)
Tong, Hao; Li, Yong; Zhang, Long
2018-02-01
Array micro cavities are useful in many fields, including micro molds, optical devices, biochips, and so on. Array servo scanning micro electro discharge machining (EDM), using array micro electrodes with a simple cross-sectional shape, has the advantage of machining complex 3D micro cavities in batches. In this paper, the machining errors caused by offline-fabricated array micro electrodes are analyzed in particular, and then a machining process of array servo scanning micro EDM using on-machine fabricated array micro electrodes is proposed. The array micro electrodes are fabricated on-machine by combined procedures including wire electro discharge grinding, array reverse copying, and electrode end trimming. Nine-array tool electrodes with Φ80 µm diameter and 600 µm length are obtained. Furthermore, the proposed process is verified by several machining experiments for achieving nine-array hexagonal micro cavities with a top side length of 300 µm, a bottom side length of 150 µm, and a depth of 112 µm or 120 µm. In the experiments, a chip hump accumulates on the electrode tips, like the built-up edge in mechanical machining, under the conditions of brass workpieces, copper electrodes, and a deionized-water dielectric. The accumulated hump can be avoided by replacing the water dielectric with an oil dielectric.
A micro-machined source transducer for a parametric array in air.
Lee, Haksue; Kang, Daesil; Moon, Wonkyu
2009-04-01
Parametric array applications in air, such as highly directional parametric loudspeaker systems, usually rely on large radiators to generate the high-intensity primary beams required for nonlinear interactions. However, a conventional transducer, as a primary wave projector, requires a great deal of electrical power because its electroacoustic efficiency is very low due to the large characteristic mechanical impedance in air. The feasibility of a micro-machined ultrasonic transducer as an efficient finite-amplitude wave projector was studied. A piezoelectric micro-machined ultrasonic transducer array consisting of lead zirconate titanate uni-morph elements was designed and fabricated for this purpose. Theoretical and experimental evaluations showed that a micro-machined ultrasonic transducer array can be used as an efficient source transducer for a parametric array in air. The beam patterns and propagation curves of the difference frequency wave and the primary wave generated by the micro-machined ultrasonic transducer array were measured. Although the theoretical results were based on ideal parametric array models, the theoretical data explained the experimental results reasonably well. These experiments demonstrated the potential of a micro-machined primary wave projector.
Bradbury, Kyle; Saboo, Raghav; Johnson, Timothy L.; Malof, Jordan M.; Devarajan, Arjun; Zhang, Wuming; Collins, Leslie M.; Newell, Richard G.
2016-01-01
Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap in distributed solar photovoltaic (PV) arrays, of which there is limited public data on solar PV deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment. PMID:27922592
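For illustration, one of the dataset applications mentioned above, estimating installed PV capacity, can be sketched from the panel border vertices alone. This is a minimal sketch, not the authors' methodology: it assumes the vertices have already been projected to a metric coordinate system and uses an assumed nominal power density.

```python
# Minimal sketch: estimating installed PV capacity from panel footprint polygons.
# Assumptions (not from the dataset documentation): vertices are given in a
# projected coordinate system in meters, and a nominal power density of
# 180 W per square meter of panel area is used.

def polygon_area_m2(vertices):
    """Shoelace formula for a simple polygon given as [(x, y), ...] in meters."""
    n = len(vertices)
    area = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def estimated_capacity_kw(panels, watts_per_m2=180.0):
    """Sum an assumed power density over all panel polygons (kW)."""
    total_area = sum(polygon_area_m2(p) for p in panels)
    return total_area * watts_per_m2 / 1000.0

# Example with two hypothetical rooftop arrays (coordinates in meters).
panels = [
    [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)],     # 8 m^2
    [(10.0, 5.0), (13.0, 5.0), (13.0, 7.0), (10.0, 7.0)],  # 6 m^2
]
print(f"Estimated capacity: {estimated_capacity_kw(panels):.2f} kW")  # ~2.52 kW
```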
ERIC Educational Resources Information Center
Cool, Nate; Strimel, Greg J.; Croly, Michael; Grubbs, Michael E.
2017-01-01
To be technologically and engineering literate, people should be able to "make" or produce quality solutions to engineering design challenges while recognizing and understanding how to avoid hazards in a broad array of situations when properly using tools, machines, and materials (Haynie, 2009; Gunter, 2007; ITEA/ITEEA, 2000/2002/2007).…
Resource Management in Constrained Dynamic Situations
NASA Astrophysics Data System (ADS)
Seok, Jinwoo
Resource management is considered in this dissertation for systems with limited resources, possibly combined with other system constraints, in unpredictably dynamic environments. Resources may represent fuel, power, capabilities, energy, and so on. Resource management is important for many practical systems; usually, resources are limited, and their use must be optimized. Furthermore, systems are often constrained, and constraints must be satisfied for safe operation. Simplistic resource management can result in poor use of resources and failure of the system. Many real-world situations also involve dynamic environments. Many traditional problems are formulated based on the assumptions of given probabilities or perfect knowledge of future events. However, in many cases, the future is completely unknown, and information on or probabilities about future events are not available. In other words, we operate in unpredictably dynamic situations. A method is therefore needed to handle dynamic situations without knowledge of the future, but few formal methods have been developed to address them. Thus, the goal is to design resource management methods for constrained systems, with limited resources, in unpredictably dynamic environments. To this end, resource management is organized hierarchically into two levels: 1) planning, and 2) control. At the planning level, the set of tasks to be performed is scheduled based on limited resources to maximize resource usage in unpredictably dynamic environments. At the control level, the system controller is designed to follow the schedule while respecting all system constraints for safe and efficient operation. Consequently, this dissertation is divided into two main parts: 1) planning-level design, based on finite state machines, and 2) control-level methods, based on model predictive control. We define a recomposable restricted finite state machine to handle limited-resource situations and unpredictably dynamic environments at the planning level. To obtain a policy, dynamic programming is applied, and to obtain a solution, limited breadth-first search is applied to the recomposable restricted finite state machine. A multi-function phased array radar resource management problem and an unmanned aerial vehicle patrolling problem are treated using recomposable restricted finite state machines. We then use model predictive control at the control level, because it allows constraint handling and setpoint tracking for the schedule. An aircraft power system management problem is treated that aims to develop an integrated control system for an aircraft gas turbine engine and electrical power system using rate-based model predictive control. Our results indicate that, at the planning level, limited breadth-first search for recomposable restricted finite state machines generates good scheduling solutions in limited-resource situations and unpredictably dynamic environments. The importance of cooperation at the planning level is also verified. At the control level, a rate-based model predictive controller allows good schedule tracking and safe operation. The importance of considering the system constraints and the interactions between subsystems is demonstrated. For the best resource management in constrained dynamic situations, the planning level and the control level need to be considered together.
Choi, Woong-Kirl; Kim, Seong-Hyun; Choi, Seung-Geon; Lee, Eun-Sang
2018-01-01
Ultra-precision products which contain a micro-hole array have recently shown remarkable demand growth in many fields, especially in the semiconductor and display industries. Photoresist etching and electrochemical machining are widely known as precision methods for machining micro-holes with no residual stress and lower surface roughness on the fabricated products. The Invar shadow masks used for organic light-emitting diodes (OLEDs) contain numerous micro-holes and are currently machined by a photoresist etching method. However, this method has several problems, such as uncontrollable hole machining accuracy, non-etched areas, and overcutting. To solve these problems, a machining method that combines photoresist etching and electrochemical machining can be applied. In this study, negative photoresist with a quadrilateral hole array pattern was dry coated onto 30-µm-thick Invar thin film, and then exposure and development were carried out. After that, photoresist single-side wet etching and a fusion method of wet etching-electrochemical machining were used to machine micro-holes on the Invar. The hole machining geometry, surface quality, and overcutting characteristics of the methods were studied. Wet etching and electrochemical fusion machining can improve the accuracy and surface quality. The overcutting phenomenon can also be controlled by the fusion machining. Experimental results show that the proposed method is promising for the fabrication of Invar film shadow masks. PMID:29351235
Assessment of metal ion concentration in water with structured feature selection.
Naula, Pekka; Airola, Antti; Pihlasalo, Sari; Montoya Perez, Ileana; Salakoski, Tapio; Pahikkala, Tapio
2017-10-01
We propose a cost-effective system for the determination of metal ion concentration in water, addressing a central issue in water resources management. The system combines novel luminometric label array technology with a machine learning algorithm that selects a minimal number of array reagents (modulators) and liquid sample dilutions that enable accurate quantification. The algorithm is able to identify the optimal modulators and sample dilutions, leading to cost reductions since less manual labour and fewer resources are needed. Inferring the ion detector involves a unique type of structured feature selection problem, which we formalize in this paper. We propose a novel Cartesian greedy forward feature selection algorithm for solving the problem. The novel algorithm was evaluated in the concentration assessment of five metal ions, and its performance was compared to two known feature selection approaches. The results demonstrate that the proposed system can assist in lowering the costs with minimal loss in accuracy.
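For context, a minimal sketch of plain greedy forward feature selection with cross-validated error is shown below; the paper's Cartesian variant, which selects modulator and dilution features jointly over a structured grid, is not reproduced, and the ridge-regression scorer and toy data are assumptions.

```python
# Minimal sketch of greedy forward feature selection, assuming scikit-learn is
# available. This is the plain (unstructured) variant; the paper's Cartesian
# algorithm selects modulator/dilution pairs jointly, which is not shown here.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def forward_select(X, y, max_features):
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        best_feat, best_score = None, -np.inf
        for f in remaining:
            cols = selected + [f]
            score = cross_val_score(Ridge(alpha=1.0), X[:, cols], y,
                                    cv=5, scoring="neg_mean_squared_error").mean()
            if score > best_score:
                best_feat, best_score = f, score
        selected.append(best_feat)
        remaining.remove(best_feat)
    return selected

# Toy example: 40 samples, 12 candidate reagent/dilution features.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 12))
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + rng.normal(scale=0.1, size=40)
print(forward_select(X, y, max_features=2))  # expected to pick features 3 and 7
```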
Array servo scanning micro EDM of 3D micro cavities
NASA Astrophysics Data System (ADS)
Tong, Hao; Li, Yong; Yi, Futing
2011-05-01
Micro electro discharge machining (micro EDM) is a non-traditional machining technology with the special advantages of low set-up cost and low cutting force in machining any conductive material regardless of its hardness. As is well known, die-sinking EDM is unsuitable for machining complex 3D micro cavities smaller than 1 mm because of the high cost of fabricating the 3D microelectrode itself and its serious wear during the EDM process. In our former study, a servo scanning 3D micro EDM (3D SSMEDM) method was put forward, and our experiments showed that it was capable of fabricating complex 3D micro cavities. In this study, in order to improve machining efficiency and consistency of accuracy for arrays of 3D micro cavities, an array servo scanning 3D micro EDM (3D ASSMEDM) method is presented, combining the complementary advantages of 3D SSMEDM and array micro electrodes with simple cross-sections. During the 3D ASSMEDM process, the array cavities designed in a CAD/CAM system can be batch-manufactured by servo scanning layer by layer using array rod-like micro tool electrodes, and the axial wear of the array electrodes is compensated in real time by keeping the discharge gap constant. To verify the effectiveness of 3D ASSMEDM, arrays of triangular micro cavities (side length 630 μm) are batch-manufactured on P-doped silicon using array micro electrodes with square cross-sections fabricated by the LIGA process. Our exploratory experiments show that 3D ASSMEDM provides a feasible approach for the batch manufacture of 3D array micro cavities in conductive materials.
NASA Astrophysics Data System (ADS)
Yan, X. Y.; Chen, G. X.; Liu, J. W.
2018-03-01
A superhydrophobic copper surface with a micro-nanocomposite structure has been successfully fabricated by employing a silk-screen printing aided electrochemical machining method. First, silk-screen printing was used to form a column point array mask, and then the microcolumn array was fabricated by electrochemical machining (ECM). In this study, the drop contact angles were measured, and scanning electron microscopy (SEM) was used to study the surface characteristics of the workpiece. The experimental results show that the micro-nanocomposite structure with a cylindrical array can be successfully fabricated on the metal surface, and the maximum contact angle is 151° when a fluoroalkylsilane ethanol solution is used to modify the machined surface.
Automated solar module assembly line
NASA Technical Reports Server (NTRS)
Bycer, M.
1980-01-01
The solar module assembly machine which Kulicke and Soffa delivered under this contract is a cell tabbing and stringing machine capable of handling a variety of cells and assembling strings up to 4 feet long, which can then be placed into a module array up to 2 feet by 4 feet in a series or parallel arrangement and in a straight or interdigitated array format. The machine cycle is 5 seconds per solar cell. The machine is primarily adapted to 3-inch-diameter round cells with two tabs between cells. Pulsed heat is used as the bonding technique for solar cell interconnects. The solar module assembly machine unloads solar cells from a cassette, automatically orients them, applies flux, and solders interconnect ribbons onto the cells. It then inverts the tabbed cells, connects them into cell strings, and delivers them into a module array format using a track-mounted vacuum lance, from which they are taken to test and cleaning benches prior to final encapsulation into finished solar modules. Throughout the machine the solar cell is handled very carefully, and any contact with the collector side of the cell is avoided or minimized.
Large Scale Analysis of Geospatial Data with Dask and XArray
NASA Astrophysics Data System (ADS)
Zender, C. S.; Hamman, J.; Abernathey, R.; Evans, K. J.; Rocklin, M.
2017-12-01
The analysis of geospatial data with high level languages has accelerated innovation and the impact of existing data resources. However, as datasets grow beyond single-machine memory, data structures within these high level languages can become a bottleneck. New libraries like Dask and XArray resolve some of these scalability issues, providing interactive workflows that are both familiar to high-level-language researchers while also scaling out to much larger datasets. This broadens the access of researchers to larger datasets on high performance computers and, through interactive development, reduces time-to-insight when compared to traditional parallel programming techniques (MPI). This talk describes Dask, a distributed dynamic task scheduler, Dask.array, a multi-dimensional array that copies the popular NumPy interface, and XArray, a library that wraps NumPy/Dask.array with labeled and indexed axes, implementing the CF conventions. We discuss both the basic design of these libraries and how they change interactive analysis of geospatial data, as well as recent benefits and challenges of distributed computing on clusters of machines.
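As a concrete illustration of the workflow the abstract describes, the following minimal sketch opens a NetCDF file lazily with XArray backed by Dask and computes a reduction; the file name, variable name, and chunk sizes are placeholders.

```python
# Minimal sketch of out-of-core analysis with XArray backed by Dask arrays.
# The file name, variable name, and chunk sizes are placeholders.
import xarray as xr

# Opening with `chunks=` wraps each variable in a lazy Dask array instead of
# loading it into memory; nothing is read until a result is requested.
ds = xr.open_dataset("sst_monthly.nc", chunks={"time": 120, "lat": 180, "lon": 360})

# Label-based operations build a task graph; .compute() executes it, in
# parallel across threads on one machine or across a Dask cluster.
monthly_clim = ds["sst"].groupby("time.month").mean(dim="time")
result = monthly_clim.compute()
print(result)
```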
Dynamically programmable cache
NASA Astrophysics Data System (ADS)
Nakkar, Mouna; Harding, John A.; Schwartz, David A.; Franzon, Paul D.; Conte, Thomas
1998-10-01
Reconfigurable machines have recently been used as co-processors to accelerate the execution of certain algorithms or program subroutines. The problems with this approach include high reconfiguration time and limited partial reconfiguration. By far the most critical problems are: (1) the small on-chip memory, which results in slower execution time, and (2) small FPGA areas that cannot implement large subroutines. Dynamically Programmable Cache (DPC) is a novel architecture for embedded processors which offers solutions to the above problems. To solve memory access problems, DPC processors merge reconfigurable arrays with the data cache at various cache levels to create a multi-level reconfigurable machine. As a result, DPC machines have both higher data accessibility and higher FPGA memory bandwidth. To solve the limited FPGA resource problem, DPC processors implement the multi-context switching (virtualization) concept. Virtualization allows implementation of large subroutines with fewer FPGA cells. Additionally, DPC processors can parallelize the execution of several operations, resulting in faster execution time. In this paper, the speedup of DPC machines is shown to be 5X over an Altera FLEX10K FPGA chip and 2X over a Sun Ultra 1 SPARCstation for two different algorithms (convolution and motion estimation).
Methods for the Precise Locating and Forming of Arrays of Curved Features into a Workpiece
Gill, David Dennis; Keeler, Gordon A.; Serkland, Darwin K.; Mukherjee, Sayan D.
2008-10-14
Methods for manufacturing high precision arrays of curved features (e.g. lenses) in the surface of a workpiece are described utilizing orthogonal sets of inter-fitting locating grooves to mate a workpiece to a workpiece holder mounted to the spindle face of a rotating machine tool. The matching inter-fitting groove sets in the workpiece and the chuck allow precisely and non-kinematically indexing the workpiece to locations defined in two orthogonal directions perpendicular to the turning axis of the machine tool. At each location on the workpiece a curved feature can then be on-center machined to create arrays of curved features on the workpiece. The averaging effect of the corresponding sets of inter-fitting grooves provides for precise repeatability in determining the relative locations of the centers of each of the curved features in an array of curved features.
Moreno-Tapia, Sandra Veronica; Vera-Salas, Luis Alberto; Osornio-Rios, Roque Alfredo; Dominguez-Gonzalez, Aurelio; Stiharu, Ion; de Jesus Romero-Troncoso, Rene
2010-01-01
Computer numerically controlled (CNC) machines have evolved to adapt to increasing technological and industrial requirements. To cover these needs, new-generation machines have to perform monitoring strategies by incorporating multiple sensors. Since in most applications the online processing of the variables is essential, the use of smart sensors is necessary. The contribution of this work is the development of a wireless network platform of reconfigurable smart sensors for CNC machine applications, complying with the measurement requirements of new-generation CNC machines. Four different smart sensors are put under test in the network and their corresponding signal processing techniques are implemented in a Field Programmable Gate Array (FPGA)-based sensor node. PMID:22163602
Analyzing Array Manipulating Programs by Program Transformation
NASA Technical Reports Server (NTRS)
Cornish, J. Robert M.; Gange, Graeme; Navas, Jorge A.; Schachte, Peter; Sondergaard, Harald; Stuckey, Peter J.
2014-01-01
We explore a transformational approach to the problem of verifying simple array-manipulating programs. Traditionally, verification of such programs requires intricate analysis machinery to reason with universally quantified statements about symbolic array segments, such as "every data item stored in the segment A[i] to A[j] is equal to the corresponding item stored in the segment B[i] to B[j]." We define a simple abstract machine which allows for set-valued variables and we show how to translate programs with array operations to array-free code for this machine. For the purpose of program analysis, the translated program remains faithful to the semantics of array manipulation. Based on our implementation in LLVM, we evaluate the approach with respect to its ability to extract useful invariants and the cost in terms of code size.
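The core idea, summarizing an array segment by a set-valued variable so that quantified properties become array-free set checks, can be illustrated with a hand-worked example. This is a sketch of the concept only, not the paper's actual translation rules.

```python
# Hand-worked illustration of the transformation idea: an array segment is
# summarized by a set-valued variable holding the values stored in it, so the
# universally quantified property "every A[k] in A[0..n) equals c" becomes the
# array-free check seg_values <= {c}. This is a sketch of the concept only,
# not the paper's actual LLVM-level translation rules.

def original(n, c):
    A = [None] * n
    for i in range(n):
        A[i] = c
    # property to verify: every element of A[0..n) equals c
    return all(A[k] == c for k in range(n))

def translated(n, c):
    seg_values = set()          # summarizes the written segment A[0..i)
    for i in range(n):
        seg_values |= {c}       # the write A[i] = c adds c to the summary
    # array-free form of the property:
    return seg_values <= {c}

assert original(5, 7) == translated(5, 7) == True
```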
The performance of disk arrays in shared-memory database machines
NASA Technical Reports Server (NTRS)
Katz, Randy H.; Hong, Wei
1993-01-01
In this paper, we examine how disk arrays and shared memory multiprocessors lead to an effective method for constructing database machines for general-purpose complex query processing. We show that disk arrays can lead to cost-effective storage systems if they are configured from suitably small formfactor disk drives. We introduce the storage system metric data temperature as a way to evaluate how well a disk configuration can sustain its workload, and we show that disk arrays can sustain the same data temperature as a more expensive mirrored-disk configuration. We use the metric to evaluate the performance of disk arrays in XPRS, an operational shared-memory multiprocessor database system being developed at the University of California, Berkeley.
Micro-Machined High-Frequency (80 MHz) PZT Thick Film Linear Arrays
Zhou, Qifa; Wu, Dawei; Liu, Changgeng; Zhu, Benpeng; Djuth, Frank; Shung, K. Kirk
2010-01-01
This paper presents the development of a micro-machined high-frequency linear array using PZT piezoelectric thick films. The linear array has 32 elements with an element width of 24 μm and an element length of 4 mm. Array elements were fabricated by deep reactive ion etching of PZT thick films, which were prepared by spin-coating of a PZT sol-gel composite. Detailed fabrication processes, especially the PZT thick film etching conditions and a novel transferring-and-etching method, are presented and discussed. Array designs were evaluated by simulation. Experimental measurements show that the array had a center frequency of 80 MHz and a fractional bandwidth (−6 dB) of 60%. An insertion loss of −41 dB and adjacent-element crosstalk of −21 dB were found at the center frequency. PMID:20889407
Robotic inspection of fiber reinforced composites using phased array UT
NASA Astrophysics Data System (ADS)
Stetson, Jeffrey T.; De Odorico, Walter
2014-02-01
Ultrasound is the current NDE method of choice to inspect large fiber reinforced airframe structures. Over the last 15 years Cartesian based scanning machines using conventional ultrasound techniques have been employed by all airframe OEMs and their top tier suppliers to perform these inspections. Technical advances in both computing power and commercially available, multi-axis robots now facilitate a new generation of scanning machines. These machines use multiple end effector tools taking full advantage of phased array ultrasound technologies yielding substantial improvements in inspection quality and productivity. This paper outlines the general architecture for these new robotic scanning systems as well as details the variety of ultrasonic techniques available for use with them including advances such as wide area phased array scanning and sound field adaptation for non-flat, non-parallel surfaces.
Precise on-machine extraction of the surface normal vector using an eddy current sensor array
NASA Astrophysics Data System (ADS)
Wang, Yongqing; Lian, Meng; Liu, Haibo; Ying, Yangwei; Sheng, Xianjun
2016-11-01
To satisfy the requirements of on-machine measurement of the surface normal during complex surface manufacturing, a highly robust normal vector extraction method using an eddy current (EC) displacement sensor array is developed, the output of which is almost unaffected by surface brightness, machining coolant, and environmental noise. A precise normal vector extraction model based on a triangular-distributed EC sensor array is first established. Calibration of the effects of object surface inclination and coupling interference on the measurement results, as well as of the relative positions of the EC sensors, is included. A novel apparatus employing three EC sensors and a force transducer was designed, which can be easily integrated into a computer numerical control (CNC) machine tool spindle and/or a robot end effector. Finally, to test the validity and practicability of the proposed method, typical experiments were conducted on specified test pieces, such as an inclined plane and cylindrical and spherical surfaces, using the developed approach and system.
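For intuition, once the triangular sensor array yields three measured points on the surface, the normal direction follows from a cross product of two edge vectors. The sketch below makes that step concrete; the sensor layout and readings are hypothetical, and the paper's inclination and coupling-interference calibration is omitted.

```python
# Minimal sketch: estimating a surface normal from three points measured by a
# triangular displacement-sensor array. The sensor layout and readings are
# hypothetical, and the paper's inclination/coupling-interference calibration
# is omitted; each measured point is taken as the sensor position plus the
# displacement reading along the common sensor axis (here the z axis).
import numpy as np

def surface_normal(points):
    """Unit normal of the plane through three non-collinear 3-D points."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in points)
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

# Three EC sensors on a 30 mm equilateral triangle (x, y in mm), each reporting
# a stand-off distance d_i (mm) to the surface along z.
sensor_xy = [(0.0, 0.0), (30.0, 0.0), (15.0, 25.98)]
readings = [5.0, 5.5, 6.0]
points = [(x, y, -d) for (x, y), d in zip(sensor_xy, readings)]
print(surface_normal(points))
```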
JPRS Report, Science & Technology, China, High-Performance Computer Systems
1992-10-28
The microprocessor array in the AP85 system is composed of 16 completely identical array-element microprocessors. Each array element... microprocessors and capable of host machine reading and writing. The memory capacity of the array-element microprocessors as a whole can be expanded... transmission functions to carry out data transmission from array-element microprocessor to array-element microprocessor, from array element...
Jung, Haejoon; Lee, In-Ho
2018-01-12
As an intrinsic part of the Internet of Things (IoT) ecosystem, machine-to-machine (M2M) communications are expected to provide ubiquitous connectivity between machines. Millimeter-wave (mmWave) communication is another promising technology for future communication systems to alleviate the pressure of scarce spectrum resources. For this reason, in this paper, we consider multi-hop M2M communications, where a machine-type communication (MTC) device with limited transmit power relays to help other devices using mmWave. To be specific, we focus on hop distance statistics and their impacts on system performance in multi-hop wireless networks (MWNs) with directional antenna arrays in mmWave for M2M communications. Different from microwave systems, in mmWave communications the wireless channel suffers from blockage by obstacles that heavily attenuate line-of-sight signals, which may result in limited per-hop progress in MWNs. We consider two routing strategies aiming at different types of applications and derive the probability distributions of their hop distances. Moreover, we provide their baseline statistics assuming a blockage-free scenario to quantify the impact of blockages. Based on the hop distance analysis, we propose a method to estimate the end-to-end performance (e.g., outage probability, hop count, and transmit energy) of mmWave MWNs, which provides important insights into mmWave MWN design without time-consuming and repetitive end-to-end simulation.
Users' Manual and Installation Guide for the EverVIEW Slice and Dice Tool (Version 1.0 Beta)
Roszell, Dustin; Conzelmann, Craig; Chimmula, Sumani; Chandrasekaran, Anuradha; Hunnicut, Christina
2009-01-01
Network Common Data Form (NetCDF) is a self-describing, machine-independent file format for storing array-oriented scientific data. Over the past few years, there has been a growing movement within the community of natural resource managers in The Everglades, Fla., to use NetCDF as the standard data container for datasets based on multidimensional arrays. As a consequence, a need arose for additional tools to view and manipulate NetCDF datasets, specifically to create subsets of large NetCDF files. To address this need, we created the EverVIEW Slice and Dice Tool to allow users to create subsets of grid-based NetCDF files. The major functions of this tool are (1) to subset NetCDF files both spatially and temporally; (2) to view the NetCDF data in table form; and (3) to export filtered data to a comma-separated value file format.
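For readers scripting the same kind of subsetting outside the tool, the following minimal sketch performs a temporal and spatial subset of a gridded NetCDF variable and exports it to CSV using the xarray library; the file, variable, and coordinate names are placeholders.

```python
# Minimal sketch of the kind of spatial/temporal subsetting and CSV export the
# tool performs, done here with the xarray library. File, variable, and
# coordinate names are placeholders for an Everglades-style gridded dataset.
import xarray as xr

ds = xr.open_dataset("stage_depth.nc")

# Subset temporally and spatially (label-based slicing on coordinate values).
subset = ds["water_depth"].sel(
    time=slice("2004-01-01", "2004-12-31"),
    lat=slice(25.0, 26.5),
    lon=slice(-81.5, -80.0),
)

# View as a table and export to comma-separated values.
table = subset.to_dataframe().reset_index()
print(table.head())
table.to_csv("water_depth_subset.csv", index=False)

# Or write the subset back out as a smaller NetCDF file.
subset.to_netcdf("water_depth_subset.nc")
```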
Service Modules for Coal Extraction
NASA Technical Reports Server (NTRS)
Gangal, M. D.; Lewis, E. V.
1985-01-01
Service train follows group of mining machines, paying out utility lines as machines progress into coal face. Service train for four mining machines removes gases and coal and provides water and electricity. Flexible, coiling armored carriers protect cables and hoses. High coal production attained by arraying row of machines across face, working side by side.
NASA Astrophysics Data System (ADS)
Land, Walker H., Jr.; Lewis, Michael; Sadik, Omowunmi; Wong, Lut; Wanekaya, Adam; Gonzalez, Richard J.; Balan, Arun
2004-04-01
This paper extends the classification approaches described in reference [1] in the following ways: (1) developing and evaluating a new method for evolving organophosphate nerve agent Support Vector Machine (SVM) classifiers using Evolutionary Programming, (2) conducting research experiments using a larger database of organophosphate nerve agents, and (3) upgrading the architecture to an object-based grid system for evaluating the classification of EP-derived SVMs. Due to the increased threat of chemical and biological weapons of mass destruction (WMD) posed by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents using a grid computing system called Legion. Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. Finally, preliminary results using EP-derived support vector machines designed to operate on distributed systems have provided accurate classification results. In addition, distributed training-time architectures are 50 times faster when compared to standard iterative training-time methods.
A comparison of machine learning and Bayesian modelling for molecular serotyping.
Newton, Richard; Wernisch, Lorenz
2017-08-11
Streptococcus pneumoniae is a human pathogen that is a major cause of infant mortality. Identifying the pneumococcal serotype is an important step in monitoring the impact of vaccines used to protect against disease. Genomic microarrays provide an effective method for molecular serotyping. Previously we developed an empirical Bayesian model for the classification of serotypes from a molecular serotyping array. With only a few samples available, a model-driven approach was the only option. In the meantime, several thousand samples have been made available to us, providing an opportunity to investigate serotype classification by machine learning methods, which could complement the Bayesian model. We compare the performance of the original Bayesian model with two machine learning algorithms: Gradient Boosting Machines and Random Forests. We present our results as an example of a generic strategy whereby a preliminary probabilistic model is complemented or replaced by a machine learning classifier once enough data are available. Despite the availability of thousands of serotyping arrays, a problem encountered when applying machine learning methods is the lack of training data containing mixtures of serotypes, due to the large number of possible combinations. Most of the available training data comprise samples with only a single serotype. To overcome the lack of training data we implemented an iterative analysis, creating artificial training data of serotype mixtures by combining raw data from single-serotype arrays. With the enhanced training set the machine learning algorithms outperform the original Bayesian model. However, for serotypes currently lacking sufficient training data the best performing implementation was a combination of the results of the Bayesian model and the Gradient Boosting Machine. As well as being an effective method for classifying biological data, machine learning can also be used as an efficient method for revealing subtle biological insights, which we illustrate with an example.
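The augmentation strategy, building artificial mixture samples from raw single-serotype arrays and then training a boosted classifier, can be sketched as follows. This is a minimal illustration assuming scikit-learn; the element-wise-maximum combination rule and the toy data are assumptions, not the paper's exact procedure.

```python
# Minimal sketch of the training-data augmentation idea: artificial mixtures
# are built by combining raw single-serotype array profiles, then a boosted
# classifier is trained per serotype (multi-label). scikit-learn is assumed,
# and the element-wise-maximum combination rule is an assumption, not
# necessarily the paper's.
import numpy as np
from itertools import combinations
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(1)
n_serotypes, n_probes = 4, 50

# Toy single-serotype array profiles (rows: samples, columns: probe signals).
singles_X = rng.random((40, n_probes))
singles_y = rng.integers(0, n_serotypes, size=40)

# Build artificial two-serotype mixtures by combining pairs of raw profiles.
mix_X, mix_y = [], []
for i, j in combinations(range(len(singles_X)), 2):
    if singles_y[i] == singles_y[j]:
        continue
    mix_X.append(np.maximum(singles_X[i], singles_X[j]))
    label = np.zeros(n_serotypes, dtype=int)
    label[[singles_y[i], singles_y[j]]] = 1
    mix_y.append(label)

X = np.vstack([singles_X, np.array(mix_X)])
Y = np.vstack([np.eye(n_serotypes, dtype=int)[singles_y], np.array(mix_y)])

clf = MultiOutputClassifier(GradientBoostingClassifier()).fit(X, Y)
print(clf.predict(X[:3]))  # one 0/1 column per serotype
```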
Fabrication of micro-lens array on convex surface by means of micro-milling
NASA Astrophysics Data System (ADS)
Zhang, Peng; Du, Yunlong; Wang, Bo; Shan, Debin
2014-08-01
In order to extend the application of micro-milling technology and to fabricate ultra-precision optical surfaces with complex microstructures, primary experimental research on micro-milling of complex microstructure arrays is carried out in this paper. A complex microstructure array surface with varying parameters is designed, and a mathematical model of the surface is set up and simulated. For the fabrication of the designed microstructure array surface, a micro three-axis ultra-precision milling machine tool is developed; aerostatic guideways driven directly by linear motors are adopted to guarantee sufficient stiffness of the machine, and a novel numerical control strategy with linear encoders of 5 nm resolution used as the feedback of the control system is employed to ensure extremely high motion control accuracy. With the help of CAD/CAM technology, convex micro lens arrays on convex spherical surfaces with different scales are fabricated in polyvinyl chloride (PVC) and pure copper using a micro tungsten carbide ball end milling tool on the ultra-precision micro-milling machine. Excellent nanometer-level micro-movement performance of the axes is demonstrated by motion control experiments. The fabricated surface is nearly the same as the design: the characteristic scale of the microstructure is less than 200 μm and the accuracy is better than 1 μm. This proves that ultra-precision micro-milling based on a micro ultra-precision machine tool is a suitable method for the micro manufacture of microstructure array surfaces on different kinds of materials, and with the development of micro milling cutters, ultra-precision micro-milling of complex microstructure surfaces will be achievable in the future.
A new measuring machine in Paris
NASA Technical Reports Server (NTRS)
Guibert, J.; Charvin, P.
1984-01-01
A new photographic measuring machine is under construction at the Paris Observatory. The amount of transmitted light is measured by a linear array of 1024 photodiodes. Carriage control, data acquisition and on line processing are performed by microprocessors, a S.E.L. 32/27 computer, and an AP 120-B Array Processor. It is expected that a Schmidt telescope plate of size 360 mm square will be scanned in one hour with pixel size of ten microns.
Li, Ning; Cao, Chao; Wang, Cong
2017-06-15
Supporting simultaneous access of machine-type devices is a critical challenge in machine-to-machine (M2M) communications. In this paper, we propose an optimal scheme to dynamically adjust the Access Class Barring (ACB) factor and the number of random access channel (RACH) resources for clustered machine-to-machine (M2M) communications, in which Delay-Sensitive (DS) devices coexist with Delay-Tolerant (DT) ones. In M2M communications, since delay-sensitive devices share random access resources with delay-tolerant devices, reducing the resources consumed by delay-sensitive devices means that there will be more resources available to delay-tolerant ones. Our goal is to optimize the random access scheme, which can not only satisfy the requirements of delay-sensitive devices, but also take the communication quality of delay-tolerant ones into consideration. We discuss this problem from the perspective of delay-sensitive services by adjusting the resource allocation and ACB scheme for these devices dynamically. Simulation results show that our proposed scheme realizes good performance in satisfying the delay-sensitive services as well as increasing the utilization rate of the random access resources allocated to them.
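For intuition about what the ACB factor controls, the following minimal simulation sketch lets each backlogged device pass the barring check with probability equal to the ACB factor and then contend on a fixed pool of RACH preambles; the parameter values are illustrative only.

```python
# Minimal sketch of the Access Class Barring (ACB) mechanism being tuned: each
# backlogged device passes the barring check with probability `acb_factor`,
# then picks one of `n_preambles` RACH preambles uniformly at random; a
# preamble chosen by exactly one device is a success. Parameter values are
# illustrative only.
import random
from collections import Counter

def rach_slot(n_devices, acb_factor, n_preambles, rng=random):
    contenders = [d for d in range(n_devices) if rng.random() < acb_factor]
    choices = Counter(rng.randrange(n_preambles) for _ in contenders)
    successes = sum(1 for count in choices.values() if count == 1)
    return len(contenders), successes

random.seed(0)
for acb in (1.0, 0.5, 0.2):
    attempts, successes = zip(*(rach_slot(200, acb, 54) for _ in range(1000)))
    print(f"ACB={acb:.1f}: mean attempts {sum(attempts)/1000:.1f}, "
          f"mean successes {sum(successes)/1000:.1f}")
```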
Latin-square three-dimensional gage master
Jones, L.
1981-05-12
A gage master for coordinate measuring machines has an nxn array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.
Latin square three dimensional gage master
Jones, Lynn L.
1982-01-01
A gage master for coordinate measuring machines has an nxn array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.
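The Latin-square layout referred to in both records ensures that every Z height appears exactly once in each row and column of the n x n object array. A minimal sketch of one standard (cyclic) construction is shown below; the patent's actual layout and step size may differ.

```python
# Minimal sketch: a cyclic n x n Latin square used to assign Z heights to the
# gage-master object array, so that each height level appears exactly once in
# every row and column. The cyclic construction is one standard way to build a
# Latin square; the patent's actual layout may differ.
def latin_square(n):
    return [[(row + col) % n for col in range(n)] for row in range(n)]

def z_heights(n, step_mm=0.5):
    """Map Latin-square levels 0..n-1 to Z offsets in millimeters."""
    return [[level * step_mm for level in row] for row in latin_square(n)]

for row in z_heights(4):
    print(row)
# [0.0, 0.5, 1.0, 1.5]
# [0.5, 1.0, 1.5, 0.0]
# [1.0, 1.5, 0.0, 0.5]
# [1.5, 0.0, 0.5, 1.0]
```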
Generation of Custom DSP Transform IP Cores: Case Study Walsh-Hadamard Transform
2002-09-01
(Slide fragments: the work bridges the mathematician's view (linear algebra, digital signal processing, adaptive filter theory) and the hardware engineer's view (finite state machines, pipelining, systolic arrays); generated cores are parameterized by bit-width and HF/VF factors and evaluated for performance through Xilinx FPGA place-and-route.)
Scheduling algorithms for automatic control systems for technological processes
NASA Astrophysics Data System (ADS)
Chernigovskiy, A. S.; Tsarev, R. Yu; Kapulin, D. V.
2017-01-01
The wide use of automatic process control systems and of high-performance systems containing a number of computers (processors) creates opportunities for high-quality, fast production that increases the competitiveness of an enterprise. Exact and fast calculations, control computation, and the processing of big data arrays all require a high level of productivity and, at the same time, minimum time for data handling and delivery of results. In order to achieve the best time, it is necessary not only to use computing resources optimally, but also to design and develop the software so that the time gain is maximal. For this purpose, task (job or operation) scheduling techniques for multi-machine/multiprocessor systems are applied. Some basic task scheduling methods for multi-machine process control systems are considered in this paper, their advantages and disadvantages are brought to light, and some usage considerations for developing software for automatic process control systems are given.
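As an example of the kind of basic scheduling method such surveys cover, the following minimal sketch implements longest-processing-time-first list scheduling on identical machines; the task durations are illustrative, and this is not presented as the paper's specific algorithm.

```python
# Minimal sketch of a classical heuristic for scheduling independent tasks on
# identical processors: longest-processing-time-first (LPT) list scheduling,
# which sorts tasks by decreasing duration and always assigns the next task to
# the currently least-loaded machine. Durations are illustrative.
import heapq

def lpt_schedule(durations, n_machines):
    # Min-heap of (current load, machine id).
    loads = [(0.0, m) for m in range(n_machines)]
    heapq.heapify(loads)
    assignment = {m: [] for m in range(n_machines)}
    for task, d in sorted(enumerate(durations), key=lambda t: -t[1]):
        load, m = heapq.heappop(loads)
        assignment[m].append(task)
        heapq.heappush(loads, (load + d, m))
    makespan = max(load for load, _ in loads)
    return assignment, makespan

durations = [7, 5, 4, 4, 3, 2, 2, 1]
assignment, makespan = lpt_schedule(durations, n_machines=3)
print(assignment, makespan)
```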
30 CFR 56.14107 - Moving machine parts.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Moving machine parts. 56.14107 Section 56.14107 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Safety Devices and Maintenance Requirements § 56.14107 Moving machine parts. (a) Moving machine parts...
30 CFR 57.14107 - Moving machine parts.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Moving machine parts. 57.14107 Section 57.14107 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE... Equipment Safety Devices and Maintenance Requirements § 57.14107 Moving machine parts. (a) Moving machine...
Biomimetic machine vision system.
Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael
2005-01-01
Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating resolution issues found in digital vision systems. In this paper, we will discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We will also discuss the process of developing an analog-based sensor that mimics the characteristics of interest in the biological vision system. This paper will conclude with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.
Magnetic Flux Distribution of Linear Machines with Novel Three-Dimensional Hybrid Magnet Arrays
Yao, Nan; Yan, Liang; Wang, Tianyi; Wang, Shaoping
2017-01-01
The objective of this paper is to propose a novel tubular linear machine with hybrid permanent magnet arrays and multiple movers, which could be employed for either actuation or sensing technology. The hybrid magnet array produces flux distribution on both sides of windings, and thus helps to increase the signal strength in the windings. The multiple movers are important for airspace technology, because they can improve the system’s redundancy and reliability. The proposed design concept is presented, and the governing equations are obtained based on source free property and Maxwell equations. The magnetic field distribution in the linear machine is thus analytically formulated by using Bessel functions and harmonic expansion of magnetization vector. Numerical simulation is then conducted to validate the analytical solutions of the magnetic flux field. It is proved that the analytical model agrees with the numerical results well. Therefore, it can be utilized for the formulation of signal or force output subsequently, depending on its particular implementation. PMID:29156577
An M-step preconditioned conjugate gradient method for parallel computation
NASA Technical Reports Server (NTRS)
Adams, L.
1983-01-01
This paper describes a preconditioned conjugate gradient method that can be effectively implemented on both vector machines and parallel arrays to solve sparse symmetric and positive definite systems of linear equations. The implementation on the CYBER 203/205 and on the Finite Element Machine is discussed and results obtained using the method on these machines are given.
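For context, a minimal NumPy sketch of preconditioned conjugate gradient with a simple Jacobi (diagonal) preconditioner is given below; the paper's m-step preconditioner designed for vector and array machines is not reproduced.

```python
# Minimal sketch of preconditioned conjugate gradient with a simple Jacobi
# (diagonal) preconditioner, for context only; the paper's m-step
# preconditioner designed for vector and array machines is not reproduced.
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small symmetric positive definite test system.
rng = np.random.default_rng(0)
B = rng.normal(size=(50, 50))
A = B @ B.T + 50 * np.eye(50)
b = rng.normal(size=50)
M_inv = np.diag(1.0 / np.diag(A))   # Jacobi preconditioner
x = pcg(A, b, M_inv)
print(np.linalg.norm(A @ x - b))    # residual should be near the tolerance
```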
Asaad, Sameh W; Bellofatto, Ralph E; Brezzo, Bernard; Haymes, Charles L; Kapur, Mohit; Parker, Benjamin D; Roewer, Thomas; Tierno, Jose A
2014-01-28
A plurality of target field programmable gate arrays are interconnected in accordance with a connection topology and map portions of a target system. A control module is coupled to the plurality of target field programmable gate arrays. A balanced clock distribution network is configured to distribute a reference clock signal, and a balanced reset distribution network is coupled to the control module and configured to distribute a reset signal to the plurality of target field programmable gate arrays. The control module and the balanced reset distribution network are cooperatively configured to initiate and control a simulation of the target system with the plurality of target field programmable gate arrays. A plurality of local clock control state machines reside in the target field programmable gate arrays. The local clock state machines are configured to generate a set of synchronized free-running and stoppable clocks to maintain cycle-accurate and cycle-reproducible execution of the simulation of the target system. A method is also provided.
Scheduling Jobs with Variable Job Processing Times on Unrelated Parallel Machines
Zhang, Guang-Qian; Wang, Jian-Jun; Liu, Ya-Jing
2014-01-01
m unrelated parallel machine scheduling problems with variable job processing times are considered, where the processing time of a job is a function of its position in a sequence, its starting time, and its resource allocation. The objective is to determine the optimal resource allocation and the optimal schedule to minimize a total cost function that depends on the total completion (waiting) time, the total machine load, the total absolute differences in completion (waiting) times on all machines, and the total resource cost. If the number of machines is a given constant, we propose a polynomial time algorithm to solve the problem. PMID:24982933
NASA Technical Reports Server (NTRS)
Rickard, D. A.; Bodenheimer, R. E.
1976-01-01
Digital computer components which perform two dimensional array logic operations (Tse logic) on binary data arrays are described. The properties of Golay transforms which make them useful in image processing are reviewed, and several architectures for Golay transform processors are presented with emphasis on the skeletonizing algorithm. Conventional logic control units developed for the Golay transform processors are described. One is a unique microprogrammable control unit that uses a microprocessor to control the Tse computer. The remaining control units are based on programmable logic arrays. Performance criteria are established and utilized to compare the various Golay transform machines developed. A critique of Tse logic is presented, and recommendations for additional research are included.
Whole-machine calibration approach for phased array radar with self-test
NASA Astrophysics Data System (ADS)
Shen, Kai; Yao, Zhi-Cheng; Zhang, Jin-Chang; Yang, Jian
2017-06-01
The performance of a missile-borne phased array radar is greatly influenced by inter-channel amplitude and phase inconsistencies. In order to ensure its performance, the amplitude and phase characteristics of the radar must be calibrated. Commonly used methods, such as FFT and REV, mainly focus on antenna calibration; however, the radar channel also contains T/R components, channels, ADC and messenger. In order to obtain the amplitude information of the phased array radar for rapid whole-machine calibration and compensation, we adopt a high-precision planar scanning test platform for amplitude and phase testing. A calibration approach for the whole channel system based on a radar frequency source test is proposed. Finally, the advantages and the application prospects of this approach are analysed.
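The essence of inter-channel amplitude and phase calibration is to measure each channel's complex response against a reference and apply the inverse of the relative response as a correction weight. The sketch below illustrates only that idea; the paper's frequency-source self-test procedure is not reproduced.

```python
# Minimal sketch of per-channel amplitude/phase calibration: measure each
# channel's complex response, take a reference channel, and apply the inverse
# of the relative response as a correction weight. This illustrates the idea
# only; the paper's frequency-source/self-test procedure is not reproduced.
import numpy as np

rng = np.random.default_rng(0)
n_channels = 8

# Simulated measured responses: random gain ripple and phase errors.
true_gain = 1.0 + 0.2 * rng.standard_normal(n_channels)
true_phase = np.deg2rad(rng.uniform(-30, 30, n_channels))
measured = true_gain * np.exp(1j * true_phase)

# Correction weights referenced to channel 0.
weights = measured[0] / measured

# After correction, all channels match the reference in amplitude and phase.
corrected = weights * measured
print(np.allclose(corrected, corrected[0]))            # True
print(np.abs(weights), np.rad2deg(np.angle(weights)))  # amplitude/phase corrections
```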
Using Multiple FPGA Architectures for Real-time Processing of Low-level Machine Vision Functions
Thomas H. Drayer; William E. King; Philip A. Araman; Joseph G. Tront; Richard W. Conners
1995-01-01
In this paper, we investigate the use of multiple Field Programmable Gate Array (FPGA) architectures for real-time machine vision processing. The use of FPGAs for low-level processing represents an excellent tradeoff between software and special purpose hardware implementations. A library of modules that implement common low-level machine vision operations is presented...
Rapid fabrication of miniature lens arrays by four-axis single point diamond machining
McCall, Brian; Tkaczyk, Tomasz S.
2013-01-01
A novel method for fabricating lens arrays and other non-rotationally symmetric free-form optics is presented. This is a diamond machining technique using 4 controlled axes of motion – X, Y, Z, and C. As in 3-axis diamond micro-milling, a diamond ball endmill is mounted to the work spindle of a 4-axis ultra-precision computer numerical control (CNC) machine. Unlike 3-axis micro-milling, the C-axis is used to hold the cutting edge of the tool in contact with the lens surface for the entire cut. This allows the feed rates to be doubled compared to the current state of the art of micro-milling while producing an optically smooth surface with very low surface form error and exceptionally low radius error. PMID:23481813
Combined passive bearing element/generator motor
Post, Richard F.
2000-01-01
An electric machine includes a cylindrical rotor made up of an array of permanent magnets that provide a N-pole magnetic field of even order (where N=4, 6, 8, etc.). This array of permanent magnets has bars of identical permanent magnets made of dipole elements where the bars are assembled in a circle. A stator inserted down the axis of the dipole field is made of two sets of windings that are electrically orthogonal to each other, where one set of windings provides stabilization of the stator and the other set of windings couples to the array of permanent magnets and acts as the windings of a generator/motor. The rotor and the stator are horizontally disposed, and the rotor is on the outside of said stator. The electric machine may also include two rings of ferromagnetic material. One of these rings would be located at each end of the rotor. Two levitator pole assemblies are attached to a support member that is external to the electric machine. These levitator pole assemblies interact attractively with the rings of ferromagnetic material to produce a levitating force upon the rotor.
The paradigm compiler: Mapping a functional language for the connection machine
NASA Technical Reports Server (NTRS)
Dennis, Jack B.
1989-01-01
The Paradigm Compiler implements a new approach to compiling programs written in high level languages for execution on highly parallel computers. The general approach is to identify the principal data structures constructed by the program and to map these structures onto the processing elements of the target machine. The mapping is chosen to maximize performance as determined through compile time global analysis of the source program. The source language is Sisal, a functional language designed for scientific computations, and the target language is Paris, the published low level interface to the Connection Machine. The data structures considered are multidimensional arrays whose dimensions are known at compile time. Computations that build such arrays usually offer opportunities for highly parallel execution; they are data parallel. The Connection Machine is an attractive target for these computations, and the parallel for construct of the Sisal language is a convenient high level notation for data parallel algorithms. The principles and organization of the Paradigm Compiler are discussed.
Fabrication of five-level ultraplanar micromirror arrays by flip-chip assembly
NASA Astrophysics Data System (ADS)
Michalicek, M. Adrian; Bright, Victor M.
2001-10-01
This paper reports a detailed study of the fabrication of various piston, torsion, and cantilever style micromirror arrays using a novel, simple, and inexpensive flip-chip assembly technique. Several rectangular and polar arrays were commercially prefabricated in the MUMPs process and then flip-chip bonded to form advanced micromirror arrays where adverse effects typically associated with surface micromachining were removed. These arrays were bonded by directly fusing the MUMPs gold layers with no complex preprocessing. The modules were assembled using a computer-controlled, custom-built flip-chip bonding machine. Topographically opposed bond pads were designed to correct for slight misalignment errors during bonding and typically result in less than 2 micrometers of lateral alignment error. Although flip-chip micromirror performance is briefly discussed, the means used to create these arrays is the focus of the paper. A detailed study of flip-chip process yield is presented which describes the primary failure mechanisms for flip-chip bonding. Studies of alignment tolerance, bonding force, stress concentration, module planarity, bonding machine calibration techniques, prefabrication errors, and release procedures are presented in relation to specific observations in process yield. Ultimately, the standard thermo-compression flip-chip assembly process remains a viable technique to develop highly complex prototypes of advanced micromirror arrays.
Agar, John W. M.; Perkins, Anthony; Tjipto, Alwie
2012-01-01
Summary Background and objectives Hemodialysis resource use—especially water and power, smarter processing and reuse of postdialysis waste, and improved ecosensitive building design, insulation, and space use—all need much closer attention. Regarding power, as supply diminishes and costs rise, alternative power augmentation for dialysis services becomes attractive. The first 12 months of a solar-assisted dialysis program in southeastern Australia is reported. Design, setting, participants, & measurements A 24-m2, 3-kWh rated solar array and inverter—total cost of A$16,219—has solar-assisted the dialysis-related power needs of a four-chair home hemodialysis training service. All array-created, grid-donated power and all grid-drawn power to the four hemodialysis machines and minireverse osmosis plant pairings are separately metered. After the grid-drawn and array-generated kilowatt hours have been billed and reimbursed at their respective commercial rates, financial viability, including capital repayment, can be assessed. Results From July of 2010 to July of 2011, the four combined equipment pairings used 4166.5 kWh, 9% more than the array-generated 3811.0 kWh. Power consumption at 26.7 c/kWh cost A$1145.79. Array-generated power reimbursements at 23.5 c/kWh were A$895.59. Power costs were, thus, reduced by 76.5%. As new reimbursement rates (60 c/kWh) take effect, system reimbursements will more than double, allowing both free power and potential capital pay down over 7.7 years. With expected array life of ∼30 years, free power and an income stream should accrue in the second and third operative decades. Conclusions Solar-assisted power is feasible and cost-effective. Dialysis services should assess their local solar conditions and determine whether this ecosensitive power option might suit their circumstance. PMID:22223614
Filtering NetCDF Files by Using the EverVIEW Slice and Dice Tool
Conzelmann, Craig; Romañach, Stephanie S.
2010-01-01
Network Common Data Form (NetCDF) is a self-describing, machine-independent file format for storing array-oriented scientific data. It was created to provide a common interface between applications and real-time meteorological and other scientific data. Over the past few years, there has been a growing movement within the community of natural resource managers in The Everglades, Fla., to use NetCDF as the standard data container for datasets based on multidimensional arrays. As a consequence, a need surfaced for additional tools to view and manipulate NetCDF datasets, specifically to filter the files by creating subsets of large NetCDF files. The U.S. Geological Survey (USGS) and the Joint Ecosystem Modeling (JEM) group are working to address these needs with applications like the EverVIEW Slice and Dice Tool, which allows users to filter grid-based NetCDF files, thus targeting those data most important to them. The major functions of this tool are as follows: (1) to create subsets of NetCDF files temporally, spatially, and by data value; (2) to view the NetCDF data in table form; and (3) to export the filtered data to a comma-separated value (CSV) file format. The USGS and JEM will continue to work with scientists and natural resource managers across The Everglades to solve complex restoration problems through technological advances.
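As a concrete illustration of this kind of filtering, the following Python sketch subsets a grid-based NetCDF file temporally, spatially, and by value and writes the result to CSV. It uses the netCDF4 library rather than the EverVIEW tool itself, and the file name, variable names, and thresholds are hypothetical placeholders.

```python
# Minimal sketch: subset a grid-based NetCDF file in time/space/value and
# export the result to CSV, similar in spirit to the EverVIEW Slice and Dice
# Tool. File and variable names ("stage.nc", "stage", "time", "lat", "lon")
# are hypothetical placeholders.
import csv
import numpy as np
from netCDF4 import Dataset

with Dataset("stage.nc") as nc:
    time = nc.variables["time"][:]
    lat = nc.variables["lat"][:]
    lon = nc.variables["lon"][:]
    data = nc.variables["stage"][:]           # dimensions: (time, lat, lon)

# Temporal, spatial, and value filters.
t_idx = np.where(time < 100)[0]               # first 100 time units
la_idx = np.where((lat >= 25.0) & (lat <= 26.0))[0]
lo_idx = np.where((lon >= -81.0) & (lon <= -80.0))[0]
subset = data[np.ix_(t_idx, la_idx, lo_idx)]
subset = np.ma.masked_where(subset < 0.0, subset)   # keep only values >= 0

# Export the filtered cells as rows of (time, lat, lon, value).
with open("subset.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time", "lat", "lon", "stage"])
    for i, ti in enumerate(t_idx):
        for j, lj in enumerate(la_idx):
            for k, lk in enumerate(lo_idx):
                if not np.ma.is_masked(subset[i, j, k]):
                    writer.writerow([time[ti], lat[lj], lon[lk], subset[i, j, k]])
```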
Free-form machining for micro-imaging systems
NASA Astrophysics Data System (ADS)
Barkman, Michael L.; Dutterer, Brian S.; Davies, Matthew A.; Suleski, Thomas J.
2008-02-01
While mechanical ruling and single point diamond turning have been a mainstay of optical fabrication for many years, many types of micro-optical devices and structures, such as microlens arrays and optical surfaces with non-radial symmetry, are not conducive to simple diamond turning or ruling. More recent developments in machining technology have enabled significant expansion of fabrication capabilities. Modern machine tools can generate complex three-dimensional structures with optical-quality surface finish and fabricate structures across a dynamic range of dimensions not achievable with lithographic techniques. In particular, five-axis free-form micromachining offers a great deal of promise for the realization of essentially arbitrary surface structures, including surfaces not realizable through binary or analog lithographic techniques. Furthermore, these machines can generate geometric features with optical finish on scales ranging from centimeters to micrometers with accuracies of tens of nanometers. In this paper, we discuss techniques and applications of free-form surface machining of micro-optical elements. Aspects of diamond machine tool design to realize desired surface geometries in specific materials are discussed. Examples are presented, including fabrication of aspheric lens arrays in germanium for compact infrared imaging systems. Using special custom kinematic mounting equipment and the additional axes of the machine, the lenses were turned with a surface finish better than 2 nm RMS and a center-to-center positioning accuracy of +/-0.5 μm.
Defect Detectability Improvement for Conventional Friction Stir Welds
NASA Technical Reports Server (NTRS)
Hill, Chris
2013-01-01
This research was conducted to evaluate defect detectability via phased array ultrasound technology in conventional friction stir welds by comparing conventionally prepared post-weld surfaces to a machined surface finish. A machined surface is hypothesized to improve defect detectability and increase material strength.
Molecular Machine-Based Active Plasmonics
2011-07-21
Radio Frequency Interference Detection using Machine Learning.
NASA Astrophysics Data System (ADS)
Mosiane, Olorato; Oozeer, Nadeem; Aniyan, Arun; Bassett, Bruce A.
2017-05-01
Radio frequency interference (RFI) has plagued radio astronomy and could become as bad or worse by the time the Square Kilometre Array (SKA) comes online. RFI can be either internal (generated by instruments) or external, originating from intentional or unintentional man-made radio emission. With the huge amount of data that will be available from upcoming radio telescopes, an automated approach will be required to detect RFI. In this paper, to help automate this process, we present the results of applying machine learning techniques to cross-match RFI from Karoo Array Telescope (KAT-7) data. We found that not all the features selected to characterize RFI are always important. We further investigated three machine learning techniques and conclude that the random forest classifier performs best, with a 98% area under the curve and 91% recall in detecting RFI.
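The following is a minimal sketch of the classification step described above, using scikit-learn's random forest on synthetic features; the feature set, labels, and data source are placeholders rather than the KAT-7 pipeline. The feature-importance output mirrors the observation that not all features matter equally.

```python
# Illustrative sketch (not the authors' pipeline): train a random forest to
# flag RFI-contaminated samples from a feature table and report AUC/recall.
# Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))                    # e.g. amplitude, kurtosis, ...
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)   # synthetic RFI labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
print("AUC   :", roc_auc_score(y_te, proba))
print("Recall:", recall_score(y_te, clf.predict(X_te)))
print("Feature importances:", clf.feature_importances_)  # not all features matter
```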
Simplify and Accelerate Earth Science Data Preparation to Systemize Machine Learning
NASA Astrophysics Data System (ADS)
Kuo, K. S.; Rilee, M. L.; Oloso, A.
2017-12-01
Data preparation is the most laborious and time-consuming part of machine learning. The effort required is usually more than linearly proportional to the varieties of data used. From a system science viewpoint, useful machine learning in Earth Science likely involves diverse datasets. Thus, simplifying data preparation to ease the systemization of machine learning in Earth Science is of immense value. The technologies we have developed and applied to an array database, SciDB, are explicitly designed for the purpose, including the innovative SpatioTemporal Adaptive-Resolution Encoding (STARE), a remapping tool suite, and an efficient implementation of connected component labeling (CCL). STARE serves as a universal Earth data representation that homogenizes data varieties and facilitates spatiotemporal data placement as well as alignment, to maximize query performance on massively parallel, distributed computing resources for a major class of analysis. Moreover, it converts spatiotemporal set operations into fast and efficient integer interval operations, supporting in turn moving-object analysis. Integrative analysis requires more than overlapping spatiotemporal sets. For example, meaningful comparison of temperature fields obtained with different means and resolutions requires their transformation to the same grid. Therefore, remapping has been implemented to enable integrative analysis. Finally, Earth Science investigations are generally studies of phenomena, e.g. tropical cyclone, atmospheric river, and blizzard, through their associated events, like hurricanes Katrina and Sandy. Unfortunately, except for a few high-impact phenomena, comprehensive episodic records are lacking. Consequently, we have implemented an efficient CCL tracking algorithm, enabling event-based investigations within climate data records beyond mere event presence. In summary, we have implemented the core unifying capabilities on a Big Data technology to enable systematic machine learning in Earth Science.
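For the event-detection step, a connected component labeling example in Python (using scipy.ndimage on a toy 2-D field) conveys the idea; the STARE/SciDB implementation referenced above is not reproduced here, and the field and threshold are synthetic.

```python
# A minimal sketch of event detection by connected-component labeling (CCL)
# on a toy 2-D field; the STARE/SciDB implementation itself is not shown.
import numpy as np
from scipy import ndimage

precip = np.random.default_rng(1).gamma(2.0, 2.0, size=(100, 100))  # toy field
mask = precip > 8.0                      # threshold defining "event" cells

labels, n_events = ndimage.label(mask)   # connected components
sizes = ndimage.sum(mask, labels, index=range(1, n_events + 1))
print(f"{n_events} candidate events; largest covers {int(sizes.max())} cells")
```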
An Effective Mechanism for Virtual Machine Placement using Aco in IAAS Cloud
NASA Astrophysics Data System (ADS)
Shenbaga Moorthy, Rajalakshmi; Fareentaj, U.; Divya, T. K.
2017-08-01
Cloud computing provides an effective way to dynamically provision numerous resources to meet customer demands. A major challenge for cloud providers is designing efficient mechanisms for optimal virtual machine placement (OVMP). Such mechanisms enable cloud providers to utilize their available resources effectively and obtain higher profits. In order to provide appropriate resources to clients, an optimal virtual machine placement algorithm is proposed. Virtual machine placement is an NP-hard problem, which can be addressed with heuristic algorithms. In this paper, ant colony optimization based virtual machine placement is proposed. The proposed system focuses on minimizing the cost of each plan for hosting virtual machines in a multiple cloud provider environment, and the response time of each cloud provider is monitored periodically so as to minimize the delay in providing resources to users. The performance of the proposed algorithm is compared with a greedy mechanism. The proposed algorithm is simulated in the Eclipse IDE. The results clearly show that the proposed algorithm minimizes cost, response time, and the number of migrations.
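A simplified ant colony optimization sketch for VM placement is shown below. It is not the authors' exact algorithm: it only minimizes hosting cost under CPU capacity constraints, and the demands, capacities, prices, and ACO parameters are illustrative.

```python
# Simplified ant-colony sketch for virtual machine placement (not the
# authors' exact algorithm): ants assign each VM to a feasible host, and
# pheromone rewards low-cost assignments. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_vms, n_hosts = 12, 4
vm_cpu = rng.integers(1, 5, n_vms)          # CPU demand of each VM
host_cpu = np.full(n_hosts, 16)             # CPU capacity of each host
host_cost = np.array([1.0, 1.2, 0.8, 1.5])  # cost per CPU unit on each host

tau = np.ones((n_vms, n_hosts))             # pheromone trails
alpha, beta, rho, n_ants, n_iter = 1.0, 2.0, 0.1, 20, 50
eta = 1.0 / host_cost                       # heuristic: prefer cheap hosts

best_cost, best_plan = np.inf, None
for _ in range(n_iter):
    for _ant in range(n_ants):
        free = host_cpu.astype(float)
        plan, cost = np.empty(n_vms, dtype=int), 0.0
        for v in range(n_vms):
            feasible = free >= vm_cpu[v]
            if not feasible.any():
                cost = np.inf
                break
            p = (tau[v] ** alpha) * (eta ** beta) * feasible
            p /= p.sum()
            h = rng.choice(n_hosts, p=p)
            plan[v], free[h] = h, free[h] - vm_cpu[v]
            cost += vm_cpu[v] * host_cost[h]
        if cost < best_cost:
            best_cost, best_plan = cost, plan.copy()
    tau *= (1.0 - rho)                       # evaporation
    if best_plan is not None:
        tau[np.arange(n_vms), best_plan] += 1.0 / best_cost   # reinforce best

print("best placement:", best_plan, "cost:", round(best_cost, 2))
```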
Halbach arrays in precision motion control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trumper, D.L.; Williams, M.E.
1995-02-01
The Halbach array was developed for use as an optical element in particle accelerators. Following up on a suggestion from Klaus Halbach, the authors have investigated the utility of such arrays as the permanent magnet structure for synchronous machines in cartesian, polar, and cylindrical geometries. Their work has focused on the design of a novel Halbach array linear motor for use in a magnetic suspension stage for photolithography. This paper presents the details of the motor design and its force and power characteristics.
Fracture Tests of Etched Components Using a Focused Ion Beam Machine
NASA Technical Reports Server (NTRS)
Kuhn, Jonathan, L.; Fettig, Rainer K.; Moseley, S. Harvey; Kutyrev, Alexander S.; Orloff, Jon; Powers, Edward I. (Technical Monitor)
2000-01-01
Many optical MEMS device designs involve large arrays of thin (0.5 to 1 micron) components subjected to high stresses due to cyclic loading. These devices are fabricated from a variety of materials, and the properties depend strongly on size and processing. Our objective is to develop standard and convenient test methods that can be used to measure the properties of large numbers of witness samples for every device we build. In this work we explore a variety of fracture test configurations for 0.5 micron thick silicon nitride membranes machined using the reactive ion etching (RIE) process. Testing was completed using an FEI 620 dual focused ion beam milling machine. Static loads were applied using a probe, and dynamic loads were applied through a piezoelectric stack mounted at the base of the probe. Results from the tests are presented and compared, and applications for predicting the fracture probability of large arrays of devices are considered.
NASA Technical Reports Server (NTRS)
Muellerschoen, R. J.
1988-01-01
A unified method to permute vector stored Upper triangular Diagonal factorized covariance and vector stored upper triangular Square Root Information arrays is presented. The method involves cyclic permutation of the rows and columns of the arrays and retriangularization with fast (slow) Givens rotations (reflections). Minimal computation is performed, and a one dimensional scratch array is required. To make the method efficient for large arrays on a virtual memory machine, computations are arranged so as to avoid expensive paging faults. This method is potentially important for processing large volumes of radio metric data in the Deep Space Network.
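A small NumPy sketch of the underlying idea follows: permuting the state ordering of an upper-triangular square-root information array permutes its columns, after which an orthogonal transformation restores triangularity. A QR factorization stands in here for the Givens-rotation retriangularization described above, and the array itself is randomly generated.

```python
# Sketch of the permutation/retriangularization idea (not the authors' code).
import numpy as np

rng = np.random.default_rng(0)
n = 5
R = np.linalg.qr(rng.normal(size=(n, n)))[1]   # some upper-triangular SRIF array

perm = np.roll(np.arange(n), -1)        # cyclic permutation of the states
R_perm = R[:, perm]                     # columns permuted -> no longer triangular

Q, R_new = np.linalg.qr(R_perm)         # retriangularize (orthogonal transform)

# The permuted information matrix is preserved (up to round-off):
assert np.allclose(R_perm.T @ R_perm, R_new.T @ R_new)
print(np.allclose(np.triu(R_new), R_new))   # True: triangular again
```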
NASA Technical Reports Server (NTRS)
Berg, Melanie D.; Label, Kenneth A.; Kim, Hak; Phan, Anthony; Seidleck, Christina
2014-01-01
Finite state machines (FSMs) are used to control operational flow in application-specific integrated circuits (ASICs) and field programmable gate array (FPGA) devices. Because of their ease of interpretation, FSMs simplify the design and verification process and consequently are significant components in a synchronous design.
Large-scale fabrication of micro-lens array by novel end-fly-cutting-servo diamond machining.
Zhu, Zhiwei; To, Suet; Zhang, Shaojian
2015-08-10
Fast/slow tool servo (FTS/STS) diamond turning is a very promising technique for the generation of micro-lens arrays (MLA). However, it remains a challenge to process MLAs on a large scale due to certain inherent limitations of this technique. In the present study, a novel ultra-precision diamond cutting method, the end-fly-cutting-servo (EFCS) system, is adopted and investigated for large-scale generation of MLAs. After a detailed discussion of its characteristic advantages for processing MLAs, the optimal toolpath generation strategy for the EFCS is developed with consideration of the geometry and installation pose of the diamond tool. A typical aspheric MLA over a large area is experimentally fabricated, and the resulting form accuracy, surface micro-topography and machining efficiency are critically investigated. The results indicate that an MLA with homogeneous quality over the whole area is obtained. Besides, high machining efficiency, an extremely small number of toolpath control points, and optimal usage of the machine tool's system dynamics during cutting are achieved simultaneously.
Halbach array DC motor/generator
Merritt, B.T.; Dreifuerst, G.R.; Post, R.F.
1998-01-06
A new configuration of DC motor/generator is based on a Halbach array of permanent magnets. This motor does not use ferrous materials so that the only losses are winding losses and losses due to bearings and windage. An "inside-out" design is used as compared to a conventional motor/generator design. The rotating portion, i.e., the rotor, is on the outside of the machine. The stationary portion, i.e., the stator, is formed by the inside of the machine. The rotor contains an array of permanent magnets that provide a uniform field. The windings of the motor are placed in or on the stator. The stator windings are then "switched" or "commutated" to provide a DC motor/generator much the same as in a conventional DC motor. The commutation can be performed by mechanical means using brushes or by electronic means using switching circuits. The invention is useful in electric vehicles and adjustable speed DC drives. 17 figs.
Halbach array DC motor/generator
Merritt, Bernard T.; Dreifuerst, Gary R.; Post, Richard F.
1998-01-01
A new configuration of DC motor/generator is based on a Halbach array of permanent magnets. This motor does not use ferrous materials so that the only losses are winding losses and losses due to bearings and windage. An "inside-out" design is used as compared to a conventional motor/generator design. The rotating portion, i.e., the rotor, is on the outside of the machine. The stationary portion, i.e., the stator, is formed by the inside of the machine. The rotor contains an array of permanent magnets that provide a uniform field. The windings of the motor are placed in or on the stator. The stator windings are then "switched" or "commutated" to provide a DC motor/generator much the same as in a conventional DC motor. The commutation can be performed by mechanical means using brushes or by electronic means using switching circuits. The invention is useful in electric vehicles and adjustable speed DC drives.
NASA Astrophysics Data System (ADS)
Cogoljević, Dušan; Alizamir, Meysam; Piljan, Ivan; Piljan, Tatjana; Prljić, Katarina; Zimonjić, Stefan
2018-04-01
The linkage between energy resources and economic development is a topic of great interest. Research in this area is also motivated by contemporary concerns about global climate change, carbon emissions, fluctuating crude oil prices, and the security of energy supply. The purpose of this research is to develop and apply a machine learning approach to predict gross domestic product (GDP) based on the mix of energy resources. Our results indicate that GDP predictive accuracy can be improved slightly by applying a machine learning approach.
Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean
2017-12-04
Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint-type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many pharmaceutical applications, from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction for many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets that is applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole cell screens, individual proteins, physicochemical properties, as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient, and others. Based on ranked normalized scores for the metrics or data sets, deep neural networks (DNN) ranked higher than SVM, which in turn was ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar-type plots indicates when models are inferior or perhaps overtrained. These results also suggest the need for assessing deep learning further using multiple metrics with much larger scale comparisons, prospective testing, as well as assessment of different fingerprints and DNN architectures beyond those used.
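A hedged sketch of such a multi-metric comparison is shown below using scikit-learn on synthetic binary "fingerprints"; the models, descriptors, and data are placeholders, not the FCFP6 data sets of the study.

```python
# Illustrative multi-metric model comparison on synthetic fingerprints.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (roc_auc_score, f1_score, cohen_kappa_score,
                             matthews_corrcoef)

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 256)).astype(float)   # binary "fingerprints"
y = (X[:, :16].sum(axis=1) > 8).astype(int)              # synthetic end point
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": SVC(probability=True, random_state=0),
    "DNN": MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500,
                         random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]
    print(name, dict(AUC=round(roc_auc_score(y_te, proba), 3),
                     F1=round(f1_score(y_te, pred), 3),
                     kappa=round(cohen_kappa_score(y_te, pred), 3),
                     MCC=round(matthews_corrcoef(y_te, pred), 3)))
```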
Source localization in an ocean waveguide using supervised machine learning.
Niu, Haiqiang; Reeves, Emma; Gerstoft, Peter
2017-09-01
Source localization in ocean acoustics is posed as a machine learning problem in which data-driven methods learn source ranges directly from observed acoustic data. The pressure received by a vertical linear array is preprocessed by constructing a normalized sample covariance matrix and used as the input for three machine learning methods: feed-forward neural networks (FNN), support vector machines (SVM), and random forests (RF). The range estimation problem is solved both as a classification problem and as a regression problem by these three machine learning algorithms. The results of range estimation for the Noise09 experiment are compared for FNN, SVM, RF, and conventional matched-field processing and demonstrate the potential of machine learning for underwater source localization.
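The preprocessing step can be illustrated with a short NumPy sketch that forms a normalized sample covariance matrix from array snapshots and flattens it into a real-valued feature vector; the acoustic data below are synthetic placeholders, and the classifier itself is omitted.

```python
# Sketch of the normalized sample covariance matrix (SCM) preprocessing.
import numpy as np

def scm_features(snapshots):
    """snapshots: complex array of shape (n_sensors, n_snapshots)."""
    norms = np.linalg.norm(snapshots, axis=0, keepdims=True)
    p = snapshots / norms                           # unit-norm snapshots
    C = (p @ p.conj().T) / snapshots.shape[1]       # n_sensors x n_sensors SCM
    iu = np.triu_indices(C.shape[0])                # Hermitian: keep upper triangle
    return np.concatenate([C[iu].real, C[iu].imag]) # real feature vector

rng = np.random.default_rng(0)
snap = rng.normal(size=(16, 50)) + 1j * rng.normal(size=(16, 50))
x = scm_features(snap)
print(x.shape)    # feature vector fed to an FNN/SVM/RF range estimator
```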
Translations on USSR Resources, Number 767.
1978-01-19
An element search ant colony technique for solving virtual machine placement problem
NASA Astrophysics Data System (ADS)
Srija, J.; Rani John, Rose; Kanaga, Grace Mary, Dr.
2017-09-01
The data centres in the cloud environment play a key role in providing infrastructure for ubiquitous computing, pervasive computing, mobile computing, etc. These computing paradigms aim to utilize the available resources in order to provide services. Hence, maintaining resource utilization without wasting power has become a challenging task for researchers. In this paper we propose a direct guidance ant colony system for effective mapping of virtual machines to physical machines with maximal resource utilization and minimal power consumption. The proposed algorithm has been compared with an existing ant colony approach for the virtual machine placement problem, and it provides better results than the existing technique.
Condition monitoring of Electric Components
NASA Astrophysics Data System (ADS)
Zaman, Ishtiaque
A universal non-intrusive model of a flexible antenna array is presented in this paper to monitor and identify failures in electric machines. This adjustable antenna is designed for condition monitoring of a wide range of electrical components, including induction motors (IM), printed circuit boards (PCB), synchronous reluctance motors (SRM), and permanent magnet synchronous machines (PMSM), by capturing the low-frequency magnetic field radiated around these machines. The basic design and specification of the proposed antenna array for low-frequency components is presented first; the design is adjustable to fit a wide variety of components. After defining the design and specifications of the antenna, the optimal location of the most sensitive stray field is identified for healthy current flowing around the machinery. Short circuits representing faulty conditions are then introduced and compared with the healthy cases. The faults are recognized accurately using this single generic antenna model, and results are presented for three different machines, i.e. IM, SRM and PMSM. The finite element method is used to design the antenna and to determine the optimum location and the faults in the machines. Finally, a 3D printer is proposed to build the antenna according to the specifications addressed in this paper, depending on the power components.
Self-assembling fluidic machines
NASA Astrophysics Data System (ADS)
Grzybowski, Bartosz A.; Radkowski, Michal; Campbell, Christopher J.; Lee, Jessamine Ng; Whitesides, George M.
2004-03-01
This letter describes dynamic self-assembly of two-component rotors floating at the interface between liquid and air into simple, reconfigurable mechanical systems ("machines"). The rotors are powered by an external, rotating magnetic field, and their positions within the interface are controlled by: (i) repulsive hydrodynamic interactions between them and (ii) by localized magnetic fields produced by an array of small electromagnets located below the plane of the interface. The mechanical functions of the machines depend on the spatiotemporal sequence of activation of the electromagnets.
Experimental Realization of a Quantum Support Vector Machine
NASA Astrophysics Data System (ADS)
Li, Zhaokai; Liu, Xiaomei; Xu, Nanyang; Du, Jiangfeng
2015-04-01
The fundamental principle of artificial intelligence is the ability of machines to learn from previous experience and do future work accordingly. In the age of big data, classical learning machines often require huge computational resources in many practical cases. Quantum machine learning algorithms, on the other hand, could be exponentially faster than their classical counterparts by utilizing quantum parallelism. Here, we demonstrate a quantum machine learning algorithm to implement handwriting recognition on a four-qubit NMR test bench. The quantum machine learns standard character fonts and then recognizes handwritten characters from a set with two candidates. Because of the widespread importance of artificial intelligence and its tremendous consumption of computational resources, quantum speedup would be extremely attractive against the challenges of big data.
Execution time supports for adaptive scientific algorithms on distributed memory machines
NASA Technical Reports Server (NTRS)
Berryman, Harry; Saltz, Joel; Scroggs, Jeffrey
1990-01-01
Optimizations are considered that are required for efficient execution of code segments that consist of loops over distributed data structures. The PARTI (Parallel Automated Runtime Toolkit at ICASE) execution time primitives are designed to carry out these optimizations and can be used to implement a wide range of scientific algorithms on distributed memory machines. These primitives allow the user to control array mappings in a way that gives an appearance of shared memory. Computations can be based on a global index set. Primitives are used to carry out gather and scatter operations on distributed arrays. Communication patterns are derived at runtime, and the appropriate send and receive messages are automatically generated.
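The gather/scatter idea can be illustrated with a small, non-PARTI sketch in which a runtime-derived schedule maps global indices onto the local pieces of a block-distributed array; two "processors" are simulated with NumPy arrays rather than message passing.

```python
# Illustrative (non-PARTI) sketch of gather/scatter over a block-distributed
# array; the distribution and index set are toy examples.
import numpy as np

global_array = np.arange(20.0)
local = np.array_split(global_array, 2)                      # block split on 2 PEs
owner = np.repeat([0, 1], [len(local[0]), len(local[1])])    # owning PE per index
offset = np.concatenate([np.arange(len(p)) for p in local])  # local offset per index

def gather(global_idx):
    """Fetch possibly off-processor elements named by a global index set."""
    out = np.empty(len(global_idx))
    for pe in range(len(local)):
        sel = owner[global_idx] == pe            # schedule: what to get from pe
        out[sel] = local[pe][offset[global_idx[sel]]]
    return out

def scatter_add(global_idx, values):
    """Accumulate values back into the owners' local pieces."""
    for pe in range(len(local)):
        sel = owner[global_idx] == pe
        np.add.at(local[pe], offset[global_idx[sel]], values[sel])

idx = np.array([3, 17, 5, 12])                   # global indices used by a loop
vals = gather(idx)                               # appears like shared memory
scatter_add(idx, 0.1 * vals)
print(vals, local[0][3], local[1][7])
```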
Execution time support for scientific programs on distributed memory machines
NASA Technical Reports Server (NTRS)
Berryman, Harry; Saltz, Joel; Scroggs, Jeffrey
1990-01-01
Optimizations are considered that are required for efficient execution of code segments that consist of loops over distributed data structures. The PARTI (Parallel Automated Runtime Toolkit at ICASE) execution time primitives are designed to carry out these optimizations and can be used to implement a wide range of scientific algorithms on distributed memory machines. These primitives allow the user to control array mappings in a way that gives an appearance of shared memory. Computations can be based on a global index set. Primitives are used to carry out gather and scatter operations on distributed arrays. Communication patterns are derived at runtime, and the appropriate send and receive messages are automatically generated.
Bonding machine for forming a solar array strip
NASA Technical Reports Server (NTRS)
Costogue, E. N.; Downing, R. G.; Middleton, O.; Mueller, R. L.; Yasui, R. K.; Cairo, F. J.; Person, J. K. (Inventor)
1979-01-01
A machine is described for attaching solar cells to a flexible substrate on which printed circuitry has been deposited. The strip is fed through: (1) a station in which solar cells are elevated into engagement with solder pads for the printed circuitry and thereafter heated by an infrared lamp; (2) a station at which flux and solder residue are removed; (3) a station at which electrical performance of the soldered cells is determined; (4) a station at which an encapsulating resin is deposited on the cells; (5) a station at which the encapsulated solar cells are examined for electrical performance; and (6) a final station at which the resulting array is wound on a takeup drum.
30 CFR 75.1719-4 - Mining machines, cap lamps; requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Mining machines, cap lamps; requirements. 75... Mining machines, cap lamps; requirements. (a) Paint used on exterior surfaces of mining machines shall... frames or reflecting tape shall be installed on each end of mining machines, except that continuous...
30 CFR 75.1719-4 - Mining machines, cap lamps; requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Mining machines, cap lamps; requirements. 75... Mining machines, cap lamps; requirements. (a) Paint used on exterior surfaces of mining machines shall... frames or reflecting tape shall be installed on each end of mining machines, except that continuous...
Use of microsecond current prepulse for dramatic improvements of wire array Z-pinch implosion
NASA Astrophysics Data System (ADS)
Calamy, H.; Lassalle, F.; Loyen, A.; Zucchini, F.; Chittenden, J. P.; Hamann, F.; Maury, P.; Georges, A.; Bedoch, J. P.; Morell, A.
2008-01-01
The Sphinx machine [F. Lassalle et al., "Status on the SPHINX machine based on the 1-microsecond LTD technology"], based on microsecond linear transformer driver (LTD) technology, is used to implode an aluminium wire array with an outer diameter up to 140 mm and a maximum current from 3.5 to 5 MA. Implosion Z-pinch experiments of 700 to 800 ns are performed on this driver, essentially with aluminium. The best results obtained before the improvement described in this paper were 1-3 TW radial total power, 100-300 kJ total yield, and 20-30 kJ energy above 1 keV. An auxiliary generator was added to the Sphinx machine in order to allow a multi-microsecond current to be injected through the wire array load before the start of the main current. The amplitude and duration of this current prepulse are adjustable, with maxima of ~10 kA and 50 μs. This prepulse dramatically changes the ablation phase, leading to an improvement of the axial homogeneity of both the implosion and the final radiating column. Total power was multiplied by a factor of 6 and total yield by a factor of 2.5, with reproducible behavior. This paper presents experimental results, magnetohydrodynamic simulations, and analysis of the effect of such a long current prepulse.
Freeform diamond machining of complex monolithic metal optics for integral field systems
NASA Astrophysics Data System (ADS)
Dubbeldam, Cornelis M.; Robertson, David J.; Preuss, Werner
2004-09-01
Implementation of the optical designs of image slicing Integral Field Systems requires accurate alignment of a large number of small (and therefore difficult to manipulate) optical components. In order to facilitate the integration of these complex systems, the Astronomical Instrumentation Group (AIG) of the University of Durham, in collaboration with the Labor für Mikrozerspanung (Laboratory for Precision Machining - LFM) of the University of Bremen, have developed a technique for fabricating monolithic multi-faceted mirror arrays using freeform diamond machining. Using this technique, the inherent accuracy of the diamond machining equipment is exploited to achieve the required relative alignment accuracy of the facets, as well as an excellent optical surface quality for each individual facet. Monolithic arrays manufactured using this freeform diamond machining technique were successfully applied in the Integral Field Unit for the GEMINI Near-InfraRed Spectrograph (GNIRS IFU), which was recently installed at GEMINI South. Details of their fabrication process and optical performance are presented in this paper. In addition, the direction of current development work, conducted under the auspices of the Durham Instrumentation R&D Program supported by the UK Particle Physics and Astronomy Research Council (PPARC), will be discussed. The main emphasis of this research is to improve further the optical performance of diamond machined components, as well as to streamline the production and quality control processes with a view to making this technique suitable for multi-IFU instruments such as KMOS etc., which require series production of large quantities of optical components.
An imperialist competitive algorithm for virtual machine placement in cloud computing
NASA Astrophysics Data System (ADS)
Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza
2017-05-01
Cloud computing, the recently emerged revolution in IT industry, is empowered by virtualisation technology. In this paradigm, the user's applications run over some virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement. It plays an important role on resource utilisation and power efficiency of cloud computing environment. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem called ICA-VMPLC. The base optimisation algorithm is chosen to be ICA because of its ease in neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates search space in a unique manner to efficiently obtain optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution performance is compared with several existing methods such as grouping genetic and ant colony-based algorithms as well as bin packing heuristic. The simulation results show that the proposed method is superior to other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.
Ship localization in Santa Barbara Channel using machine learning classifiers.
Niu, Haiqiang; Ozanich, Emma; Gerstoft, Peter
2017-11-01
Machine learning classifiers are shown to outperform conventional matched field processing for a deep water (600 m depth) ocean acoustic-based ship range estimation problem in the Santa Barbara Channel Experiment when limited environmental information is known. Recordings of three different ships of opportunity on a vertical array were used as training and test data for the feed-forward neural network and support vector machine classifiers, demonstrating the feasibility of machine learning methods to locate unseen sources. The classifiers perform well up to 10 km range whereas the conventional matched field processing fails at about 4 km range without accurate environmental information.
NASA Technical Reports Server (NTRS)
Sadowy, Gregory; Tanelli, Simone; Chamberlain, Neil; Durden, Stephen; Fung, Andy; Sanchez-Barbetty, Mauricio; Thrivikraman, Tushar
2013-01-01
The National Research Council's Earth Science Decadal Survey (NRCDS) has identified the Aerosol/Climate/Ecosystems (ACE) Mission as a priority mission for NASA Earth science. The NRC recommended the inclusion of "a cross-track scanning cloud radar with channels at 94 GHz and possibly 34 GHz for measurement of cloud droplet size, glaciation height, and cloud height". Several radar concepts have been proposed that meet some of the requirements of the proposed ACE mission, but none have provided scanning capability at both 34 and 94 GHz due to the challenge of constructing scanning antennas at 94 GHz. In this paper, we describe a radar design that leverages new developments in monolithic microwave integrated circuits (MMICs) and micro-machining to enable an electronically scanned radar with both Ka-band (35 GHz) and W-band (94 GHz) channels. This system uses a dual-frequency linear active electronically steered array (AESA) combined with a parabolic cylindrical reflector. This configuration provides a large aperture (3 m x 5 m) with electronic steering but is much simpler than a two-dimensional AESA of similar size. Still, the W-band frequency requires element spacing of approximately 2.5 mm, presenting significant challenges for signal routing and incorporation of MMICs. By combining gallium nitride (GaN) MMIC technology with micro-machined radiators and interconnects and silicon-germanium (SiGe) beamforming MMICs, we are able to meet all the performance and packaging requirements of the linear array feed and enable simultaneous scanning of Ka-band and W-band radars over a swath of up to 100 km.
Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems
NASA Technical Reports Server (NTRS)
Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan
2010-01-01
A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the statistical nature of signal (i.e., speech) and noise that exist in the spatial and temporal domains. As a result, the automatic speech recognition (ASR) accuracy can be improved to the level at which crewmembers would find the speech interface useful. The developed speech human/machine interface will enable both crewmember usability and operational efficiency; it offers a fast rate of data/text entry and a small, lightweight form, and it frees the hands and eyes of a suited crewmember. The system components and steps include beamforming/multichannel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, ASR HMM (Hidden Markov Model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise. When it is used in spacesuits, the rate drops to about 33 percent. With the developed microphone array speech-processing technologies, the performance is improved and the phoneme recognition accuracy rate rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and by using speech samples collected from inside spacesuits. In addition, arithmetic complexity models for the major HMM-based ASR components were developed; they can help real-time ASR system designers select proper tasks when faced with constraints in computational resources.
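As a toy illustration of the multichannel front end, the following delay-and-sum beamformer aligns and averages two simulated microphone signals; the geometry, sample rate, and signals are assumptions, and the actual spacesuit system uses more elaborate multichannel noise reduction.

```python
# Toy delay-and-sum beamformer on a simulated two-microphone array.
import numpy as np

fs = 16000                                  # sample rate (Hz)
c = 343.0                                   # speed of sound (m/s)
d = 0.05                                    # microphone spacing (m)
t = np.arange(0, 0.1, 1 / fs)
speech = np.sin(2 * np.pi * 300 * t)        # stand-in for a speech signal

theta = np.deg2rad(30)                      # assumed direction of arrival
delay = d * np.sin(theta) / c               # inter-microphone delay (s)
shift = int(round(delay * fs))              # delay in samples

rng = np.random.default_rng(0)
mic1 = speech + 0.5 * rng.normal(size=t.size)
mic2 = np.roll(speech, shift) + 0.5 * rng.normal(size=t.size)

aligned = np.roll(mic2, -shift)             # steer toward the source
output = 0.5 * (mic1 + aligned)             # speech adds coherently, noise averages

def snr(sig, ref):
    noise = sig - ref
    return 10 * np.log10(np.sum(ref**2) / np.sum(noise**2))

print("single mic SNR:", round(snr(mic1, speech), 1), "dB")
print("beamformed SNR:", round(snr(output, speech), 1), "dB")
```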
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, E.L.
A novel method for performing real-time acquisition and processing of Landsat/EROS data covers all aspects including radiometric and geometric corrections of multispectral scanner or return-beam vidicon inputs, image enhancement, statistical analysis, feature extraction, and classification. Radiometric transformations include bias/gain adjustment, noise suppression, calibration, scan angle compensation, and illumination compensation, including topography and atmospheric effects. Correction or compensation for geometric distortion includes sensor-related distortions, such as centering, skew, size, scan nonlinearity, radial symmetry, and tangential symmetry. Also included are object image-related distortions such as aspect angle (altitude), scale distortion (altitude), terrain relief, and earth curvature. Ephemeral corrections are also applied to compensate for satellite forward movement, earth rotation, altitude variations, satellite vibration, and mirror scan velocity. Image enhancement includes high-pass, low-pass, and Laplacian mask filtering and data restoration for intermittent losses. Resource classification is provided by statistical analysis including histograms, correlational analysis, matrix manipulations, and determination of spectral responses. Feature extraction includes spatial frequency analysis, which is used in parallel discriminant functions in each array processor for rapid determination. The technique uses integrated parallel array processors that decimate the tasks concurrently under supervision of a control processor. The operator-machine interface is optimized for programming ease and graphics image windowing.
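One of the listed radiometric transformations, bias/gain adjustment with a simple illumination correction, can be sketched in a few lines of Python; the calibration coefficients and sun elevation below are illustrative, not actual Landsat values.

```python
# Sketch of a bias/gain radiometric adjustment with a simple illumination
# correction; coefficients are hypothetical, not real calibration values.
import numpy as np

raw_dn = np.random.default_rng(0).integers(0, 256, size=(512, 512))  # raw band
gain, bias = 0.762, -1.52          # hypothetical band calibration coefficients

radiance = gain * raw_dn.astype(float) + bias          # bias/gain adjustment
sun_elev = np.deg2rad(38.0)                            # illumination compensation
reflectance_like = radiance / np.sin(sun_elev)         # simple illumination scaling

print(radiance.min(), radiance.max(), reflectance_like.mean().round(2))
```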
Intelligent image processing for machine safety
NASA Astrophysics Data System (ADS)
Harvey, Dennis N.
1994-10-01
This paper describes the use of intelligent image processing as a machine guarding technology. One or more color, linear array cameras are positioned to view the critical region(s) around a machine tool or other piece of manufacturing equipment. The image data is processed to provide indicators of conditions dangerous to the equipment via color content, shape content, and motion content. The data from these analyses is then sent to a threat evaluator. The purpose of the evaluator is to determine if a potentially machine-damaging condition exists based on the analyses of color, shape, and motion, and on 'knowledge' of the specific environment of the machine. The threat evaluator employs fuzzy logic as a means of dealing with uncertainty in the vision data.
NASA Astrophysics Data System (ADS)
Paksi, A. B. N.; Ma'ruf, A.
2016-02-01
In general, both machines and human resources are needed for processing a job on the production floor. However, most classical scheduling problems have ignored the possible constraint caused by the availability of workers and have considered only machines as a limited resource. In addition, along with production technology development, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility is caused by the capability of machines to offer more than one machining process. This paper presents a method to address a scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a dual-resource constrained shop is categorized as an NP-hard problem that needs long computational time. A meta-heuristic approach based on a Genetic Algorithm is used due to its practical implementation in industry. The developed Genetic Algorithm uses an indirect chromosome representation and a procedure to transform the chromosome into a Gantt chart. Genetic operators, namely selection, elitism, crossover, and mutation, are developed to search for the best fitness value until a steady-state condition is achieved. A case study in a manufacturing SME is used, with tardiness minimization as the objective function. The algorithm has shown a 25.6% reduction in tardiness, equal to 43.5 hours.
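A heavily simplified genetic algorithm skeleton is sketched below for sequencing jobs to minimize total tardiness on a single resource with a permutation chromosome; the paper's algorithm additionally handles dual resources, routing flexibility, and an indirect chromosome decoded into a Gantt chart, and the job data here are invented.

```python
# Simplified GA skeleton: permutation chromosome, selection, elitism,
# order crossover (OX), and swap mutation, minimizing total tardiness.
import random

random.seed(0)
proc = [4, 3, 7, 2, 5, 6]                 # processing times (hours, invented)
due = [6, 5, 18, 4, 12, 20]               # due dates (invented)

def tardiness(seq):
    t, total = 0, 0
    for j in seq:
        t += proc[j]
        total += max(0, t - due[j])
    return total

def crossover(a, b):                      # order crossover (OX)
    i, k = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:k] = a[i:k]
    rest = [g for g in b if g not in child]
    for p in range(len(a)):
        if child[p] is None:
            child[p] = rest.pop(0)
    return child

pop = [random.sample(range(len(proc)), len(proc)) for _ in range(30)]
for _gen in range(100):
    pop.sort(key=tardiness)
    elite = pop[:5]                       # elitism
    children = []
    while len(children) < len(pop) - len(elite):
        a, b = random.sample(pop[:15], 2) # selection from the better half
        c = crossover(a, b)
        if random.random() < 0.2:         # swap mutation
            i, k = random.sample(range(len(c)), 2)
            c[i], c[k] = c[k], c[i]
        children.append(c)
    pop = elite + children

best = min(pop, key=tardiness)
print("best sequence:", best, "total tardiness:", tardiness(best))
```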
Means and method of balancing multi-cylinder reciprocating machines
Corey, John A.; Walsh, Michael M.
1985-01-01
A virtual balancing axis arrangement is described for multi-cylinder reciprocating piston machines for effectively balancing out imbalanced forces and minimizing residual imbalance moments acting on the crankshaft of such machines without requiring the use of additional parallel-arrayed balancing shafts or complex and expensive gear arrangements. The novel virtual balancing axis arrangement is capable of being designed into multi-cylinder reciprocating piston and crankshaft machines for substantially reducing vibrations induced during operation of such machines with only minimal number of additional component parts. Some of the required component parts may be available from parts already required for operation of auxiliary equipment, such as oil and water pumps used in certain types of reciprocating piston and crankshaft machine so that by appropriate location and dimensioning in accordance with the teachings of the invention, the virtual balancing axis arrangement can be built into the machine at little or no additional cost.
30 CFR 57.14115 - Stationary grinding machines.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Stationary grinding machines. 57.14115 Section... and Equipment Safety Devices and Maintenance Requirements § 57.14115 Stationary grinding machines. Stationary grinding machines, other than special bit grinders, shall be equipped with— (a) Peripheral hoods...
30 CFR 77.401 - Stationary grinding machines; protective devices.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Stationary grinding machines; protective... OF UNDERGROUND COAL MINES Safeguards for Mechanical Equipment § 77.401 Stationary grinding machines; protective devices. (a) Stationary grinding machines other than special bit grinders shall be equipped with...
30 CFR 56.14115 - Stationary grinding machines.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Stationary grinding machines. 56.14115 Section... Equipment Safety Devices and Maintenance Requirements § 56.14115 Stationary grinding machines. Stationary grinding machines, other than special bit grinders, shall be equipped with— (a) Peripheral hoods capable of...
30 CFR 77.401 - Stationary grinding machines; protective devices.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Stationary grinding machines; protective... OF UNDERGROUND COAL MINES Safeguards for Mechanical Equipment § 77.401 Stationary grinding machines; protective devices. (a) Stationary grinding machines other than special bit grinders shall be equipped with...
30 CFR 56.14115 - Stationary grinding machines.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Stationary grinding machines. 56.14115 Section... Equipment Safety Devices and Maintenance Requirements § 56.14115 Stationary grinding machines. Stationary grinding machines, other than special bit grinders, shall be equipped with— (a) Peripheral hoods capable of...
30 CFR 57.14115 - Stationary grinding machines.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Stationary grinding machines. 57.14115 Section... and Equipment Safety Devices and Maintenance Requirements § 57.14115 Stationary grinding machines. Stationary grinding machines, other than special bit grinders, shall be equipped with— (a) Peripheral hoods...
30 CFR 75.1723 - Stationary grinding machines; protective devices.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Stationary grinding machines; protective....1723 Stationary grinding machines; protective devices. (a) Stationary grinding machines other than... the wheel. (3) Safety washers. (b) Grinding wheels shall be operated within the specifications of the...
30 CFR 75.1723 - Stationary grinding machines; protective devices.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Stationary grinding machines; protective....1723 Stationary grinding machines; protective devices. (a) Stationary grinding machines other than... the wheel. (3) Safety washers. (b) Grinding wheels shall be operated within the specifications of the...
NASA Technical Reports Server (NTRS)
Boriakoff, Valentin
1994-01-01
The goal of this project was the feasibility study of a particular architecture of a digital signal processing machine operating in real time which could do in a pipeline fashion the computation of the fast Fourier transform (FFT) of a time-domain sampled complex digital data stream. The particular architecture makes use of simple identical processors (called inner product processors) in a linear organization called a systolic array. Through computer simulation the new architecture to compute the FFT with systolic arrays was proved to be viable, and computed the FFT correctly and with the predicted particulars of operation. Integrated circuits to compute the operations expected of the vital node of the systolic architecture were proven feasible, and even with a 2 micron VLSI technology can execute the required operations in the required time. Actual construction of the integrated circuits was successful in one variant (fixed point) and unsuccessful in the other (floating point).
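The inner-product-processor idea can be illustrated in software: each cell of a linear systolic array accumulates one output bin as the input samples stream past. The sketch below computes a plain DFT in this style (not the radix-decomposed pipelined FFT of the project) and checks the result against NumPy.

```python
# Software sketch of a linear systolic array of inner-product cells
# computing a DFT; checked against NumPy's FFT.
import numpy as np

N = 8
x = np.random.default_rng(0).normal(size=N) + 0j

class InnerProductCell:
    """One systolic cell: accumulates sum_n x[n] * exp(-2*pi*i*k*n/N) for bin k."""
    def __init__(self, k):
        self.k, self.acc, self.n = k, 0j, 0
    def step(self, sample):
        self.acc += sample * np.exp(-2j * np.pi * self.k * self.n / N)
        self.n += 1

cells = [InnerProductCell(k) for k in range(N)]
for sample in x:                       # samples stream through the array
    for cell in cells:
        cell.step(sample)

X = np.array([c.acc for c in cells])
print(np.allclose(X, np.fft.fft(x)))   # True
```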
Principle of Magnetodynamics for Composite Magnetic Pole
NASA Astrophysics Data System (ADS)
Animalu, Alexander
2014-03-01
It is shown in this paper that geometry provides the key to the new magnetodynamics principle of operation of the machine (invented by Dr. Ezekiel Izuogu), which has the unexpected feature of driving a motor with a static magnetic field. Essentially, because an array of like magnetic poles of the machine is arranged in a half-circular array of cylindrical geometry, the array creates a non-pointlike magnetic pole that may be represented by a "magnetic current loop" at the position of the pivot of the movable arm. As a result, in three-dimensional space, it is possible to characterize the symmetry of the stator magnetic field B and the magnetic current loop J as a cube-hexagon system by a 6-vector (J,B) (with J.B ≠ 0) comprising a 4x4 antisymmetric tensor, analogous to the conventional electric and magnetic 6-vector (E,B) (with E.B ≠ 0) comprising the 4x4 antisymmetric tensor of classical electrodynamics. The implications are discussed. Supported by International Centre for Basic Research, Abuja, Nigeria.
Triboelectrification based motion sensor for human-machine interfacing.
Yang, Weiqing; Chen, Jun; Wen, Xiaonan; Jing, Qingshen; Yang, Jin; Su, Yuanjie; Zhu, Guang; Wu, Wenzuo; Wang, Zhong Lin
2014-05-28
We present triboelectrification-based, flexible, reusable, and skin-friendly dry biopotential electrode arrays as motion sensors for tracking muscle motion and human-machine interfacing (HMI). The independently addressable, self-powered sensor arrays have been utilized to record the electric output signals as a mapping figure to accurately identify the degrees of freedom as well as the directions and magnitudes of muscle motions. A fast Fourier transform (FFT) technique was employed to analyse the frequency spectra of the obtained electric signals and thus to determine the motion angular velocities. Moreover, the motion sensor arrays produced a short-circuit current density of up to 10.71 mA/m², and an open-circuit voltage as high as 42.6 V with a remarkable signal-to-noise ratio of up to 1000, which enables the devices, as sensors, to accurately record and transform the motions of human joints such as the elbow, knee, heel, and even fingers, and thus renders them a superior and unique invention in the field of HMI.
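The FFT step can be illustrated with a short sketch that estimates the dominant frequency of a periodic sensor output and converts it to an angular velocity; the signal, sampling rate, and motion frequency are synthetic assumptions.

```python
# Sketch of FFT-based angular-velocity estimation from a periodic signal.
import numpy as np

fs = 1000.0                               # sampling rate (Hz, assumed)
t = np.arange(0, 2.0, 1 / fs)
f_motion = 3.5                            # joint flexing at 3.5 Hz (assumed)
signal = (np.sin(2 * np.pi * f_motion * t)
          + 0.3 * np.random.default_rng(0).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_peak = freqs[np.argmax(spectrum[1:]) + 1]     # skip the DC bin
print("peak frequency:", round(f_peak, 2), "Hz",
      "-> angular velocity:", round(2 * np.pi * f_peak, 2), "rad/s")
```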
NASA Astrophysics Data System (ADS)
Sökmen, Ü.; Stranz, A.; Waag, A.; Ababneh, A.; Seidel, H.; Schmid, U.; Peiner, E.
2010-06-01
We report on a micro-machined resonator for mass-sensing applications, based on a silicon cantilever excited with a sputter-deposited piezoelectric aluminium nitride (AlN) thin-film actuator. An inductively coupled plasma (ICP) cryogenic dry etching process was applied for the micro-machining of the silicon substrate. A shift in resonance frequency was observed that was proportional to the mass deposited on top in an e-beam evaporation process. A mass-sensing limit of 5.2 ng was achieved. The measurements from the cantilevers of the two arrays revealed a quality factor of 155-298 and a mass sensitivity of 120.34 ng/Hz for the first array, and a quality factor of 130-137 and a mass sensitivity of 104.38 ng/Hz for the second array. Furthermore, we fabricated silicon cantilevers that can be improved toward detection in the picogram range by reducing their geometrical dimensions.
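Taking the reported sensitivities at face value, the deposited mass can be estimated from the measured frequency shift as delta_m = S * delta_f; the shift used below is a hypothetical example, not a measurement from the paper.

```python
# Back-of-the-envelope use of the reported sensitivities (ng per Hz of shift).
S_array1 = 120.34      # ng/Hz, first array (from the abstract)
S_array2 = 104.38      # ng/Hz, second array (from the abstract)
delta_f = 0.5          # Hz, hypothetical measured resonance shift

for S in (S_array1, S_array2):
    print(f"S = {S} ng/Hz -> estimated added mass = {S * delta_f:.1f} ng")

# The 5.2 ng sensing limit then corresponds to resolving a shift of roughly
# 5.2 / 104.38 ≈ 0.05 Hz on the more sensitive array.
```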
An implantable integrated low-power amplifier-microelectrode array for Brain-Machine Interfaces.
Patrick, Erin; Sankar, Viswanath; Rowe, William; Sanchez, Justin C; Nishida, Toshikazu
2010-01-01
One of the important challenges in designing Brain-Machine Interfaces (BMI) is to build implantable systems that have the ability to reliably process the activity of large ensembles of cortical neurons. In this paper, we report the design, fabrication, and testing of a polyimide-based microelectrode array integrated with a low-power amplifier as part of the Florida Wireless Integrated Recording Electrode (FWIRE) project at the University of Florida developing a fully implantable neural recording system for BMI applications. The electrode array was fabricated using planar micromachining MEMS processes and hybrid packaged with the amplifier die using a flip-chip bonding technique. The system was tested both on bench and in-vivo. Acute and chronic neural recordings were obtained from a rodent for a period of 42 days. The electrode-amplifier performance was analyzed over the chronic recording period with the observation of a noise floor of 4.5 microVrms, and an average signal-to-noise ratio of 3.8.
30 CFR 18.49 - Connection boxes on machines.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Connection boxes on machines. 18.49 Section 18.49 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING, EVALUATION, AND APPROVAL OF MINING PRODUCTS ELECTRIC MOTOR-DRIVEN MINE EQUIPMENT AND ACCESSORIES Construction and...
30 CFR 18.61 - Final inspection of complete machine.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Final inspection of complete machine. 18.61 Section 18.61 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING, EVALUATION, AND APPROVAL OF MINING PRODUCTS ELECTRIC MOTOR-DRIVEN MINE EQUIPMENT AND ACCESSORIES Inspections...
A tubular hybrid Halbach/axially-magnetized permanent-magnet linear machine
NASA Astrophysics Data System (ADS)
Sui, Yi; Liu, Yong; Cheng, Luming; Liu, Jiaqi; Zheng, Ping
2017-05-01
A single-phase tubular permanent-magnet linear machine (PMLM) with hybrid Halbach/axially-magnetized PM arrays is proposed for free-piston Stirling power generation system. Machine topology and operating principle are elaborately illustrated. With the sinusoidal speed characteristic of the free-piston Stirling engine considered, the proposed machine is designed and calculated by finite-element analysis (FEA). The main structural parameters, such as outer radius of the mover, radial length of both the axially-magnetized PMs and ferromagnetic poles, axial length of both the middle and end radially-magnetized PMs, etc., are optimized to improve both the force capability and power density. Compared with the conventional PMLMs, the proposed machine features high mass and volume power density, and has the advantages of simple control and low converter cost. The proposed machine topology is applicable to tubular PMLMs with any phases.
Microstereolithography: A Review
2003-04-01
Table I summarizes the characteristics of the integral microstereolithography machines described by Bertsch, Chatwin and Loubere. (b) Digital Micromirror Device (DMD™) as pattern generator: the DMD produced by Texas Instruments, an array of micromirrors, was used as the pattern generator; to demonstrate the feasibility of the technology, an array of micromirrors with VGA resolution (640 x 480) was used in a first prototype developed to work with visible light.
Automated Handling of Garments for Pressing
1991-09-30
[Table-of-contents fragments from the search snippet:] Parallel Algorithms for 2D Kalman Filtering (D.J. Potter and M.P. Cline) … Hash Table and Sorted Array: A Case Study of … Kalman Filtering on the Connection Machine … (M.A. Palis and D.K. Krecker) … Parallel Sorting of Large Arrays on the MasPar … Algorithms for Seam Sensing … Karel™ Algorithms … Image Filtering …
Fine-tunable plasma nano-machining for fabrication of 3D hollow nanostructures: SERS application
NASA Astrophysics Data System (ADS)
Mehrvar, L.; Hajihoseini, H.; Mahmoodi, H.; Tavassoli, S. H.; Fathipour, M.; Mohseni, S. M.
2017-08-01
Novel processing sequences for the fabrication of artificial nanostructures are in high demand for various applications. In this paper, we report on a fine-tunable nano-machining technique for the fabrication of 3D hollow nanostructures. This technique originates from redeposition effects occurring during Ar dry etching of nano-patterns. Different geometries of honeycomb, double-ring, nanotube, cone and crescent arrays have been successfully fabricated from various metals such as Au, Ag, Pt and Ti. The geometrical parameters of the 3D hollow nanostructures can be straightforwardly controlled by tuning the discharge plasma pressure and power. The structure and morphology of the nanostructures are probed using atomic force microscopy (AFM), scanning electron microscopy (SEM), optical emission spectroscopy (OES) and energy dispersive x-ray spectroscopy (EDS). Finally, a Ag nanotube array was assayed for application in surface enhanced Raman spectroscopy (SERS), resulting in an enhancement factor (EF) of 5.5 × 10⁵, as experimental validation consistent with the presented simulation framework. Furthermore, it was found that the theoretical EF value for the honeycomb array is of the order of 10⁷, a hundred times greater than that found for the nanotube array.
Performance evaluation of coherent Ising machines against classical neural networks
NASA Astrophysics Data System (ADS)
Haribara, Yoshitaka; Ishikawa, Hitoshi; Utsunomiya, Shoko; Aihara, Kazuyuki; Yamamoto, Yoshihisa
2017-12-01
The coherent Ising machine is expected to find near-optimal solutions to various combinatorial optimization problems, which has been experimentally confirmed with optical parametric oscillators and a field programmable gate array circuit. Similar mathematical models were proposed three decades ago by Hopfield et al. in the context of classical neural networks. In this article, we compare the computational performance of both models.
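For readers unfamiliar with the shared mathematics, both models descend an Ising-type energy E = -(1/2) Σ_ij J_ij s_i s_j over spins s_i in {-1, +1}. The Python sketch below runs plain zero-temperature, Hopfield-style asynchronous updates on a random symmetric coupling matrix; it is a toy illustration of the model class only, not a simulation of either the optical hardware or the neural networks compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1)
J = J + J.T                       # symmetric random couplings, zero diagonal

def ising_energy(s, J):
    """E = -1/2 * sum_ij J_ij s_i s_j for spins s_i in {-1, +1}."""
    return -0.5 * s @ J @ s

s = rng.choice([-1.0, 1.0], size=n)
print("initial energy:", ising_energy(s, J))
for sweep in range(20):           # asynchronous zero-temperature sweeps
    for i in range(n):
        s[i] = 1.0 if J[i] @ s >= 0.0 else -1.0   # align spin with its local field
print("final energy:  ", ising_energy(s, J))
```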
30 CFR 18.21 - Machines equipped with powered dust collectors.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Machines equipped with powered dust collectors. 18.21 Section 18.21 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING, EVALUATION, AND APPROVAL OF MINING PRODUCTS ELECTRIC MOTOR-DRIVEN MINE EQUIPMENT AND ACCESSORIES...
Investigations on high speed machining of EN-353 steel alloy under different machining environments
NASA Astrophysics Data System (ADS)
Venkata Vishnu, A.; Jamaleswara Kumar, P.
2018-03-01
The addition of nano-sized particles to conventional cutting fluids enhances their cooling capability; in the present paper an attempt is made to do so by adding nano-sized particles to a conventional cutting fluid. Taguchi robust design methodology is employed to study the performance characteristics of different turning parameters, i.e. cutting speed, feed rate, depth of cut and type of tool, under different machining environments, i.e. dry machining, machining with lubricant SAE 40, and machining with a mixture of nano-sized boric acid particles and the base fluid SAE 40. A series of turning operations was performed using an L27 (3^13) orthogonal array, considering high cutting speeds and the other machining parameters, to measure hardness. The results are compared across the different machining environments, and it is concluded that there is considerable improvement in machining performance using lubricant SAE 40 and the SAE 40 + boric acid mixture compared with dry machining. The ANOVA suggests that the selected parameters and their interactions are significant, and that cutting speed has the most significant effect on hardness.
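As an illustration of the analysis step in a Taguchi study like this one, the sketch below computes the signal-to-noise ratios conventionally used with an L27 (3^13) design. The replicate values are hypothetical and the functions are a generic aid, not the authors' analysis code.

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi S/N ratio for a smaller-is-better response (e.g. surface roughness):
    S/N = -10 * log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_larger_is_better(y):
    """Taguchi S/N ratio for a larger-is-better response (e.g. hardness or MRR):
    S/N = -10 * log10(mean(1 / y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical replicated hardness measurements for one L27 trial row.
trial_measurements = [412.0, 418.0, 415.0]
print(f"S/N (larger-is-better): {sn_larger_is_better(trial_measurements):.2f} dB")
```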
Anatomical entity mention recognition at literature scale
Pyysalo, Sampo; Ananiadou, Sophia
2014-01-01
Motivation: Anatomical entities ranging from subcellular structures to organ systems are central to biomedical science, and mentions of these entities are essential to understanding the scientific literature. Despite extensive efforts to automatically analyze various aspects of biomedical text, there have been only a few studies focusing on anatomical entities, and no dedicated methods for learning to automatically recognize anatomical entity mentions in free-form text have been introduced. Results: We present AnatomyTagger, a machine learning-based system for anatomical entity mention recognition. The system incorporates a broad array of approaches proposed to benefit tagging, including the use of Unified Medical Language System (UMLS)- and Open Biomedical Ontologies (OBO)-based lexical resources, word representations induced from unlabeled text, statistical truecasing and non-local features. We train and evaluate the system on a newly introduced corpus that substantially extends previously available resources, and apply the resulting tagger to automatically annotate the entire open-access scientific literature. The resulting analyses have been applied to extend services provided by the Europe PubMed Central literature database. Availability and implementation: All tools and resources introduced in this work are available from http://nactem.ac.uk/anatomytagger. Contact: sophia.ananiadou@manchester.ac.uk Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:24162468
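A minimal sketch of the kind of lexicon-plus-feature token tagging such a system builds on is shown below. The toy lexicon, features, BIO labels and scikit-learn model are illustrative assumptions, not the published AnatomyTagger implementation.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for UMLS/OBO-derived anatomical lexicon entries.
ANATOMY_LEXICON = {"hippocampus", "cortex", "liver", "axon"}

def token_features(tokens, i):
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "in_lexicon": tok.lower() in ANATOMY_LEXICON,
        "is_capitalized": tok[0].isupper(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "</s>",
    }

# Tiny hand-labelled training sentence (BIO labels), purely for illustration.
train_tokens = ["Neurons", "in", "the", "hippocampus", "express", "BDNF", "."]
train_labels = ["O", "O", "O", "B-ANAT", "O", "O", "O"]

X = [token_features(train_tokens, i) for i in range(len(train_tokens))]
model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, train_labels)

test_tokens = ["Lesions", "of", "the", "cortex", "were", "observed", "."]
print(model.predict([token_features(test_tokens, i) for i in range(len(test_tokens))]))
```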
30 CFR 18.21 - Machines equipped with powered dust collectors.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Machines equipped with powered dust collectors... Construction and Design Requirements § 18.21 Machines equipped with powered dust collectors. Powered dust collectors on machines submitted for approval shall meet the applicable requirements of Part 33 of this...
30 CFR 18.49 - Connection boxes on machines.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Connection boxes on machines. 18.49 Section 18... Design Requirements § 18.49 Connection boxes on machines. Connection boxes used to facilitate replacement of cables or machine components shall be explosion-proof. Portable-cable terminals on cable reels...
30 CFR 56.14107 - Moving machine parts.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Moving machine parts. 56.14107 Section 56.14107... Safety Devices and Maintenance Requirements § 56.14107 Moving machine parts. (a) Moving machine parts... takeup pulleys, flywheels, couplings, shafts, fan blades, and similar moving parts that can cause injury...
30 CFR 57.14107 - Moving machine parts.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Moving machine parts. 57.14107 Section 57.14107... Equipment Safety Devices and Maintenance Requirements § 57.14107 Moving machine parts. (a) Moving machine..., and takeup pulleys, flywheels, coupling, shafts, fan blades; and similar moving parts that can cause...
30 CFR 18.22 - Boring-type machines equipped for auxiliary face ventilation.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Boring-type machines equipped for auxiliary... AND ACCESSORIES Construction and Design Requirements § 18.22 Boring-type machines equipped for auxiliary face ventilation. Each boring-type continuous-mining machine that is submitted for approval shall...
30 CFR 18.22 - Boring-type machines equipped for auxiliary face ventilation.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Boring-type machines equipped for auxiliary... AND ACCESSORIES Construction and Design Requirements § 18.22 Boring-type machines equipped for auxiliary face ventilation. Each boring-type continuous-mining machine that is submitted for approval shall...
30 CFR 18.22 - Boring-type machines equipped for auxiliary face ventilation.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Boring-type machines equipped for auxiliary... AND ACCESSORIES Construction and Design Requirements § 18.22 Boring-type machines equipped for auxiliary face ventilation. Each boring-type continuous-mining machine that is submitted for approval shall...
30 CFR 18.22 - Boring-type machines equipped for auxiliary face ventilation.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Boring-type machines equipped for auxiliary... AND ACCESSORIES Construction and Design Requirements § 18.22 Boring-type machines equipped for auxiliary face ventilation. Each boring-type continuous-mining machine that is submitted for approval shall...
30 CFR 18.22 - Boring-type machines equipped for auxiliary face ventilation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Boring-type machines equipped for auxiliary... AND ACCESSORIES Construction and Design Requirements § 18.22 Boring-type machines equipped for auxiliary face ventilation. Each boring-type continuous-mining machine that is submitted for approval shall...
Micro-machined high-frequency (80 MHz) PZT thick film linear arrays.
Zhou, Qifa; Wu, Dawei; Liu, Changgeng; Zhu, Benpeng; Djuth, Frank; Shung, K
2010-10-01
This paper presents the development of a micromachined high-frequency linear array using PZT piezoelectric thick films. The linear array has 32 elements with an element width of 24 μm and an element length of 4 mm. Array elements were fabricated by deep reactive ion etching of PZT thick films, which were prepared from spin-coating of PZT sol-gel composite. Detailed fabrication processes, especially PZT thick film etching conditions and a novel transferring-and-etching method, are presented and discussed. Array designs were evaluated by simulation. Experimental measurements show that the array had a center frequency of 80 MHz and a fractional bandwidth (-6 dB) of 60%. An insertion loss of -41 dB and adjacent element crosstalk of -21 dB were found at the center frequency.
Design and fabrication of a flexible substrate microelectrode array for brain machine interfaces.
Patrick, Erin; Ordonez, Matthew; Alba, Nicolas; Sanchez, Justin C; Nishida, Toshikazu
2006-01-01
We report a neural microelectrode array design that leverages the recording properties of conventional microwire electrode arrays with the additional features of precise control of the electrode geometries. Using microfabrication techniques, a neural probe array is fabricated that possesses a flexible polyimide-based cable. The performance of the design was tested with electrochemical impedance spectroscopy and in vivo studies. The gold-plated electrode site has an impedance value of 0.9 MΩ at 1 kHz. Acute neural recording provided high neuronal yields, peak-to-peak amplitudes (as high as 100 μV), and signal-to-noise ratios (27 dB).
NASA Astrophysics Data System (ADS)
Koten, V. K.; Tanamal, C. E.
2017-03-01
The processing of agricultural products by farmers, and by people working in medium-scale, small-scale and household industries, is still carried out with separate machines. Although the prime mover provides enough power, in operation it typically drives only one of several agricultural product machines. This study attempts to design and construct a multi-output power transmission with a single prime mover: a single construction that allows the prime mover to drive several agricultural product machines, simultaneously or not. The study begins with the determination of production capacity and the power required to crush the products, the determination of the required power and rotational speed and their normalization, the selection of the material type, the sizing of each machine element, the construction of the machine elements, and the assembly of the machine elements into a multi-output power transmission with a single prime mover for agricultural product machines. The results show that, with a normalized input of 4 PK (2984 W), a rotational speed of 2000 rpm, a material strength of 60 kg/mm2, and several operating considerations, the sizes of the machine elements were obtained by calculation. Based on these sizes, the machine elements were made using machine tools and assembled to form a multi-output power transmission with a single prime mover.
Multitasking runtime systems for the Cedar Multiprocessor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guzzi, M.D.
1986-07-01
The programming of a MIMD machine is more complex than for SISD and SIMD machines. The multiple computational resources of the machine must be made available to the programming language compiler and to the programmer so that multitasking programs may be written. This thesis will explore the additional complexity of programming a MIMD machine, the Cedar Multiprocessor specifically, and the multitasking runtime system necessary to provide multitasking resources to the user. First, the problem will be well defined: the Cedar machine, its operating system, the programming language, and multitasking concepts will be described. Second, a solution to the problem, called macrotasking, will be proposed. This solution provides multitasking facilities to the programmer at a very coarse level with many visible machine dependencies. Third, an alternate solution, called microtasking, will be proposed. This solution provides multitasking facilities of a much finer grain. This solution does not depend so rigidly on the specific architecture of the machine. Finally, the two solutions will be compared for effectiveness. 12 refs., 16 figs.
The Constellation-X Focal Plane Microcalorimeter Array: An NTD-Germanium Solution
NASA Technical Reports Server (NTRS)
Beeman, J.; Silver, E.; Bandler, S.; Schnopper, H.; Murray, S.; Madden, N.; Landis, D.; Haller, E. E.; Barbera, M.
2001-01-01
The hallmarks of Neutron Transmutation Doped (NTD) germanium cryogenic thermistors include high reliability, reproducibility, and long term stability of bulk carrier transport properties. Using micro-machined NTD Ge thermistors with integral 'flying' leads, we can now fabricate two-dimensional arrays that are built up from a series of stacked linear arrays. We believe that this modular approach of building, assembling, and perhaps replacing individual modules of detectors is essential to the successful fabrication and testing of large multi-element instruments. Details of construction are presented.
Microtube strip heat exchanger
NASA Astrophysics Data System (ADS)
Doty, F. D.
1991-07-01
During the last quarter, Doty Scientific, Inc. (DSI) continued to make progress on the microtube strip (MTS) heat exchanger. DSI completed a stress analysis of the ten-module heat exchanger bank and performed a shell-side flow inhomogeneity analysis of the three-module heat exchanger bank. The company produced 50 tubestrips using an in-house CNC milling machine and began pressing them onto tube arrays. DSI also revised some of the tooling required to encapsulate a tube array and press tubestrips into the array, improving on some of the prototype tooling.
Optimal use of human and machine resources for Space Station assembly operations
NASA Technical Reports Server (NTRS)
Parrish, Joseph C.
1988-01-01
This paper investigates the issues involved in determining the best mix of human and machine resources for assembly of the Space Station. It presents the current Station assembly sequence, along with descriptions of the available assembly resources. A number of methodologies for optimizing the human/machine tradeoff problem have been developed, but the Space Station assembly offers some unique issues that have not yet been addressed. These include a strong constraint on available EVA time for early flights and a phased deployment of assembly resources over time. A methodology for incorporating the previously developed decision methods to the special case of the Space Station is presented. This methodology emphasizes an application of multiple qualitative and quantitative techniques, including simulation and decision analysis, for producing an objective, robust solution to the tradeoff problem.
A gas-sensing array produced from screen-printed, zeolite-modified chromium titanate
NASA Astrophysics Data System (ADS)
Pugh, David C.; Hailes, Stephen M. V.; Parkin, Ivan P.
2015-08-01
Metal oxide semiconducting (MOS) gas sensors represent a cheap, robust and sensitive technology for detecting volatile organic compounds. However, MOS sensors have consistently been shown to lack selectivity, responding to a broad range of analytes and leading to false-positive errors. In this study an array of five chromium titanate (CTO) thick-film sensors was produced. These were modified by incorporating a range of zeolites, namely β, Y, mordenite and ZSM5, into the bulk sensor material. The sensors were exposed to three common reducing gases, namely acetone, ethanol and toluene, and a machine learning technique was applied to differentiate between them. All sensors produced strong resistive responses (increases in resistance), and a support vector machine (SVM) was able to classify the data with a high degree of selectivity.
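A hedged sketch of the classification step is given below: a scikit-learn SVM trained on five-element response vectors from the sensor array. The response values and sampling scheme are synthetic placeholders, not the measured data from the study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

def synth_response(base):
    """Synthetic fractional resistance changes of the five zeolite-modified sensors."""
    return np.asarray(base) + rng.normal(scale=0.02, size=5)

# Placeholder response patterns for the three reducing gases.
gases = {"acetone": [0.8, 0.3, 0.5, 0.2, 0.6],
         "ethanol": [0.4, 0.7, 0.2, 0.6, 0.3],
         "toluene": [0.2, 0.2, 0.7, 0.4, 0.5]}

X = np.array([synth_response(v) for g, v in gases.items() for _ in range(20)])
y = np.array([g for g in gases for _ in range(20)])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X, y)
print(clf.predict([synth_response(gases["ethanol"])]))   # expected: ['ethanol']
```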
30 CFR 75.703 - Grounding offtrack direct-current machines and the enclosures of related detached components.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Grounding offtrack direct-current machines and...-UNDERGROUND COAL MINES Grounding § 75.703 Grounding offtrack direct-current machines and the enclosures of related detached components. [Statutory Provisions] The frames of all offtrack direct-current machines and...
30 CFR 75.703 - Grounding offtrack direct-current machines and the enclosures of related detached components.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Grounding offtrack direct-current machines and...-UNDERGROUND COAL MINES Grounding § 75.703 Grounding offtrack direct-current machines and the enclosures of related detached components. [Statutory Provisions] The frames of all offtrack direct-current machines and...
30 CFR 75.205 - Installation of roof support using mining machines with integral roof bolters.
Code of Federal Regulations, 2011 CFR
2011-07-01
... machines with integral roof bolters. 75.205 Section 75.205 Mineral Resources MINE SAFETY AND HEALTH... Roof Support § 75.205 Installation of roof support using mining machines with integral roof bolters. When roof bolts are installed by a continuous mining machine with intregal roof bolting equipment: (a...
Effect of the Machined Surfaces of AISI 4337 Steel to Cutting Conditions on Dry Machining Lathe
NASA Astrophysics Data System (ADS)
Rahim, Robbi; Napid, Suhardi; Hasibuan, Abdurrozzaq; Rahmah Sibuea, Siti; Yusmartato, Y.
2018-04-01
The objective of the research is to obtain cutting conditions that offer a good chance of realizing the dry machining concept on AISI 4337 steel by studying the surface roughness, microstructure and hardness of the machined surface. The data generated from the experiments were processed and analyzed using the standard Taguchi L9 (3^4) orthogonal array. Dry and wet machining tests used surface roughness and micro-hardness measurements for each of the 27 test specimens. The experiments showed that the average surface roughness (Ra_avg) at the optimum cutting conditions for VB of 0.1 μm, 0.3 μm and 0.6 μm was 1.467 μm, 2.133 μm and 2.800 μm, respectively, for dry machining, while for wet machining the results were 1.833 μm, 2.667 μm and 3.000 μm. It can be concluded that dry machining provides better surface quality than wet machining. Therefore, dry machining is a good choice that may be realized in the manufacturing and automotive industries.
Programmable Pulse-Position-Modulation Encoder
NASA Technical Reports Server (NTRS)
Zhu, David; Farr, William
2006-01-01
A programmable pulse-position-modulation (PPM) encoder has been designed for use in testing an optical communication link. The encoder includes a programmable state machine and an electronic code book that can be updated to accommodate different PPM coding schemes. The encoder includes a field-programmable gate array (FPGA) that is programmed to step through the stored state machine and code book and that drives a custom high-speed serializer circuit board that is capable of generating subnanosecond pulses. The stored state machine and code book can be updated by means of a simple text interface through the serial port of a personal computer.
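The sketch below illustrates the encoder idea in software: a programmable code book maps data symbols to pulse slots, mimicking the stored code book that the FPGA state machine steps through. The PPM order, guard time and default code book are assumptions for illustration only, not the flight encoder's parameters.

```python
PPM_ORDER = 16          # one pulse in 1 of 16 slots encodes 4 bits (assumed order)
GUARD_SLOTS = 4         # assumed inter-symbol guard time, in slots

def make_code_book(order):
    """Default code book: data value d maps to a pulse in slot d (replaceable)."""
    return {d: d for d in range(order)}

def ppm_encode(bits, code_book, order=PPM_ORDER, guard=GUARD_SLOTS):
    """Return a slot-level 0/1 sequence for a bit string."""
    bits_per_symbol = order.bit_length() - 1
    slots = []
    for i in range(0, len(bits), bits_per_symbol):
        symbol = int(bits[i:i + bits_per_symbol].ljust(bits_per_symbol, "0"), 2)
        frame = [0] * (order + guard)        # empty symbol frame plus guard slots
        frame[code_book[symbol]] = 1         # single pulse position carries the data
        slots.extend(frame)
    return slots

code_book = make_code_book(PPM_ORDER)
print(ppm_encode("1011" + "0001", code_book))
```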
Nanowire nanocomputer as a finite-state machine.
Yao, Jun; Yan, Hao; Das, Shamik; Klemic, James F; Ellenbogen, James C; Lieber, Charles M
2014-02-18
Implementation of complex computer circuits assembled from the bottom up and integrated on the nanometer scale has long been a goal of electronics research. It requires a design and fabrication strategy that can address individual nanometer-scale electronic devices, while enabling large-scale assembly of those devices into highly organized, integrated computational circuits. We describe how such a strategy has led to the design, construction, and demonstration of a nanoelectronic finite-state machine. The system was fabricated using a design-oriented approach enabled by a deterministic, bottom-up assembly process that does not require individual nanowire registration. This methodology allowed construction of the nanoelectronic finite-state machine through modular design using a multitile architecture. Each tile/module consists of two interconnected crossbar nanowire arrays, with each cross-point consisting of a programmable nanowire transistor node. The nanoelectronic finite-state machine integrates 180 programmable nanowire transistor nodes in three tiles or six total crossbar arrays, and incorporates both sequential and arithmetic logic, with extensive intertile and intratile communication that exhibits rigorous input/output matching. Our system realizes the complete 2-bit logic flow and clocked control over state registration that are required for a finite-state machine or computer. The programmable multitile circuit was also reprogrammed to a functionally distinct 2-bit full adder with 32-set matched and complete logic output. These steps forward and the ability of our unique design-oriented deterministic methodology to yield more extensive multitile systems suggest that proposed general-purpose nanocomputers can be realized in the near future.
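The two behaviours reported for the circuit, a clocked finite-state machine and a reprogrammed 2-bit full adder, can be summarized functionally with the toy software sketch below; the transition table is an invented example and does not model the nanowire tile programming itself.

```python
def step_fsm(state, x, transitions):
    """One clocked update: (current state, input bit) -> next state."""
    return transitions[(state, x)]

# Toy 2-bit state machine: increment the state when the input is 1, hold on 0.
transitions = {(s, x): (s + x) % 4 for s in range(4) for x in (0, 1)}

state = 0
for bit in [1, 1, 0, 1]:
    state = step_fsm(state, bit, transitions)
print(f"final 2-bit state: {state:02b}")

def add_2bit(a, b, carry_in=0):
    """2-bit full adder: returns (2-bit sum, carry_out)."""
    total = (a & 0b11) + (b & 0b11) + carry_in
    return total & 0b11, (total >> 2) & 1

print(add_2bit(0b10, 0b11))   # (1, 1): sum bits 01, carry out 1
```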
Dynamic data analysis of climate and recharge conditions over time in the Edwards Aquifer, Texas
NASA Astrophysics Data System (ADS)
Pierce, S. A.; Collins, J.; Banner, J.
2017-12-01
Understanding the temporal patterns in datasets related to climate, recharge, and water resource conditions is important for informing water management and policy decisions. Data analysis and pipelines for evaluating these disparate sources of information are challenging to set up and rely on emerging informatics tools to complete. This project gathers data from both historical and recent sources for the Edwards Aquifer of central Texas. The Edwards faces a unique array of challenges, as it is composed of karst limestone, is susceptible to contaminants and climate change, and is expected to supply water for a rapidly growing population. Given these challenges, new approaches to integrating data will be particularly important. Case study data from the Edwards are used to evaluate aquifer and hydrologic system conditions over time as well as to discover patterns and possible relationships across the information sources. Prior research that evaluated trends in discharge and recharge of the aquifer is revisited by considering new data from 1992-2015, and the sustainability of the Edwards as a water resource within the more recent time period is addressed. Reusable and shareable analytical data pipelines are constructed using Jupyter Notebooks and Python libraries, and an interactive visualization is implemented with the information. In addition to the data sources that are utilized for the water balance analyses, the Global Surface Water Monitoring System from the University of Minnesota, a tool that integrates a wide number of satellite datasets with known surface water dynamics and machine learning, is used to evaluate water body persistence and change over time at regional scales. Preliminary results indicate that surface water bodies over the Edwards with differing areal extents are declining, excepting some dam-controlled lakes in the region. Other existing tools and machine learning applications are also considered. Results are useful to the Texas Water Research Network and provide a reproducible geoinformatics approach to integrated data analysis for water resources at regional scales.
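A minimal sketch of the kind of reusable Python/Jupyter pipeline described is shown below; the file name, column names and the 1992 split point are assumptions for illustration, not the project's actual data schema.

```python
import pandas as pd

# Assumed annual series with columns: year, recharge_af, discharge_af (acre-feet).
df = pd.read_csv("edwards_aquifer_annual.csv", parse_dates=["year"])
df = df.set_index("year").sort_index()

early = df.loc[:"1991"]            # period covered by the prior research
recent = df.loc["1992":"2015"]     # new data considered in this study

summary = pd.DataFrame({
    "early_mean": early[["recharge_af", "discharge_af"]].mean(),
    "recent_mean": recent[["recharge_af", "discharge_af"]].mean(),
})
summary["change_pct"] = 100 * (summary["recent_mean"] / summary["early_mean"] - 1)
print(summary)

# Simple water-balance proxy: cumulative recharge minus discharge over time.
(df["recharge_af"] - df["discharge_af"]).cumsum().rename("storage_proxy_af").to_csv(
    "edwards_storage_proxy.csv")
```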
High-performance reconfigurable hardware architecture for restricted Boltzmann machines.
Ly, Daniel Le; Chow, Paul
2010-11-01
Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
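For context, the core computation the hardware engines accelerate is the RBM's Gibbs sampling and contrastive-divergence weight update. The NumPy sketch below is a software reference of that update (biases omitted for brevity, sizes and learning rate arbitrary), not the FPGA design itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 16, 16, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W):
    """One CD-1 weight update for a single binary training vector v0 (biases omitted)."""
    ph0 = sigmoid(v0 @ W)                      # hidden unit probabilities
    h0 = (rng.random(n_hidden) < ph0) * 1.0    # sampled hidden states
    pv1 = sigmoid(h0 @ W.T)                    # visible reconstruction probabilities
    ph1 = sigmoid(pv1 @ W)                     # hidden probabilities after one Gibbs step
    return W + lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))

v = (rng.random(n_visible) < 0.5) * 1.0        # toy binary training vector
for _ in range(100):
    W = cd1_step(v, W)
print("reconstruction error:", np.mean((v - sigmoid(sigmoid(v @ W) @ W.T)) ** 2))
```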
PMLB: a large benchmark suite for machine learning evaluation and comparison.
Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H
2017-01-01
The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
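A hedged usage sketch is shown below, assuming the pmlb Python package's fetch_data and classification_dataset_names interface, for comparing two scikit-learn classifiers on a few benchmark datasets.

```python
from pmlb import fetch_data, classification_dataset_names
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

models = {
    "logreg": LogisticRegression(max_iter=2000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Compare cross-validated accuracy on the first few classification benchmarks.
for dataset in classification_dataset_names[:3]:
    X, y = fetch_data(dataset, return_X_y=True)
    for name, model in models.items():
        score = cross_val_score(model, X, y, cv=5).mean()
        print(f"{dataset:30s} {name:15s} {score:.3f}")
```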
Cellular automata in photonic cavity arrays.
Li, Jing; Liew, T C H
2016-10-31
We propose theoretically a photonic Turing machine based on cellular automata in arrays of nonlinear cavities coupled with artificial gauge fields. The state of the system is recorded making use of the bistability of driven cavities, in which losses are fully compensated by an external continuous drive. The sequential update of the automaton layers is achieved automatically, by the local switching of bistable states, without requiring any additional synchronization or temporal control.
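As a purely conceptual analogue of the layer-by-layer automaton update, the sketch below evolves a classical one-dimensional cellular automaton (Wolfram rule 110). It illustrates the computational model only and does not represent the driven-cavity dynamics of the proposed photonic system.

```python
def rule_table(rule):
    """Map each 3-cell neighbourhood (left, centre, right) to the rule's output bit."""
    return {(a, b, c): (rule >> (a * 4 + b * 2 + c)) & 1
            for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(cells, table):
    """One synchronous update of the whole row, with periodic boundaries."""
    n = len(cells)
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

table = rule_table(110)
cells = [0] * 30 + [1] + [0] * 30           # single seed cell
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, table)
```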
Numerical aerodynamic simulation facility preliminary study, volume 2 and appendices
NASA Technical Reports Server (NTRS)
1977-01-01
Data to support results obtained in technology assessment studies are presented. Objectives, starting points, and future study tasks are outlined. Key design issues discussed in appendices include: data allocation, transposition network design, fault tolerance and trustworthiness, logic design, processing element of existing components, number of processors, the host system, alternate data base memory designs, number representation, fast div 521 instruction, architectures, and lockstep array versus synchronizable array machine comparison.
Advanced Concepts Theory Annual Report 1989
1990-03-29
… kinetic energy to x-ray conversion and are being evaluated using nickel array implosion calculations … Maxwell Laboratory aluminum array implosion … general, we need to evaluate the degree of machine PRS decoupling produced by runaway electrons, and the existence of a corona may be a relevant aspect of … the tools necessary to carry out data analysis and interpretation and (4) promote the design and evaluation of new experiments and new improved loads …
A Brief Description of My Projects
NASA Technical Reports Server (NTRS)
Barnes, Tobin
2016-01-01
My internship was in the IDC, which consists of a machine shop and an array of design space. During my tour I worked on a wide variety of projects, including design, research, machining and fabrication. I gained further knowledge of some machines I had prior experience on, such as the lathe and Hurco CNC machines. The first thing we did was complete our checkout in the machine shop, which went pretty well since I was already familiar with most of the machines. I also made a couple of practice parts: a name block, made on the CNC machine and finished on the vertical milling machine, and a hammer with my initials, made using the lathe and CNC machine; the hammer took much longer since I had to set up a cylindrical piece on the CNC machine. The first project I worked on was the Systems Engineering & Management Advancement Program (SEPMAP) Hexacopter project, where I helped assemble and modify one of the particle capture doors on their boxes; later we helped them make a hinge and drill holes to reduce the weight of their design. We also helped the NASA Extreme Environment Mission Operations (NEEMO) team with some of their name tags and with the assembly of some of their underwater parts. One of the more challenging projects was a rail that came in with a rather oddly drawn part. The biggest project I worked on was the solar array project, which involved a variety of machining and 3D printing and took about three rounds of re-designing to arrive at a final prototype. Along with this project I also had to modify a thermos, which was fairly simple since I just had to draw up a part and print it on the 3D printer. I also learned how to use Pro E/Creo Parametric to design a square block and print it on the 3D printer. All of these projects increased my experience with the machines and equipment I used, sharpened my design skills, and gave me a better understanding of how to modify and improve my designs.
Ordered array of CoPc-vacancies filled with single-molecule rotors
NASA Astrophysics Data System (ADS)
Xie, Zheng-Bo; Wang, Ya-Li; Tao, Min-Long; Sun, Kai; Tu, Yu-Bing; Yuan, Hong-Kuan; Wang, Jun-Zhong
2018-05-01
We report highly ordered arrays of CoPc vacancies and single-molecule rotors inside the vacancies. When CoPc molecules are deposited on Cd(0001) at low temperature, three types of molecular vacancies appear randomly in the CoPc monolayer. Annealing the sample to a higher temperature leads to spontaneous phase separation and self-organized arrangement of the vacancies. Highly ordered arrays of two-molecule vacancies and single-molecule vacancies have been obtained. In particular, there is a rotating CoPc molecule inside each single-molecule vacancy, which constitutes an array of single-molecule rotors. These results provide a new route to fabricating nano-machines on a large scale.
A 500 megabyte/second disk array
NASA Technical Reports Server (NTRS)
Ruwart, Thomas M.; Okeefe, Matthew T.
1994-01-01
Applications at the Army High Performance Computing Research Center's (AHPCRC) Graphic and Visualization Laboratory (GVL) at the University of Minnesota require a tremendous amount of I/O bandwidth, and this appetite for data is growing. Silicon Graphics workstations are used to perform the post-processing, visualization, and animation of multi-terabyte datasets produced by scientific simulations performed on AHPCRC supercomputers. The M.A.X. (Maximum Achievable Xfer) was designed to find the maximum achievable I/O performance of the Silicon Graphics CHALLENGE/Onyx-class machines that run these applications. Running a fully configured Onyx machine with 12 150-MHz R4400 processors, 512 MB of 8-way interleaved memory, and 31 fast/wide SCSI-2 channels, each with a Ciprico disk array controller, we were able to achieve a maximum sustained transfer rate of 509.8 megabytes per second. However, after analyzing the results it became clear that the true maximum transfer rate is somewhat beyond this figure, and we will need to do further testing with more disk array controllers in order to find it.
Solar Power Satellites: Reconsideration as Renewable Energy Source Based on Novel Approaches
NASA Astrophysics Data System (ADS)
Ellery, Alex
2017-04-01
Solar power satellites (SPS) are a solar energy generation mechanism that captures solar energy in space and converts it into microwaves for transmission to Earth-based rectenna arrays. They offer a constant, high integrated energy density of 200 W/m2, compared to <10 W/m2 for other renewable energy sources. Despite this promise as a clean energy source, SPS have been relegated out of consideration due to their enormous cost and technological challenges. It has been suggested that for solar power satellites to become economically feasible, launch costs must decrease from their current $20,000/kg to <$200/kg. Even with the advent of single-stage-to-orbit launchers, which promise launch costs dropping to $2,000/kg, this will not be realized. Yet the advantages of solar power satellites are many, including the provision of stable baseload power. Here, I present a novel approach to reduce the specific cost of solar power satellites to $1/kg by leveraging two enabling technologies - in-situ resource utilization of lunar material and 3D printing of this material. Specifically, we demonstrate that electric motors may be constructed from lunar material through 3D printing, representing a major step towards the development of self-replicating machines. Such machines have the capacity to build solar power satellites on the Moon, thereby bypassing the launch cost problem. The productive capacity of self-replicating machines favours the adoption of large constellations of small solar power satellites. This opens up additional clean energy options for combating climate change by meeting the demands for future global energy.
Micro-optical fabrication by ultraprecision diamond machining and precision molding
NASA Astrophysics Data System (ADS)
Li, Hui; Li, Likai; Naples, Neil J.; Roblee, Jeffrey W.; Yi, Allen Y.
2017-06-01
Ultraprecision diamond machining and high-volume molding of affordable, high-precision, high-performance optical elements are becoming a viable process in the optical industry for low-cost, high-quality micro-optical component manufacturing. In this process, high-precision micro-optical molds are first fabricated using ultraprecision single-point diamond machining, followed by high-volume production methods such as compression or injection molding. In the last two decades, there have been steady improvements in ultraprecision machine design and performance, particularly with the introduction of both slow tool and fast tool servo. Today optical molds, including freeform surfaces and microlens arrays, are routinely diamond machined to final finish without post-machining polishing. For consumers, compression molding or injection molding provides efficient, high-quality optics at extremely low cost. In this paper, ultraprecision machine design and machining processes such as slow tool and fast tool servo are described first, and then both compression molding and injection molding of polymer optics are discussed. To implement precision optical manufacturing by molding, numerical modeling can be included in the future as a critical part of the manufacturing process to ensure high product quality.
NASA Astrophysics Data System (ADS)
Pongs, Guido; Bresseler, Bernd; Bergs, Thomas; Menke, Gert
2012-10-01
Today, isothermal precision molding of imaging glass optics has become a widely applied and integrated production technology in the optical industry. Especially in consumer electronics (e.g. digital cameras, mobile phones, Blu-ray players), many optical systems contain rotationally symmetric aspherical lenses produced by precision glass molding. However, due to increasing demands on the complexity and miniaturization of optical elements, the established process chain for precision glass molding is no longer sufficient. Wafer-based molding processes for glass optics manufacturing are becoming more and more interesting for mobile phone applications. Cylindrical lens arrays can also be used in high-power laser systems, and the use of unsymmetrical free-form optics allows an increase of efficiency in optical laser systems. Aixtooling is working on different aspects of mold manufacturing technologies and molding processes for extremely complex optical components. In terms of array molding technologies, Aixtooling has developed, together with European partners, a manufacturing technology for the ultra-precision machining of carbide molds. The development covers the machining of multi-lens arrays as well as cylindrical lens arrays. The biggest challenge is the molding of complex free-form optics having no axis of symmetry. Comprehensive CAD/CAM data management along the entire process chain is essential to reach high accuracies on the molded lenses. Within a nationally funded project, Aixtooling is working on a consistent data handling procedure in the process chain for precision molding of free-form optics.
Reverse engineering of wörner type drilling machine structure.
NASA Astrophysics Data System (ADS)
Wibowo, A.; Belly, I.; llhamsyah, R.; Indrawanto; Yuwana, Y.
2018-03-01
A product design sometimes needs to be modified to match the conditions of the production facilities and the capabilities of existing resources, without reducing the functional aspects of the product itself. This paper describes the reverse engineering of the main structure of a Wörner-type drilling machine to obtain a machine structure design that can be made by resources with limited capability using simple processes. Structural and functional analyses and a study of the working mechanism were performed to understand the function and role of each basic component. The drilling machine was dismantled and each basic component was measured to obtain sets of geometry and size data. Geometric models of each structural component and of the machine assembly were built to facilitate the simulation process and machine performance analysis, referring to the ISO standard for drilling machines. A tolerance stack-up analysis was also performed to determine the types and values of geometrical and dimensional tolerances that could affect the ease with which the components can be manufactured and assembled.
ERIC Educational Resources Information Center
Matthews, Joseph R.
This study recommends a variety of actions to create and maintain a Montana union catalog (MONCAT) for more effective usage of in-state resources and library funds. Specifically, it advocates (1) merger of existing COM, machine readable bibliographic records, and OCLC tapes into a single microform catalog; (2) acceptance of only machine readable…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demmel, James W.
This project addresses both communication-avoiding algorithms, and reproducible floating-point computation. Communication, i.e. moving data, either between levels of memory or processors over a network, is much more expensive per operation than arithmetic (measured in time or energy), so we seek algorithms that greatly reduce communication. We developed many new algorithms for both dense and sparse, and both direct and iterative linear algebra, attaining new communication lower bounds, and getting large speedups in many cases. We also extended this work in several ways: (1) We minimize writes separately from reads, since writes may be much more expensive than reads on emerging memory technologies, like Flash, sometimes doing asymptotically fewer writes than reads. (2) We extend the lower bounds and optimal algorithms to arbitrary algorithms that may be expressed as perfectly nested loops accessing arrays, where the array subscripts may be arbitrary affine functions of the loop indices (e.g. A(i), B(i, j+k, k+3*m-7, …) etc.). (3) We extend our communication-avoiding approach to some machine learning algorithms, such as support vector machines. This work has won a number of awards. We also address reproducible floating-point computation. We define reproducibility to mean getting bitwise identical results from multiple runs of the same program, perhaps with different hardware resources or other changes that should ideally not change the answer. Many users depend on reproducibility for debugging or correctness. However, dynamic scheduling of parallel computing resources, combined with nonassociativity of floating point addition, makes attaining reproducibility a challenge even for simple operations like summing a vector of numbers, or more complicated operations like the Basic Linear Algebra Subprograms (BLAS). We describe an algorithm that computes a reproducible sum of floating point numbers, independent of the order of summation. The algorithm depends only on a subset of the IEEE Floating Point Standard 754-2008, uses just 6 words to represent a “reproducible accumulator,” and requires just one read-only pass over the data, or one reduction in parallel. New instructions based on this work are being considered for inclusion in the future IEEE 754-2018 floating-point standard, and new reproducible BLAS are being considered for the next version of the BLAS standard.
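The motivation for reproducible summation can be seen in a few lines of Python: floating-point addition is not associative, so different summation orders, as produced by different parallel schedules, change the computed bits. The demonstration below uses math.fsum only as an order-independent, correctly rounded reference; it is not the 6-word reproducible-accumulator algorithm described above, which is a different construction.

```python
import math
import random

random.seed(1)
# Values with widely varying magnitudes make rounding differences visible.
data = [random.uniform(-1e16, 1e16) for _ in range(100_000)] + [1.0]

forward = sum(data)                                   # one summation order
backward = sum(reversed(data))                        # another order
chunked = sum(sum(data[i:i + 1000]) for i in range(0, len(data), 1000))  # "parallel-like" order

print(f"forward  = {forward!r}")
print(f"backward = {backward!r}")
print(f"chunked  = {chunked!r}")
print(f"fsum     = {math.fsum(data)!r}   # correctly rounded, order-independent reference")
```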
NASA Astrophysics Data System (ADS)
Maury, P.; Calamy, H.; Grunenwald, J.; Lassalle, F.; Zucchini, F.; Loyen, A.; Georges, A.; Morell, A.; Bedoch, J. P.
2009-01-01
The Sphinx machine[1] is a 6 MA, 1 μs driver based on LTD technology, used for Z-pinch experiments. Important improvements of the Sphinx radiation output were recently obtained using a multi-microsecond current prepulse[2]. The total power per unit length is multiplied by a factor of 6 and the FWHM divided by a factor of 2.5. Early breakdown of the wires during the prepulse phase dramatically changes the ablation phase, leading to an improvement of the axial homogeneity of both the implosion and the final radiating column. As a consequence, the cathode bubble observed on classical shots is definitively removed. The implosion is then centered and the zippering effect is reduced, leading to simultaneous x-ray emission over the whole length. Great reproducibility is obtained. Nested arrays were used previously to mitigate the Rayleigh-Taylor instabilities during the implosion phase. Further experiments with the prepulse technique are described here, where the inner array was removed. The goal of these experiments was to see whether a long prepulse could give a stable enough implosion with a single array while at the same time increasing the η parameter by reducing the mass of the load. Experimental results are given for single wire array loads of typical dimension 5 cm in height, with implosion times between 700 and 900 ns and diameters varying between 80 and 140 mm. The load parameters were varied in terms of radius and number of wires. Comparisons with nested wire array loads are made and trends are proposed. Characteristics of both the implosion and the final radiating column are shown. 2D MHD numerical simulations of single wire arrays become easier, as there is no longer any interaction between the outer and inner arrays. A systematic study was done using an injection mass model to benchmark the simulations against experiments.
Management of business economic growth as function of resource rents
NASA Astrophysics Data System (ADS)
Prljić, Stefan; Nikitović, Zorana; Stojanović, Aleksandra Golubović; Cogoljević, Dušan; Pešić, Gordana; Alizamir, Meysam
2018-02-01
Economic profit can be influenced by economic rents; however, natural resource rents have different impacts on economic growth or economic profit. The main focus of the study was to evaluate economic growth as a function of natural resource rents. For this purpose a machine learning approach, an artificial neural network, was used. The natural resource rents considered were coal rents, forest rents, mineral rents, natural gas rents and oil rents. Based on the results, it is concluded that the machine learning approach can be used as a tool for evaluating economic growth as a function of natural resource rents. Moreover, more advanced approaches should be incorporated to further improve the forecasting accuracy.
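A hedged sketch of this kind of model is given below: a small feed-forward neural network regressing growth on the five rent series, using scikit-learn. The data are random placeholders rather than the study's dataset, and the network size is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Columns: coal, forest, mineral, natural gas and oil rents (placeholder % of GDP).
X = rng.uniform(0, 15, size=(200, 5))
# Toy synthetic growth target with some dependence on oil and coal rents.
growth = 0.2 * X[:, 4] - 0.1 * X[:, 0] + rng.normal(scale=0.5, size=200)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                   random_state=0))
print("cross-validated R^2:", cross_val_score(model, X, growth, cv=5).mean().round(3))
```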
1990-08-01
… The smallest regions defined by the superposition of the rexel boundaries of all the frames will be referred to as unisource regions. … unisource region are identical. The advantage of rexel-formatted data is its small size. However, the storage of rexel data in a uniform two-dimensional array is difficult because unisource regions can take on a wide variety of shapes. Rexel data can be stored in thinned uniform arrays, but this …
DOT National Transportation Integrated Search
1974-08-01
Volume 3 describes the methodology for man-machine task allocation. It contains a description of man and machine performance capabilities and an explanation of the methodology employed to allocate tasks to human or automated resources. It also presen...
NASA Technical Reports Server (NTRS)
Thomson, F.
1975-01-01
Two tasks of machine processing of S-192 multispectral scanner data are reviewed. In the first task, the effects of changing atmospheric and base altitude on the ability to machine-classify agricultural crops were investigated. A classifier and atmospheric effects simulation model was devised and its accuracy verified by comparison of its predicted results with S-192 processed results. In the second task, land resource maps of a mountainous area near Cripple Creek, Colorado were prepared from S-192 data collected on 4 August 1973.
A comparative study of electrochemical machining process parameters by using GA and Taguchi method
NASA Astrophysics Data System (ADS)
Soni, S. K.; Thomas, B.
2017-11-01
In electrochemical machining, the quality of the machined surface depends strongly on the selection of optimal parameter settings. This work deals with the application of the Taguchi method and a genetic algorithm (implemented in MATLAB) to maximize the metal removal rate and minimize the surface roughness and overcut. In this paper a comparative study is presented for the drilling of LM6 Al/B4C composites, comparing the significant impact of several machining process parameters, such as electrolyte concentration (g/l), machining voltage (V) and frequency (Hz), on the response parameters (surface roughness, material removal rate and overcut). A Taguchi L27 orthogonal array was chosen in Minitab 17 for the investigation of the experimental results, and multi-objective optimization is also performed with the genetic algorithm in MATLAB. After obtaining optimized results from the Taguchi method and the genetic algorithm, comparative results are presented.
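A minimal genetic-algorithm sketch of the multi-objective search is given below, using a weighted-sum fitness over invented placeholder response surfaces (in the study the responses are fitted from the L27 experiments and the GA runs in MATLAB); crossover is omitted for brevity.

```python
import random

random.seed(0)
# Assumed parameter bounds: electrolyte concentration (g/l), voltage (V), frequency (Hz).
BOUNDS = {"conc_g_l": (10, 30), "voltage_V": (8, 16), "freq_Hz": (40, 60)}

def responses(x):
    """Placeholder response surfaces standing in for the fitted regression models."""
    c, v, f = x["conc_g_l"], x["voltage_V"], x["freq_Hz"]
    mrr = 0.02 * c * v / 10
    ra = 0.5 + 0.03 * v - 0.002 * f
    overcut = 0.01 * v + 0.001 * c
    return mrr, ra, overcut

def fitness(x, w=(1.0, 0.5, 0.5)):
    """Weighted scalarization: maximize MRR, penalize roughness and overcut."""
    mrr, ra, overcut = responses(x)
    return w[0] * mrr - w[1] * ra - w[2] * overcut

def random_individual():
    return {k: random.uniform(*b) for k, b in BOUNDS.items()}

def mutate(x, rate=0.2):
    """Gaussian mutation clamped to the parameter bounds."""
    return {k: min(max(v + random.gauss(0, rate * (BOUNDS[k][1] - BOUNDS[k][0])),
                       BOUNDS[k][0]), BOUNDS[k][1]) for k, v in x.items()}

pop = [random_individual() for _ in range(30)]
for _ in range(50):                               # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                            # truncation selection
    pop = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = max(pop, key=fitness)
print(best, responses(best))
```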
Specification and preliminary design of an array processor
NASA Technical Reports Server (NTRS)
Slotnick, D. L.; Graham, M. L.
1975-01-01
The design of a computer suited to the class of problems typified by the general circulation of the atmosphere was investigated. A fundamental goal was that the resulting machine should have roughly 100 times the computing capability of an IBM 360/95 computer. A second requirement was that the machine should be programmable in a higher level language similar to FORTRAN. Moreover, the new machine would have to be compatible with the IBM 360/95 since the IBM machine would continue to be used for pre- and post-processing. A third constraint was that the cost of the new machine was to be significantly less than that of other extant machines of similar computing capability, such as the ILLIAC IV and CDC STAR. A final constraint was that it should be feasible to fabricate a complete system and put it in operation by early 1978. Although these objectives were generally met, considerable work remains to be done on the routing system.
Performance study of a data flow architecture
NASA Technical Reports Server (NTRS)
Adams, George
1985-01-01
Teams of scientists studied data flow concepts, static data flow machine architecture, and the VAL language. Each team mapped its application onto the machine and coded it in VAL. The principal findings of the study were: (1) Five of the seven applications used the full power of the target machine. The galactic simulation and multigrid fluid flow teams found that a significantly smaller version of the machine (16 processing elements) would suffice. (2) A number of machine design parameters including processing element (PE) function unit numbers, array memory size and bandwidth, and routing network capability were found to be crucial for optimal machine performance. (3) The study participants readily acquired VAL programming skills. (4) Participants learned that application-based performance evaluation is a sound method of evaluating new computer architectures, even those that are not fully specified. During the course of the study, participants developed models for using computers to solve numerical problems and for evaluating new architectures. These models form the bases for future evaluation studies.
A Novel Transverse Flux Machine for Vehicle Traction Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Zhao; Ahmed, Adeeb; Husain, Iqbal
2015-10-05
A novel transverse flux machine topology for electric vehicle traction application using ferrite magnets is presented in this paper. The proposed transverse flux topology utilizes novel magnet arrangements in the rotor that are similar to Halbach-array to boost flux linkage; on the stator side, cores are alternately arranged around a pair of ring windings in each phase to make use of the entire rotor flux that eliminates end windings. Analytical design considerations and finite element methods are used for an optimized design of a scooter in-wheel motor. Simulation results from Finite Element Analysis (FEA) show the motor achieved comparable torque density to conventional rare-earth permanent magnet machines. This machine is a viable candidate for direct drive applications with low cost and high torque density.
30 CFR 18.96 - Preparation of machines for inspection; requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Preparation of machines for inspection... Field Approval of Electrically Operated Mining Equipment § 18.96 Preparation of machines for inspection; requirements. (a) Upon receipt of written notice from the Health and Safety District Manager of the time and...
30 CFR 18.96 - Preparation of machines for inspection; requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Preparation of machines for inspection... Field Approval of Electrically Operated Mining Equipment § 18.96 Preparation of machines for inspection; requirements. (a) Upon receipt of written notice from the Health and Safety District Manager of the time and...
30 CFR 18.96 - Preparation of machines for inspection; requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Preparation of machines for inspection... Field Approval of Electrically Operated Mining Equipment § 18.96 Preparation of machines for inspection; requirements. (a) Upon receipt of written notice from the Health and Safety District Manager of the time and...
Machine Vision Giving Eyes to Robots. Resources in Technology.
ERIC Educational Resources Information Center
Technology Teacher, 1990
1990-01-01
This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)
Kim, Joshua; Lu, Weiguo; Zhang, Tiezhi
2014-02-07
Cone-beam computed tomography (CBCT) is an important online imaging modality for image guided radiotherapy. But suboptimal image quality and the lack of a real-time stereoscopic imaging function limit its implementation in advanced treatment techniques, such as online adaptive and 4D radiotherapy. Tetrahedron beam computed tomography (TBCT) is a novel online imaging modality designed to improve on the image quality provided by CBCT. TBCT geometry is flexible, and multiple detector and source arrays can be used for different applications. In this paper, we describe a novel dual source-dual detector TBCT system that is specially designed for LINAC radiation treatment machines. The imaging system is positioned in-line with the MV beam and is composed of two linear-array x-ray sources mounted alongside the electronic portal imaging device and two linear arrays of x-ray detectors mounted below the machine head. The detector and x-ray source arrays are orthogonal to each other, and each pair of source and detector arrays forms a tetrahedral volume. Four planar images can be obtained from different view angles at each gantry position at a frame rate as high as 20 frames per second. The overlapped regions provide a stereoscopic field of view of approximately 10-15 cm. With a half gantry rotation, a volumetric CT image can be reconstructed having a 45 cm field of view. Due to the scatter-rejecting design of the TBCT geometry, the system can potentially produce high quality 2D and 3D images with less radiation exposure. The design of the dual source-dual detector system is described, and preliminary results of studies performed on numerical phantoms and simulated patient data are presented.
NASA Astrophysics Data System (ADS)
Kim, Joshua; Lu, Weiguo; Zhang, Tiezhi
2014-02-01
Cone-beam computed tomography (CBCT) is an important online imaging modality for image guided radiotherapy. However, suboptimal image quality and the lack of a real-time stereoscopic imaging function limit its implementation in advanced treatment techniques, such as online adaptive and 4D radiotherapy. Tetrahedron beam computed tomography (TBCT) is a novel online imaging modality designed to improve on the image quality provided by CBCT. TBCT geometry is flexible, and multiple detector and source arrays can be used for different applications. In this paper, we describe a novel dual source-dual detector TBCT system that is specially designed for LINAC radiation treatment machines. The imaging system is positioned in-line with the MV beam and is composed of two linear array x-ray sources mounted alongside the electronic portal imaging device and two linear arrays of x-ray detectors mounted below the machine head. The detector and x-ray source arrays are orthogonal to each other, and each pair of source and detector arrays forms a tetrahedral volume. Four planar images can be obtained from different view angles at each gantry position at a frame rate as high as 20 frames per second. The overlapped regions provide a stereoscopic field of view of approximately 10-15 cm. With a half gantry rotation, a volumetric CT image can be reconstructed having a 45 cm field of view. Due to the scatter rejecting design of the TBCT geometry, the system can potentially produce high quality 2D and 3D images with less radiation exposure. The design of the dual source-dual detector system is described, and preliminary results of studies performed on numerical phantoms and simulated patient data are presented.
Manufacture of high aspect ratio micro-pillar wall shear stress sensor arrays
NASA Astrophysics Data System (ADS)
Gnanamanickam, Ebenezer P.; Sullivan, John P.
2012-12-01
In the field of experimental fluid mechanics, the measurement of unsteady, distributed wall shear stress has proved historically challenging. Recently, sensors based on an array of flexible micro-pillars have shown promise in carrying out such measurements. Similar sensors find use in other applications such as cellular mechanics. This work presents a technique for manufacturing micro-pillar arrays of high aspect ratio. An electric discharge machine (EDM) is used to manufacture a micro-drilling tool. This micro-drilling tool is used to form holes in a wax sheet, which acts as the mold for the micro-pillar array. Silicone rubber is cast in these molds to yield a micro-pillar array. Using this technique, micro-pillar arrays with a maximum aspect ratio of about 10 have been manufactured. Manufacturing issues encountered, steps to alleviate them, and the potential of the process to manufacture similar micro-pillar arrays in a time-efficient manner are also discussed.
Terahertz Array Receivers with Integrated Antennas
NASA Technical Reports Server (NTRS)
Chattopadhyay, Goutam; Llombart, Nuria; Lee, Choonsup; Jung, Cecile; Lin, Robert; Cooper, Ken B.; Reck, Theodore; Siles, Jose; Schlecht, Erich; Peralta, Alessandro;
2011-01-01
Highly sensitive terahertz heterodyne receivers have been mostly single-pixel. However, there is now a real need for multi-pixel array receivers at these frequencies, driven by the science and instrument requirements. In this paper we explore various receiver front-end and antenna architectures for use in multi-pixel integrated arrays at terahertz frequencies. Development of wafer-level integrated terahertz receiver front-ends using advanced semiconductor fabrication technologies has progressed very well over the past few years. Novel stacking of micro-machined silicon wafers, which allows for the 3-dimensional integration of various terahertz receiver components in extremely small packages, has made it possible to design multi-pixel heterodyne arrays. One of the critical technologies for achieving a fully integrated system is an antenna array compatible with the receiver array architecture. In this paper we explore different receiver and antenna architectures for multi-pixel heterodyne and direct detector arrays for various applications such as multi-pixel high resolution spectrometers and imaging radar at terahertz frequencies.
Implementing the concurrent operation of sub-arrays in the ALMA correlator
NASA Astrophysics Data System (ADS)
Amestica, Rodrigo; Perez, Jesus; Lacasse, Richard; Saez, Alejandro
2016-07-01
The ALMA correlator processes the digitized signals from 64 individual antennas to produce a grand total of 2016 correlated baselines, with runtime-selectable lag resolution and integration time. The on-line software system can process a maximum of 125M visibilities per second, producing an archiving data rate close to one sixteenth of that figure (7.8M visibilities per second, with a network transfer limit of 60 MB/sec). Mechanisms in the correlator hardware design make it possible to split the total number of antennas in the array into smaller subsets, or sub-arrays, such that they can share correlator resources while executing independent observations. The software part of the sub-system is responsible for configuring and scheduling correlator resources in such a way that observations among independent sub-arrays occur simultaneously while internally sharing correlator resources under a cooperative arrangement. Configuration of correlator modes through its CAN-bus interface and periodic geometric delay updates are the most relevant activities to schedule concurrently while observations happen at the same time among a number of sub-arrays. For that to work correctly, the software interface to sub-arrays schedules shared correlator resources sequentially before observations actually start on each sub-array. Start times for specific observations are optimized and reported back to the higher-level observing software. Once that initial sequential phase has taken place, simultaneous execution and recording of correlated data across different sub-arrays proceed concurrently, sharing the local network to broadcast results to other software sub-systems. This paper presents an overview of the different hardware and software actors within the correlator sub-system that implement some degree of concurrency and synchronization needed for seamless and simultaneous operation of multiple sub-arrays, the limitations stemming from the resource-sharing nature of the correlator, the limitations intrinsic to the digital technology available in the correlator hardware, and the milestones so far reached by this new ALMA feature.
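To make the two-phase scheduling idea concrete, here is a minimal Python sketch, assuming a lock that serializes access to shared correlator resources during configuration and free-running threads afterwards; the function and variable names and the timings are hypothetical and do not correspond to the actual ALMA software:

import threading
import time

CONFIG_LOCK = threading.Lock()   # serializes the shared-resource configuration phase

def observe(subarray_id, config_time, obs_time):
    # Phase 1: configure shared correlator resources one sub-array at a time.
    with CONFIG_LOCK:
        print(f"sub-array {subarray_id}: configuring correlator modes and delays")
        time.sleep(config_time)
        start_at = time.time() + 1.0   # optimized start time reported back to the observing software
    # Phase 2: once configured, observations run concurrently across sub-arrays.
    time.sleep(max(0.0, start_at - time.time()))
    print(f"sub-array {subarray_id}: observing")
    time.sleep(obs_time)
    print(f"sub-array {subarray_id}: done")

threads = [threading.Thread(target=observe, args=(i, 0.2, 1.0)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()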
Microstructured graphene arrays for highly sensitive flexible tactile sensors.
Zhu, Bowen; Niu, Zhiqiang; Wang, Hong; Leow, Wan Ru; Wang, Hua; Li, Yuangang; Zheng, Liyan; Wei, Jun; Huo, Fengwei; Chen, Xiaodong
2014-09-24
A highly sensitive tactile sensor is devised by applying microstructured graphene arrays as sensitive layers. The combination of graphene and anisotropic microstructures endows this sensor with an ultra-high sensitivity of -5.53 kPa⁻¹, an ultra-fast response time of only 0.2 ms, as well as good reliability, rendering it promising for the application of tactile sensing in artificial skin and human-machine interfaces. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Optical fabrication and testing; Proceedings of the Meeting, Singapore, Oct. 22-27, 1990
NASA Astrophysics Data System (ADS)
Lorenzen, Manfred; Campbell, Duncan R.; Johnson, Craig W.
1991-03-01
Various papers on optical fabrication and testing are presented. Individual topics addressed include: interferometry with laser diodes, new methods for economic production of prisms and lenses, interferometer accuracy and precision, optical testing with wavelength scanning interferometer, digital Talbot interferometer, high-sensitivity interferometric technique for strain measurements, absolute interferometric testing of spherical surfaces, contouring using gratings created on an LCD panel, three-dimensional inspection using laser-based dynamic fringe projection, noncontact optical microtopography, laser scan microscope and infrared laser scan microscope, photon scanning tunneling microscopy. Also discussed are: combination-matching problems in the layout design of minilaser rangefinder, design and testing of a cube-corner array for laser ranging, mode and far-field pattern of diode laser-phased arrays, new glasses for optics and optoelectronics, optical properties of Li-doped ZnO films, application and machining of Zerodur for optical purposes, finish machining of optical components in mass production.
Human-machine interface hardware: The next decade
NASA Technical Reports Server (NTRS)
Marcus, Elizabeth A.
1991-01-01
In order to understand where human-machine interface hardware is headed, it is important to understand where we are today, how we got there, and what our goals for the future are. As computers become more capable and faster, and programs become more sophisticated, it becomes apparent that the interface hardware is the key to an exciting future in computing. How can a user effectively interact with and control a seemingly limitless array of parameters? Today, the answer is most often a limitless array of controls. The link between these controls and human sensory-motor capabilities does not utilize existing human capabilities to their full extent. Interface hardware for teleoperation and virtual environments is now facing a crossroads in design. Therefore, we as developers need to explore how the combination of interface hardware, human capabilities, and user experience can be blended to get the best performance today and in the future.
Decoding grating orientation from microelectrode array recordings in monkey cortical area V4.
Manyakov, Nikolay V; Van Hulle, Marc M
2010-04-01
We propose an invasive brain-machine interface (BMI) that decodes the orientation of a visual grating from spike train recordings made with a 96-microelectrode array chronically implanted into the prelunate gyrus (area V4) of a rhesus monkey. The orientation is decoded irrespective of the grating's spatial frequency. Since pyramidal cells are less prominent in visual areas than in (pre)motor areas, the recordings contain spikes with smaller amplitudes relative to the noise level. Hence, rather than performing spike decoding, feature selection algorithms are applied to extract the required information for the decoder. Two types of feature selection procedures are compared, filter and wrapper. The wrapper is combined with a linear discriminant analysis classifier, and the filter is followed by a radial-basis function support vector machine classifier. In addition, since we have a multiclass classification problem, different methods for combining pairwise classifiers are compared.
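As an illustration of the filter-plus-classifier pipeline described above, the following Python sketch chains a univariate filter with an RBF-kernel support vector machine on synthetic data; the electrode features, labels and the choice of SelectKBest are assumptions for illustration, not the authors' exact procedure (scikit-learn's SVC combines pairwise classifiers internally for the multiclass case):

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 96))      # synthetic stand-in for per-electrode features
y = rng.integers(0, 8, size=200)    # synthetic labels for 8 grating orientations

# Filter-type feature selection followed by an RBF-kernel SVM classifier.
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=20),
                    SVC(kernel="rbf", C=1.0, gamma="scale"))
print(cross_val_score(clf, X, y, cv=5).mean())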
A Broadband Micro-Machined Far-Infrared Absorber
NASA Technical Reports Server (NTRS)
Wollack, E. J.; Datesman, A. M.; Jhabvala, C. A.; Miller, K. H.; Quijada, M. A.
2016-01-01
The experimental investigation of a broadband far-infrared meta-material absorber is described. The observed absorptance is greater than 0.95 from 1 to 20 terahertz (300-15 microns) over a temperature range spanning 5-300 degrees Kelvin. The meta-material, realized from an array of tapers approximately 100 microns in length, is largely insensitive to the detailed geometry of these elements and is cryogenically compatible with silicon-based micro-machined technologies. The electromagnetic response is in general agreement with a physically motivated transmission line model.
Statistical Machine Learning for Structured and High Dimensional Data
2014-09-17
AFRL-OSR-VA-TR-2014-0234: Statistical Machine Learning for Structured and High Dimensional Data. Larry Wasserman, Carnegie Mellon University. Final report, Dec 2009 - Aug 2014 (dated 14-06-2014). Research in the area of resource-constrained statistical estimation; keywords: machine learning, high-dimensional statistics. Contact: John Lafferty, 773-702-3813.
ERIC Educational Resources Information Center
Sukwong, Orathai
2013-01-01
Virtualization enables the consolidation of multiple servers on a single physical machine, increasing infrastructure utilization. Maximizing the ratio of server virtual machines (VMs) to physical machines, namely the consolidation ratio, becomes an important goal toward infrastructure cost saving in a cloud. However, the consolidation…
Coordinated Radar Resource Management for Networked Phased Array Radars
2014-12-01
Coordinated radar resource management for networked phased array radars. Peter W. Moo and Zhen Ding, Radar Sensing & Exploitation Section, Defence...
Hybrid Cloud Computing Environment for EarthCube and Geoscience Community
NASA Astrophysics Data System (ADS)
Yang, C. P.; Qin, H.
2016-12-01
The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resource synchronizing and bursting between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience community can deploy and manage applications using base virtual machine images or customized virtual machines, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating their projects to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and some other EarthCube building blocks. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares for the specific needs (e.g. images, port numbers, usable cloud capacity) of each project in advance, based on communications between ECITE and the participating projects; the scientists or IT technicians in those projects then launch one or multiple virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents or data without having to deal with the heterogeneity in structure and operation among different cloud platforms.
Backshort-Under-Grid arrays for infrared astronomy
NASA Astrophysics Data System (ADS)
Allen, C. A.; Benford, D. J.; Chervenak, J. A.; Chuss, D. T.; Miller, T. M.; Moseley, S. H.; Staguhn, J. G.; Wollack, E. J.
2006-04-01
We are developing a kilopixel, filled bolometer array for space infrared astronomy. The array consists of three individual components, to be merged into a single, working unit: (1) a transition edge sensor bolometer array, operating in the milliKelvin regime, (2) a quarter-wave backshort grid, and (3) a superconducting quantum interference device multiplexer readout. The detector array is designed as a filled, square grid of suspended, silicon bolometers with superconducting sensors. The backshort arrays are fabricated separately and will be positioned in the cavities created behind each detector during fabrication. The grids have a unique interlocking feature machined into the walls for positioning and mechanical stability. The spacing of the backshort beneath the detector grid can be set from ~30 to 300 μm by independently adjusting two process parameters during fabrication. The ultimate goal is to develop a large-format array architecture with background-limited sensitivity, suitable for a wide range of wavelengths and applications, to be directly bump bonded to a multiplexer circuit. We have produced prototype two-dimensional arrays having 8×8 detector elements. We present detector design, fabrication overview, and assembly technologies.
Ultra-Compact Transputer-Based Controller for High-Level, Multi-Axis Coordination
NASA Technical Reports Server (NTRS)
Zenowich, Brian; Crowell, Adam; Townsend, William T.
2013-01-01
The design of machines that rely on arrays of servomotors such as robotic arms, orbital platforms, and combinations of both, imposes a heavy computational burden to coordinate their actions to perform coherent tasks. For example, the robotic equivalent of a person tracing a straight line in space requires enormously complex kinematics calculations, and complexity increases with the number of servo nodes. A new high-level architecture for coordinated servo-machine control enables a practical, distributed transputer alternative to conventional central processor electronics. The solution is inherently scalable, dramatically reduces bulkiness and number of conductor runs throughout the machine, requires only a fraction of the power, and is designed for cooling in a vacuum.
Novel Transverse Flux Machine for Vehicle Traction Applications: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Z.; Ahmed, A.; Husain, I.
2015-04-02
A novel transverse flux machine topology for electric vehicle traction applications using ferrite magnets is presented in this paper. The proposed transverse flux topology utilizes novel magnet arrangements in the rotor that are similar to the Halbach array to boost flux linkage; on the stator side, cores are alternately arranged around a pair of ring windings in each phase to make use of the entire rotor flux that eliminates end windings. Analytical design considerations and finite-element methods are used for an optimized design of a scooter in-wheel motor. Simulation results from finite element analysis (FEA) show that the motor achieved comparable torque density to conventional rare-earth permanent magnet (PM) machines. This machine is a viable candidate for direct-drive applications with low cost and high torque density.
School Community Relations and Resources in Effective Schools.
ERIC Educational Resources Information Center
Michel, George J.
1985-01-01
Discusses resources available to schools operating as open and closed systems. Examines school/community relations and school effectiveness, schools as resource machines, and resources offered by teachers and parents. Stresses that broad concepts of community, good communication, and citizen involvement can utilize resources at high levels of…
NASA Astrophysics Data System (ADS)
Lu, Yuan-Yuan; Wang, Ji-Bo; Ji, Ping; He, Hongyu
2017-09-01
In this article, single-machine group scheduling with learning effects and convex resource allocation is studied. The goal is to find the optimal job schedule, the optimal group schedule, and the resource allocations of jobs and groups. For the problem of minimizing the makespan subject to limited resource availability, it is proved that the problem can be solved in polynomial time under the condition that the setup times of groups are independent. For general setup times of groups, a heuristic algorithm and a branch-and-bound algorithm are proposed. Computational experiments show that the performance of the heuristic algorithm is fairly accurate in obtaining near-optimal solutions.
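For readers unfamiliar with this class of models, the short Python sketch below computes the makespan of a given group sequence under a generic position-based learning effect p_{j,r} = p_j * r^a with group setup times; the model, the within-group ordering and all numbers are illustrative assumptions, not the paper's exact formulation or algorithms:

# Illustrative makespan computation for single-machine group scheduling with a
# position-based learning effect p_{j,r} = p_j * r**a (a <= 0).
def makespan(groups, setup_times, a=-0.2):
    t = 0.0
    for group, setup in zip(groups, setup_times):
        t += setup                                      # setup time before the group starts
        for r, p in enumerate(sorted(group), start=1):  # jobs ordered within the group
            t += p * r ** a                             # actual processing time shrinks with position
    return t

jobs = [[4.0, 2.0, 3.0], [5.0, 1.0]]   # two groups of basic processing times
print(makespan(jobs, setup_times=[1.0, 2.0]))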
Experimental Machine Learning of Quantum States
NASA Astrophysics Data System (ADS)
Gao, Jun; Qiao, Lu-Feng; Jiao, Zhi-Qiang; Ma, Yue-Chi; Hu, Cheng-Qiu; Ren, Ruo-Jing; Yang, Ai-Lin; Tang, Hao; Yung, Man-Hong; Jin, Xian-Min
2018-06-01
Quantum information technologies provide promising applications in communication and computation, while machine learning has become a powerful technique for extracting meaningful structures in "big data." A crossover between quantum information and machine learning represents a new interdisciplinary area stimulating progress in both fields. Traditionally, a quantum state is characterized by quantum-state tomography, which is a resource-consuming process when scaled up. Here we experimentally demonstrate a machine-learning approach to construct a quantum-state classifier for identifying the separability of quantum states. We show that it is possible to experimentally train an artificial neural network to efficiently learn and classify quantum states, without the need of obtaining the full information of the states. We also show how adding a hidden layer of neurons to the neural network can significantly boost the performance of the state classifier. These results shed new light on how classification of quantum states can be achieved with limited resources, and represent a step towards machine-learning-based applications in quantum information processing.
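The following Python sketch conveys the flavor of training a small neural-network classifier to distinguish entangled from separable states; unlike the experiment described above, it is a purely numerical toy that uses full two-qubit pure-state amplitudes as features and labels them via the negativity of the partial transpose, and the network size and data generation are arbitrary assumptions:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def random_pure_state(dim):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def negativity(psi):
    # Partial-transpose criterion for a two-qubit pure state |psi>.
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_pt = rho.transpose(0, 3, 2, 1).reshape(4, 4)
    return -np.clip(np.linalg.eigvalsh(rho_pt).min(), None, 0.0)

X, y = [], []
for _ in range(2000):
    if rng.random() < 0.5:                      # product (separable) state
        psi = np.kron(random_pure_state(2), random_pure_state(2))
    else:                                       # generic pure state, almost surely entangled
        psi = random_pure_state(4)
    X.append(np.concatenate([psi.real, psi.imag]))
    y.append(int(negativity(psi) > 1e-6))

Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(y), random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(Xtr, ytr)
print("test accuracy:", net.score(Xte, yte))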
Freitas, B.L.; Skidmore, J.A.
1999-06-01
A substrate is used to fabricate a low-cost laser diode array. A substrate is machined from an electrically insulative material that is thermally conductive, or two substrates can be bonded together in which the top substrate is electrically as well as thermally conductive. The substrate thickness is slightly longer than the cavity length, and the width of the groove is wide enough to contain a bar and spring (which secures the laser bar firmly along one face of the groove). The spring also provides electrical continuity from the backside of the bar to the adjacent metalization layer on the laser bar substrate. Arrays containing one or more bars can be formed by creating many grooves at various spacings. Along the groove, many bars can be adjoined at the edges to provide parallel electrical conduction. This architecture allows precise and predictable registration of an array of laser bars to a self-aligned microlens array at low cost. 19 figs.
Array Technology for Terahertz Imaging
NASA Technical Reports Server (NTRS)
Reck, Theodore; Siles, Jose; Jung, Cecile; Gill, John; Lee, Choonsup; Chattopadhyay, Goutam; Mehdi, Imran; Cooper, Ken
2012-01-01
Heterodyne terahertz (0.3-3 THz) imaging systems are currently limited to a single pixel or a low number of pixels. Drastic improvements in imaging sensitivity and speed can be achieved by replacing single-pixel systems with an array of detectors. This paper presents an array topology that is being developed at the Jet Propulsion Laboratory based on the micromachining of silicon. This technique fabricates the array's package and waveguide components by plasma etching of silicon, resulting in devices with precision surpassing that of current metal machining techniques. Using silicon increases the versatility of the packaging, enabling a variety of orientations of circuitry within the device, which increases circuit density and design options. The design of a two-pixel transceiver utilizing a stacked architecture is presented that achieves a pixel spacing of 10 mm. By allowing coupling only from the top and bottom of the package, the design can readily be arrayed in two dimensions with a spacing of 10 mm x 18 mm.
Halbach array motor/generators: A novel generalized electric machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merritt, B.T.; Post, R.F.; Dreifuerst, G.R.
1995-02-01
For many years Klaus Halbach has been investigating novel designs for permanent magnet arrays, using advanced analytical approaches and employing a keen insight into such systems. One of his motivations for this research was to find more efficient means for the utilization of permanent magnets for use in particle accelerators and in the control of particle beams. As a result of his pioneering work, high power free-electron laser systems, such as the ones built at the Lawrence Livermore Laboratory, became feasible, and his arrays have been incorporated into other particle-focusing systems of various types. This paper reports another, quite different, application of Klaus' work, in the design of high power, high efficiency, electric generators and motors. When tested, these motor/generator systems display some rather remarkable properties. Their success derives from the special properties which these arrays, which the authors choose to call "Halbach arrays," possess.
Freitas, Barry L.; Skidmore, Jay A.
1999-01-01
A substrate is used to fabricate a low-cost laser diode array. A substrate is machined from an electrically insulative material that is thermally conductive, or two substrates can be bonded together in which the top substrate is electrically as well as thermally conductive. The substrate thickness is slightly longer than the cavity length, and the width of the groove is wide enough to contain a bar and spring (which secures the laser bar firmly along one face of the groove). The spring also provides electrical continuity from the backside of the bar to the adjacent metalization layer on the laser bar substrate. Arrays containing one or more bars can be formed by creating many grooves at various spacings. Along the groove, many bars can be adjoined at the edges to provide parallel electrical conduction. This architecture allows precise and predictable registration of an array of laser bars to a self-aligned microlens array at low cost.
NASA Technical Reports Server (NTRS)
Muellerschoen, R. J.
1988-01-01
A unified method to permute vector-stored upper-triangular diagonal factorized covariance (UD) and vector stored upper-triangular square-root information filter (SRIF) arrays is presented. The method involves cyclical permutation of the rows and columns of the arrays and retriangularization with appropriate square-root-free fast Givens rotations or elementary slow Givens reflections. A minimal amount of computation is performed and only one scratch vector of size N is required, where N is the column dimension of the arrays. To make the method efficient for large SRIF arrays on a virtual memory machine, three additional scratch vectors each of size N are used to avoid expensive paging faults. The method discussed is compared with the methods and routines of Bierman's Estimation Subroutine Library (ESL).
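A dense-matrix Python sketch of the underlying idea is given below: a cyclic column permutation is applied to an upper-triangular factor and the triangle is restored with Givens rotations; the routine operates on full matrices rather than the vector-stored arrays used by the ESL, so it is only an illustration of the algorithm, not a reimplementation of those routines:

import numpy as np

def cyclic_permute_and_retriangularize(R, start, end):
    # Cyclically shift columns start..end (column `start` moves to position `end`),
    # then restore the upper-triangular form with Givens rotations.
    R = R.copy()
    cols = list(range(R.shape[1]))
    cols[start:end + 1] = cols[start + 1:end + 1] + [cols[start]]
    R = R[:, cols]
    n = R.shape[0]
    for j in range(n):
        for i in range(n - 1, j, -1):           # zero sub-diagonal entries column by column
            if abs(R[i, j]) > 1e-15:
                r = np.hypot(R[i - 1, j], R[i, j])
                c, s = R[i - 1, j] / r, R[i, j] / r
                G = np.array([[c, s], [-s, c]])
                R[i - 1:i + 1, :] = G @ R[i - 1:i + 1, :]
    return R

R = np.triu(np.random.default_rng(2).normal(size=(5, 5)))
R2 = cyclic_permute_and_retriangularize(R, 1, 3)
print(np.allclose(R2, np.triu(R2)))   # True: the permuted factor is upper-triangular again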
Very high frequency (beyond 100 MHz) PZT kerfless linear arrays.
Wu, Da-Wei; Zhou, Qifa; Geng, Xuecang; Liu, Chang-Geng; Djuth, Frank; Shung, K Kirk
2009-10-01
This paper presents the design, fabrication, and measurements of very high frequency kerfless linear arrays prepared from PZT film and PZT bulk material. A 12-µm PZT thick film fabricated from PZT-5H powder/solution composite and a piece of 15-µm PZT-5H sheet were used to fabricate 32-element kerfless high-frequency linear arrays with photolithography. The PZT thick film was prepared by spin-coating of PZT sol-gel composite solution. The thin PZT-5H sheet sample was prepared by lapping a PZT-5H ceramic with a precision lapping machine. The measured results of the 2 arrays were compared. The PZT film array had a center frequency of 120 MHz, a bandwidth of 60% with a parylene matching layer, and an insertion loss of 41 dB. The PZT ceramic sheet array was found to have a center frequency of 128 MHz with a poorer bandwidth (40% with a parylene matching layer) but a better sensitivity (28 dB insertion loss).
Very High Frequency (Beyond 100 MHz) PZT Kerfless Linear Arrays
Wu, Da-Wei; Zhou, Qifa; Geng, Xuecang; Liu, Chang-Geng; Djuth, Frank; Shung, K. Kirk
2010-01-01
This paper presents the design, fabrication, and measurements of very high frequency kerfless linear arrays prepared from PZT film and PZT bulk material. A 12-µm PZT thick film fabricated from PZT-5H powder/solution composite and a piece of 15-µm PZT-5H sheet were used to fabricate 32-element kerfless high-frequency linear arrays with photolithography. The PZT thick film was prepared by spin-coating of PZT sol-gel composite solution. The thin PZT-5H sheet sample was prepared by lapping a PZT-5H ceramic with a precision lapping machine. The measured results of the 2 arrays were compared. The PZT film array had a center frequency of 120 MHz, a bandwidth of 60% with a parylene matching layer, and an insertion loss of 41 dB. The PZT ceramic sheet array was found to have a center frequency of 128 MHz with a poorer bandwidth (40% with a parylene matching layer) but a better sensitivity (28 dB insertion loss). PMID:19942516
NASA Astrophysics Data System (ADS)
Chen, Jinzhong; He, Renyang; Kang, Xiaowei; Yang, Xuyun
2015-10-01
The non-destructive testing of small-sized (M12-M20) stainless steel bolts in service has always been a technical problem. This article focuses on the simulation and experimental study of stainless steel bolts with artificial defect reflectors using ultrasonic phased array inspection. Based on observation of the ultrasonic phased array sound field distribution in stainless steel bolts, together with simulation modelling and analysis of the phased array probes' detection performance for various defect sizes, different artificial defect reflectors are machined in M16 stainless steel bolts with reference to the simulation results. Next, those bolts are tested using a 5 MHz, 10-wafer phased array probe. The test results prove that ultrasonic phased array inspection can detect cracks 1 mm in diameter at different depths in M16 stainless steel bolts, as well as a metal loss of Φ1 mm in through-hole bolts, which provides technical support for future non-destructive testing of stainless steel bolts in service.
1988-05-01
… Shearing Machines WR/MMI DG; 3446 Forging Machinery and Hammers WR/MMI DG; 3447 Wire and Metal Ribbon Forming Machines WR/MMI DG; 3448 Riveting Machines WR/MMI DG; 3449 Miscellaneous Secondary Metal Forming & Cutting Machinery WR/MMI DG; 3450 Machine Tools, Portable WR/MMI DG; 3455 Cutting Tools for … Secondary Metalworking Machinery WR/MMI DG WR; 3465 Production Jigs, Fixtures and Templates WR/MMI DG WR; 3470 Machine Shop Sets, Kits, and Outfits WR/MMI DG
NASA Astrophysics Data System (ADS)
Lizotte, Todd E.; Ohar, Orest
2004-02-01
Illuminators used in machine vision applications typically produce non-uniform illumination on the targeted surface being observed, causing a variety of problems with machine vision alignment or measurement. In most circumstances the light source is broad spectrum, leading to further problems with image quality when viewed through a CCD camera. Configured with a simple light bulb and a mirrored reflector and/or frosted glass plates, these general illuminators are appropriate only for macro applications. Over the last 5 years, newer illuminators have come onto the market, including circular or rectangular arrays of high-intensity light-emitting diodes. These diode arrays are used to create monochromatic flood illumination of a surface that is to be inspected. The problem with these illumination techniques is that most of the light does not illuminate the desired areas but spreads broadly across the surface or, when the arrays are integrated with diffuser elements, tends to create shadowing effects similar to those of the broad-spectrum light sources. In many cases a user will try to increase the performance of these illuminators by adding several of these assemblies together, increasing the intensity, or by moving the illumination source closer to or farther from the surface being inspected. These non-uniform techniques can lead to machine vision errors, where the vision system may read false information, such as interpreting non-uniform lighting or shadowing effects as defects. This paper will cover a technique involving the use of holographic/diffractive hybrid optical elements that are integrated into standard and customized light sources used in the machine vision industry. The bulk of the paper will describe the function and fabrication of the holographic/diffractive optics and how they can be tailored to improve illuminator design. Further, a specific design will be presented and examples of it in operation will be disclosed.
Chip morphology as a performance predictor during high speed end milling of soda lime glass
NASA Astrophysics Data System (ADS)
Bagum, M. N.; Konneh, M.; Abdullah, K. A.; Ali, M. Y.
2018-01-01
Soda lime glass has applications in DNA arrays and lab-on-chip manufacturing. Although investigations have revealed that machining of such brittle material is possible in ductile mode under controlled cutting parameters and tool geometry, it remains a challenging task. Furthermore, the ability to machine in ductile mode is usually assessed through examination of the machined surface texture. Soda lime glass is a strain-rate and temperature sensitive material. Hence, the influence of the adiabatic heat generated during high speed end milling with an uncoated tungsten carbide tool on the attainment of a ductile surface is investigated in this research. Experimental runs were designed using central composite design (CCD), taking spindle speed, feed rate and depth of cut as input variables and tool-chip contact point temperature (Ttc) and surface roughness (Rt) as responses. Along with the machined surface texture, Rt and chip morphology were examined to assess the machinability of soda lime glass. The relation between Ttc and chip morphology was examined. The investigation showed that around the glass transition temperature (Tg), ductile chips were produced and, subsequently, a clean and ductile final machined surface was obtained.
NASA Technical Reports Server (NTRS)
Bloch, J. T.; Hanger, R. T.; Nichols, F. W.
1979-01-01
A modified 70 mm movie film editor automatically attaches solar cells to a flexible film substrate. The machine can rapidly and inexpensively assemble cells for solar panels at a rate of 250 cells per minute. Further development is expected to boost the production rate to 1000 cells per minute.
NASA Astrophysics Data System (ADS)
Sereda, T. G.; Kostarev, S. N.
2018-03-01
A theoretical basis for linking the material flows of a machine-building enterprise with an automated decision-making system is developed. The machine-building production process is represented as a space-time system. The conservation-of-motion equation is based on the calculation of production volume. The set of resource variables includes equipment capacities and operators. Disturbances such as defects and failures are investigated within this space-time framework. An equation for the flow of parts along a manufacturing route is derived. The resulting analytical expression describes the state of the part flow, taking into account the influence of equipment operation and personnel injuries.
Dynamic provisioning of local and remote compute resources with OpenStack
NASA Astrophysics Data System (ADS)
Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.
2015-12-01
Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events and for Monte-Carlo simulation. The Institut für Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to the growing amount of recorded data and the rising complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point of entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.
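As a concrete illustration of provisioning a virtual worker node, the Python sketch below uses the openstacksdk client; the cloud name, image, flavor and network names are placeholders for a site-specific setup (credentials are assumed to be configured in clouds.yaml), and the snippet is not taken from the system described above:

import openstack

# Connect using a named cloud from clouds.yaml and boot one worker VM.
conn = openstack.connect(cloud="ekp-private-cloud")      # placeholder cloud name

image = conn.compute.find_image("hep-worker-image")      # placeholder image name
flavor = conn.compute.find_flavor("m1.large")
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="mc-worker-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)            # block until the VM is active
print(server.status)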
NASA Astrophysics Data System (ADS)
Miller, Timothy M.; Abrahams, John H.; Allen, Christine A.
2006-04-01
We report a fabrication process for deep etching silicon to different depths with a single masking layer, using standard masking and exposure techniques. Using this technique, we have incorporated a deep notch in the support walls of a transition-edge-sensor (TES) bolometer array during the detector back-etch, while simultaneously creating a cavity behind the detector. The notches serve to receive the support beams of a separate component, the Backshort-Under-Grid (BUG), an array of adjustable height quarter-wave backshorts that fill the cavities behind each pixel in the detector array. The backshort spacing, set prior to securing to the detector array, can be controlled from 25 to 300 μm by adjusting only a few process steps. In addition to backshort spacing, the interlocking beams and notches provide positioning and structural support for the ˜1 mm pitch, 8×8 array. This process is being incorporated into developing a TES bolometer array with an adjustable backshort for use in far-infrared astronomy. The masking technique and machining process used to fabricate the interlocking walls will be discussed.
A New Approach to Geoengineering: Manna From Heaven
NASA Astrophysics Data System (ADS)
Ellery, Alex
2015-04-01
Geo-engineering, although controversial, has become an emerging factor in coping with climate change. Although most are terrestrial-based technologies, I focus on a space-based approach implemented through a solar shield system. I present several new elements that essentially render the high-cost criticism moot. Of special relevance are two seemingly unrelated technologies - the Resource Prospector Mission (RPM) to the Moon in 2018 that shall implement a technology demonstration of simple material resource extraction from lunar regolith, and the emergence of multi-material 3D printing technology that promises unprecedented robotic manufacturing capabilities. My research group has begun theoretical and experimentation work in developing the concept of a 3D printed electric motor system from lunar-type resources. The electric motor underlies every universal mechanical machine. Together with 3D printed electronics, I submit that this would enable self-replicating machines to be realised. A detailed exposition on how this may be achieved will be outlined. Such self-replicating machines could construct the spacecraft required to implement a solar shield and solar power satellites in large numbers from lunar resources with the same underlying technologies at extremely low cost.
Quantum machine learning for quantum anomaly detection
NASA Astrophysics Data System (ADS)
Liu, Nana; Rebentrost, Patrick
2018-04-01
Anomaly detection is used for identifying data that deviate from "normal" data patterns. Its usage on classical data finds diverse applications in many important areas such as finance, fraud detection, medical diagnoses, data cleaning, and surveillance. With the advent of quantum technologies, anomaly detection of quantum data, in the form of quantum states, may become an important component of quantum applications. Machine-learning algorithms are playing pivotal roles in anomaly detection using classical data. Two widely used algorithms are the kernel principal component analysis and the one-class support vector machine. We find corresponding quantum algorithms to detect anomalies in quantum states. We show that these two quantum algorithms can be performed using resources that are logarithmic in the dimensionality of quantum states. For pure quantum states, these resources can also be logarithmic in the number of quantum states used for training the machine-learning algorithm. This makes these algorithms potentially applicable to big quantum data applications.
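For orientation, the classical counterpart of one of the two algorithms, a one-class support vector machine, can be run in a few lines of Python; the data here are synthetic Gaussian samples and the snippet illustrates only the classical technique, not the quantum algorithms proposed in the paper:

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
normal = rng.normal(0.0, 1.0, size=(500, 4))            # "normal" training samples
test = np.vstack([rng.normal(0.0, 1.0, size=(10, 4)),   # more normal points
                  rng.normal(6.0, 1.0, size=(5, 4))])   # anomalies far from the training cloud

detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal)
print(detector.predict(test))   # +1 = normal, -1 = anomaly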
STS-109 Mission Highlights Resource Tape
NASA Astrophysics Data System (ADS)
2002-05-01
This video, Part 2 of 4, shows the activities of the STS-109 crew (Scott Altman, Commander; Duane Carey, Pilot; John Grunsfeld, Payload Commander; Nancy Currie, James Newman, Richard Linnehan, Michael Massimino, Mission Specialists) during flight days 4 and 5. The activities from the other flight days can be seen on 'STS-109 Mission Highlights Resource Tape' Part 1 of 4 (internal ID 2002139471), 'STS-109 Mission Highlights Resource Tape' Part 3 of 4 (internal ID 2002139476), and 'STS-109 Mission Highlights Resource Tape' Part 4 of 4 (internal ID 2002137577). The primary activities during these days were EVAs (extravehicular activities) to replace two solar arrays on the HST (Hubble Space Telescope). Footage from flight day 4 records an EVA by Grunsfeld and Linnehan, including their exit from Columbia's payload bay airlock, their stowing of the old HST starboard rigid array on the rigid array carrier in Columbia's payload bay, their attachment of the new array on HST, the installation of a new starboard diode box, and the unfolding of the new array. The pistol grip space tool used to fasten the old array in its new location is shown in use. The video also includes several shots of the HST with Earth in the background. On flight day 5, Newman and Massimino conduct an EVA to change the port side array and diode box on HST. This EVA is very similar to the one on flight day 4, and is covered similarly in the video. A hand-operated ratchet is shown in use. In addition to a repeat of the previous tasks, the astronauts change HST's reaction wheel assembly and, because they are ahead of schedule, perform additional installation work and lubricate an instrument door on the telescope. The Earth views include a view of Egypt and Israel, with the Nile River, Red Sea, and Mediterranean Sea.
NASA Astrophysics Data System (ADS)
Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.
2015-12-01
Cloud resources nowadays contribute an essential share of the resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteer computers (e.g. LHC@Home 2.0). In any case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. Since version 3, the CernVM virtual machine is a minimal and versatile virtual machine image capable of booting different operating systems. The virtual machine image is less than 20 megabytes in size. The actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype to a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale. We also provide an outlook on the upcoming developments. These developments include adding support for Scientific Linux 7, the use of container virtualization, such as provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.
Rosen, M. A.; Sampson, J. B.; Jackson, E. V.; Koka, R.; Chima, A. M.; Ogbuagu, O. U.; Marx, M. K.; Koroma, M.; Lee, B. H.
2014-01-01
Background Anaesthesia care in developed countries involves sophisticated technology and experienced providers. However, advanced machines may be inoperable or fail frequently when placed into the austere medical environment of a developing country. Failure mode and effects analysis (FMEA) is a method for engaging local staff in identifying real or potential breakdowns in processes or work systems and to develop strategies to mitigate risks. Methods Nurse anaesthetists from the two tertiary care hospitals in Freetown, Sierra Leone, participated in three sessions moderated by a human factors specialist and an anaesthesiologist. Sessions were audio recorded, and group discussion graphically mapped by the session facilitator for analysis and commentary. These sessions sought to identify potential barriers to implementing an anaesthesia machine designed for austere medical environments—the universal anaesthesia machine (UAM)—and also engaging local nurse anaesthetists in identifying potential solutions to these barriers. Results Participating Sierra Leonean clinicians identified five main categories of failure modes (resource availability, environmental issues, staff knowledge and attitudes, and workload and staffing issues) and four categories of mitigation strategies (resource management plans, engaging and educating stakeholders, peer support for new machine use, and collectively advocating for needed resources). Conclusions We identified factors that may limit the impact of a UAM and devised likely effective strategies for mitigating those risks. PMID:24833727
ERIC Educational Resources Information Center
Texas State Technical Coll. System, Waco.
This package consists of course syllabi, an instructor's handbook, and a student laboratory manual for a 1-year vocational training program to prepare students for entry-level employment as laser machining technicians. The program was developed through a modification of the DACUM (Developing a Curriculum) technique. The course syllabi volume…
Finite element computation on nearest neighbor connected machines
NASA Technical Reports Server (NTRS)
Mcaulay, A. D.
1984-01-01
Research aimed at faster, more cost-effective parallel machines and algorithms for improving designer productivity with finite element computations is discussed. A set of 8 boards, containing 4 nearest-neighbor-connected arrays of commercially available floating point chips and substantial memory, is inserted into a commercially available machine. One-tenth Mflop (64-bit operation) processors provide 89% efficiency when solving the equations arising in a finite element problem for a single-variable regular grid of size 40 by 40 by 40. This is approximately 15 to 20 times faster than a much more expensive machine such as a VAX 11/780 used in double precision. The efficiency falls off as faster or more processors are envisaged because communication times become dominant. A novel successive overrelaxation algorithm which uses cyclic reduction in order to permit data transfer and computation to overlap in time is proposed.
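The cyclic-reduction variant is not spelled out in the abstract, but the standard red-black (odd-even) ordering conveys the same idea of decoupling updates so that computation on one color can overlap with nearest-neighbor communication of the other; the Python sketch below is such a generic red-black SOR solver for the 2D Poisson equation and is an assumption-based illustration, not the algorithm proposed in the paper:

import numpy as np

def red_black_sor(u, f, h, omega=1.8, sweeps=200):
    # Red-black SOR for -laplacian(u) = f on a square grid with fixed boundary values.
    # Points of one color depend only on the other color, so on a nearest-neighbor
    # machine each color's update can overlap with communication of the other color.
    for _ in range(sweeps):
        for color in (0, 1):
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == color:
                        gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1] + h * h * f[i, j])
                        u[i, j] += omega * (gs - u[i, j])
    return u

n = 40
u = np.zeros((n + 1, n + 1))        # zero boundary and initial guess
f = np.ones((n + 1, n + 1))         # unit source term
u = red_black_sor(u, f, h=1.0 / n)
print(u[n // 2, n // 2])            # center value of the approximate solution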
Protein machines and self assembly in muscle organization
NASA Technical Reports Server (NTRS)
Barral, J. M.; Epstein, H. F.
1999-01-01
The remarkable order of striated muscle is the result of a complex series of protein interactions at different levels of organization. Within muscle, the thick filament and its major protein myosin are classical examples of functioning protein machines. Our understanding of the structure and assembly of thick filaments and their organization into the regular arrays of the A-band has recently been enhanced by the application of biochemical, genetic, and structural approaches. Detailed studies of the thick filament backbone have shown that the myosins are organized into a tubular structure. Additional protein machines and specific myosin rod sequences have been identified that play significant roles in thick filament structure, assembly, and organization. These include intrinsic filament components, cross-linking molecules of the M-band and constituents of the membrane-cytoskeleton system. Muscle organization is directed by the multistep actions of protein machines that take advantage of well-established self-assembly relationships. Copyright 1999 John Wiley & Sons, Inc.
Micro-machined calorimetric biosensors
Doktycz, Mitchel J.; Britton, Jr., Charles L.; Smith, Stephen F.; Oden, Patrick I.; Bryan, William L.; Moore, James A.; Thundat, Thomas G.; Warmack, Robert J.
2002-01-01
A method and apparatus are provided for detecting and monitoring micro-volumetric enthalpic changes caused by molecular reactions. Micro-machining techniques are used to create very small thermally isolated masses incorporating temperature-sensitive circuitry. The thermally isolated masses are provided with a molecular layer or coating, and the temperature-sensitive circuitry provides an indication when the molecules of the coating are involved in an enthalpic reaction. The thermally isolated masses may be provided singly or in arrays and, in the latter case, the molecular coatings may differ to provide qualitative and/or quantitative assays of a substance.
Gambling revenues as a public administration issue: electronic gaming machines in Victoria.
Pickernell, David; Keast, Robyn; Brown, Kerry; Yousefpour, Nina; Miller, Chris
2013-12-01
Gambling activities and the revenues derived from them have been seen as a way to increase economic development in deprived areas. There are also, however, concerns about the effects of gambling in general, and electronic gaming machines (EGMs) in particular, on the resources available to the localities in which they are situated. This paper focuses on the factors that determine the extent and spending of community benefit-related, EGM-generated resources within Victoria, Australia, examining in particular the relationships between EGM activity and socio-economic and social capital indicators, and how this relates to the community benefit resources generated by gaming.
Harper, Jason C.; Carson, Bryan D.; Bachand, George D.; ...
2015-07-14
Despite significant progress in the development of bioanalytical devices, cost, complexity, access to reagents and lack of infrastructure have prevented the use of these technologies in resource-limited regions. To provide a sustainable tool in the global effort to combat infectious diseases, the diagnostic device must be low cost, simple to operate and read, robust, and have sensitivity and specificity comparable to laboratory analysis. Thus, in this mini-review we describe recent work using laser-machined plastic laminates to produce diagnostic devices that are capable of a wide variety of bioanalytical measurements and show great promise towards future use in low-resource environments.
Big data challenges for large radio arrays
NASA Astrophysics Data System (ADS)
Jones, D. L.; Wagstaff, K.; Thompson, D. R.; D'Addario, L.; Navarro, R.; Mattmann, C.; Majid, W.; Lazio, J.; Preston, J.; Rebbapragada, U.
2012-03-01
Future large radio astronomy arrays, particularly the Square Kilometre Array (SKA), will be able to generate data at rates far higher than can be analyzed or stored affordably with current practices. This is, by definition, a "big data" problem, and requires an end-to-end solution if future radio arrays are to reach their full scientific potential. Similar data processing, transport, storage, and management challenges face next-generation facilities in many other fields. The Jet Propulsion Laboratory is developing technologies to address big data issues, with an emphasis in three areas: 1) lower-power digital processing architectures to make high-volume data generation operationally affordable, 2) data-adaptive machine learning algorithms for real-time analysis (or "data triage") of large data volumes, and 3) scalable data archive systems that allow efficient data mining and remote user code to run locally where the data are stored.
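To illustrate what a simple "data triage" stage can look like, the Python sketch below flags rows of a synthetic dynamic spectrum whose peak deviates strongly from a robust noise estimate; the thresholding scheme and all numbers are generic assumptions and are not JPL's algorithms:

import numpy as np

def triage(spectra, threshold=6.0):
    # Flag spectra whose largest deviation from the row median exceeds `threshold`
    # robust standard deviations, so only a small fraction of the stream is kept.
    spectra = np.asarray(spectra)
    med = np.median(spectra, axis=1, keepdims=True)
    mad = np.median(np.abs(spectra - med), axis=1, keepdims=True) + 1e-12
    score = np.max(np.abs(spectra - med) / (1.4826 * mad), axis=1)
    return score > threshold

rng = np.random.default_rng(4)
data = rng.normal(size=(1000, 256))   # synthetic noise-only spectra
data[7, 100] += 50.0                  # one injected transient
print(np.flatnonzero(triage(data)))   # expected to report row 7 only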
Toward Harnessing User Feedback For Machine Learning
2006-10-02
machine learning systems. If this resource - the users themselves - could somehow work hand-in-hand with machine learning systems, the accuracy of learning systems could be improved and the users' understanding and trust of the system could improve as well. We conducted a think-aloud study to see how willing users were to provide feedback and to understand what kinds of feedback users could give. Users were shown explanations of machine learning predictions and asked to provide feedback to improve the predictions. We found that users
Robust snow avalanche detection using machine learning on infrasonic array data
NASA Astrophysics Data System (ADS)
Thüring, Thomas; Schoch, Marcel; van Herwijnen, Alec; Schweizer, Jürg
2014-05-01
Snow avalanches may threaten people and infrastructure in mountain areas. Automated detection of avalanche activity would be highly desirable, in particular during times of poor visibility, to improve hazard assessment, but also to monitor the effectiveness of avalanche control by explosives. In the past, a variety of remote sensing techniques and instruments for the automated detection of avalanche activity have been reported, which are based on radio waves (radar), seismic signals (geophone), optical signals (imaging sensor) or infrasonic signals (microphone). Optical imagery makes it possible to assess avalanche activity with very high spatial resolution; however, it is strongly weather-dependent. Radar- and geophone-based detection typically provide robust avalanche detection for all weather conditions, but are very limited in the size of the monitoring area. On the other hand, due to the long propagation distance of infrasound through air, the monitoring area of infrasonic sensors can cover a large territory using a single sensor (or an array). In addition, they are far more cost-effective than radars or optical imaging systems. Unfortunately, the reliability of infrasonic sensor systems has so far been rather low due to the strong variation of ambient noise (e.g. wind), causing a high false alarm rate. We analyzed the data collected by a low-cost infrasonic array system consisting of four sensors for the automated detection of avalanche activity at Lavin in the eastern Swiss Alps. A comparably large array aperture (~350 m) allows highly accurate time delay estimation of signals which arrive at different times at the sensors, enabling precise source localization. An array of four sensors is sufficient for the time-resolved source localization of signals in full 3D space, which is an excellent method to anticipate true avalanche activity. Robust avalanche detection is then achieved by using machine learning methods such as support vector machines. The system is initially trained using characteristic data features from known avalanche and non-avalanche events. Data features are obtained from output signals of the source localization algorithm or from Fourier- or time-domain processing and support the learning phase of the system. A significantly improved detection rate and a reduced false alarm rate were achieved compared to previous approaches.
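Two of the ingredients mentioned above, time-delay estimation across the array and a trained classifier, can be sketched in a few lines of Python; the cross-correlation delay estimator and the toy SVM on synthetic features are illustrative assumptions and do not reproduce the actual detection system:

import numpy as np
from sklearn.svm import SVC

def delay_samples(a, b):
    # Time delay (in samples) between two sensors from the peak of the full cross-correlation.
    corr = np.correlate(a, b, mode="full")
    return np.argmax(corr) - (len(b) - 1)

rng = np.random.default_rng(5)
sig = rng.normal(size=1000)
shifted = np.roll(sig, 7) + 0.1 * rng.normal(size=1000)
print(delay_samples(shifted, sig))      # approximately 7 samples

# Toy classifier on hand-made features (e.g. delay consistency, band energy);
# the real system would be trained on labelled avalanche and noise events.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))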
Leinders, S M; Westerveld, W J; Pozo, J; van Neer, P L M J; Snyder, B; O'Brien, P; Urbach, H P; de Jong, N; Verweij, M D
2015-09-22
With the increasing use of ultrasonography, especially in medical imaging, novel fabrication techniques together with novel sensor designs are needed to meet the requirements for future applications like three-dimensional intracardiac and intravascular imaging. These applications require arrays of many small elements to selectively record the sound waves coming from a certain direction. Here we present a proof of concept of an optical micro-machined ultrasound sensor (OMUS) fabricated with a semi-industrial CMOS fabrication line. The sensor is based on integrated photonics, which allows for elements with a small spatial footprint. We demonstrate that the first prototype is already capable of detecting pressures of 0.4 Pa, which matches the performance of state-of-the-art piezoelectric transducers while having a 65 times smaller spatial footprint. The sensor is compatible with MRI due to the lack of electrical wiring. Another important benefit of the use of integrated photonics is the easy interrogation of an array of elements. Hence, in future designs only two optical fibers are needed to interrogate an entire array, which minimizes the number of connections of smart catheters. The demonstrated OMUS has potential applications in medical ultrasound imaging and non-destructive testing as well as in flow sensing.
Hadamard spectrometer for passive LWIR standoff surveillance
NASA Astrophysics Data System (ADS)
Kruzelecky, Roman V.; Wong, Brian; Zou, Jing; Mohammad, Najeeb; Jamroz, Wes; Soltani, Mohammed; Chaker, Mohamed; Haddad, Emile; Laou, Philips; Paradis, Suzanne
2007-06-01
Based on the principle of the Integrated Optical Spectrometer (IOSPEC), a waveguide-based, longwave infrared (LWIR) dispersive spectrometer with multiple input slits for Hadamard spectroscopy was designed and built, intended for passive standoff chemical agent detection in the 8 to 12 μm spectral range. The prototype unit is equipped with a three-inch input telescope providing a field-of-view of 1.2 degrees, a 16-microslit array (each slit 60 μm by 1.8 mm) module for Hadamard binary coding, a 2-mm core ZnS/ZnSe/ZnS slab waveguide with a 2 by 2 mm² optical input and a micro-machined integrated optical output condenser, a Si micro-machined blazed grating, a customized 128-pixel LWIR mercury-cadmium-telluride (MCT) LN2-cooled detector array, a proprietary signal processing technique, software and electronics. For the current configuration, the total system weight was estimated to be ~4 kg, the spectral resolution <4 cm⁻¹, and the Noise Equivalent Spectral Radiance (NESR) <10⁻⁸ W cm⁻² sr⁻¹ cm⁻¹ in the 8 to 12 μm band. The system design and preliminary test results for some components will be presented. Upon the arrival of the MCT detector array, the prototype unit will be further tested and its performance validated in fall of 2007.
Ultrasonic seam welding on thin silicon solar cells
NASA Technical Reports Server (NTRS)
Stofel, E. J.
1982-01-01
The ultrathin silicon solar cell has progressed to where it is a serious candidate for future lightweight or radiation-tolerant spacecraft. The ultrasonic method of producing welds was found to be satisfactory. These ultrathin cells could be handled without breakage in a semiautomated welding machine. This is a prototype of a machine capable of production rates sufficiently large to support spacecraft array assembly needs. For comparative purposes, this project also welded a variety of cells with thicknesses up to 0.23 mm as well as the 0.07 mm ultrathin cells. There was no electrical degradation in any of the cells. The mechanical pull strength of welds on the thick cells was excellent when using a large welding force. The mechanical strength of welds on thin cells was lower, since only a small welding force could be used without cracking these cells. Even so, the strength of welds on thin cells appears adequate for array application. The ability of such welds to survive multiyear, near-Earth-orbit thermal cycles still needs to be demonstrated.
1998-09-16
A team of engineers at Marshall Space Flight Center (MSFC) has designed, fabricated, and tested the first solar thermal engine, a non-chemical rocket that produces lower thrust but has better thrust efficiency than chemical combustion engines. This segmented array of mirrors is the solar concentrator test stand at MSFC for firing the thermal propulsion engines. The 144 mirrors are combined to form an 18-foot diameter concentrator array. The mirror segments are aluminum hexagons whose reflective surfaces are cut by a diamond turning machine developed by the MSFC Space Optics Manufacturing Technology Center.
Advances in diagnostic ultrasonography.
Reef, V B
1991-08-01
A wide variety of ultrasonographic equipment currently is available for use in equine practice, but no one machine is optimal for every type of imaging. Image quality is the most important factor in equipment selection once the needs of the practitioner are ascertained. The transducer frequencies available, transducer footprints, depth of field displayed, frame rate, gray scale, simultaneous electrocardiography, Doppler, and functions to modify the image are all important considerations. The ability to make measurements off of videocassette recorder playback and future upgradability should be evaluated. Linear array and sector technology are the backbone of equine ultrasonography today. Linear array technology is most useful for a high-volume broodmare practice, whereas sector technology is ideal for a more general equine practice. The curved or convex linear scanner has more applications than the standard linear array and is equipped with the linear array rectal probe, which provides the equine practitioner with a more versatile unit for equine ultrasonographic evaluations. The annular array and phased array systems have improved image quality, but each has its own limitations. The new sector scanners still provide the most versatile affordable equipment for equine general practice.
Nanometric edge profile measurement of cutting tools on a diamond turning machine
NASA Astrophysics Data System (ADS)
Asai, Takemi; Arai, Yoshikazu; Cui, Yuguo; Gao, Wei
2008-10-01
Single-crystal diamond tools are used for the fabrication of precision parts [1-5]. Although many types of tools are supplied, tools with a round nose are popular for machining very smooth surfaces. Tools with small nose radii, small wedge angles and small included angles are also used for the fabrication of micro-structured surfaces such as microlens arrays [6], diffractive optical elements and so on. In ultra-precision machining, tools are a critical part of the machining equipment: as the cutting edge degrades, the roughness or profile of the machined surface may drift out of the desired tolerance. It is therefore necessary to know the state of the tool edge accurately. To meet this requirement, an atomic force microscope (AFM) for measuring the 3D edge profiles of tools with nanometer-scale cutting edge radii at high resolution has been developed [7-8]. Although in that system the AFM probe unit is combined with an optical sensor for quickly aligning the measurement probe with the top of the tool edge, in this work only the AFM probe unit was used; it was attached to the ultra-precision turning machine to confirm the feasibility of on-machine edge profile measurement.
Development of High Efficiency (14%) Solar Cell Array Module
NASA Technical Reports Server (NTRS)
Iles, P. A.; Khemthong, S.; Olah, S.; Sampson, W. J.; Ling, K. S.
1979-01-01
The high efficiency solar cells required for the low cost modules were developed. The production tooling for the manufacture of the cells and modules was designed. The tooling consisted of: (1) back contact soldering machine; (2) vacuum pickup; (3) antireflective coating tooling; and (4) test fixture.
NASA Astrophysics Data System (ADS)
Dasgupta, S.; Mukherjee, S.
2016-09-01
One of the most significant factors in metal cutting is tool life. In this research work, the effects of machining parameters on tool life under a wet machining environment were studied. Tool life characteristics of a brazed carbide cutting tool machining mild steel were examined, and the machining parameters were optimized based on a Taguchi design of experiments. The experiments were conducted using three factors, spindle speed, feed rate and depth of cut, each having three levels. Nine experiments were performed on a high speed semi-automatic precision central lathe. ANOVA was used to determine the level of importance of the machining parameters on tool life. The optimum machining parameter combination was obtained by analysis of the S/N ratio. A mathematical model based on multiple regression analysis was developed to predict the tool life. Taguchi's orthogonal array analysis revealed the optimal combination of parameters at the lower levels of spindle speed, feed rate and depth of cut, which are 550 rpm, 0.2 mm/rev and 0.5 mm respectively. The main effects plot reiterated the same. The variation of tool life with different process parameters has been plotted. Feed rate has the most significant effect on tool life, followed by spindle speed and depth of cut.
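The S/N-ratio ranking used in such Taguchi studies can be sketched in a few lines. The L9 layout below is the standard three-factor, three-level orthogonal array; the tool-life values are placeholders, not the paper's measured data, and the factor names are only labels.

```python
# Minimal sketch of the Taguchi "larger-the-better" signal-to-noise (S/N)
# analysis used to rank factor levels for tool life.
import numpy as np

# L9 orthogonal array: 3 factors (speed, feed, depth of cut) at 3 levels (0,1,2)
L9 = np.array([[0,0,0],[0,1,1],[0,2,2],
               [1,0,1],[1,1,2],[1,2,0],
               [2,0,2],[2,1,0],[2,2,1]])

tool_life = np.array([42., 38., 35., 30., 27., 33., 22., 26., 24.])  # minutes (illustrative)

# Larger-the-better S/N ratio for a single response value per run
sn = -10.0 * np.log10(1.0 / tool_life**2)

for f, name in enumerate(["speed", "feed", "depth"]):
    level_means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(name, "mean S/N per level:", np.round(level_means, 2),
          "-> best level:", int(np.argmax(level_means)))
```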
Mathematical modelling and numerical simulation of forces in milling process
NASA Astrophysics Data System (ADS)
Turai, Bhanu Murthy; Satish, Cherukuvada; Prakash Marimuthu, K.
2018-04-01
Machining of material by milling induces forces which act on the workpiece and which in turn act on the machining tool. The forces involved in the milling process can be quantified, and mathematical models help to predict these forces. A lot of research has been carried out in this area in the past few decades. The current research aims at developing a mathematical model to predict the forces that arise at different levels during machining of Aluminium 6061 alloy. Finite element analysis was used to develop an FE model to predict the cutting forces. Simulation was done for varying cutting conditions. Different experiments were designed using the Taguchi method: an L9 orthogonal array was constructed and the output was measured for each experiment. These results were then used to develop the mathematical model.
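For orientation, a generic mechanistic milling-force model (textbook form, not the paper's finite-element model) relates force to the instantaneous chip thickness. The cutting and edge coefficients below are assumed placeholder values.

```python
# A minimal mechanistic milling-force sketch: tangential/radial force on one
# tooth is taken proportional to the instantaneous chip thickness
# h(phi) = fz*sin(phi).  All coefficients are illustrative assumptions.
import numpy as np

fz = 0.1e-3      # feed per tooth [m]
ap = 2.0e-3      # axial depth of cut [m]
Ktc, Krc = 800e6, 300e6   # cutting coefficients [N/m^2] (assumed)
Kte, Kre = 20e3, 10e3     # edge coefficients [N/m]      (assumed)

phi = np.linspace(0.0, np.pi, 181)   # tooth immersion angle over the engaged arc
h = fz * np.sin(phi)                 # instantaneous chip thickness

Ft = Ktc * ap * h + Kte * ap         # tangential force [N]
Fr = Krc * ap * h + Kre * ap         # radial force [N]

# Resolve into machine X/Y directions
Fx = -Ft * np.cos(phi) - Fr * np.sin(phi)
Fy = Ft * np.sin(phi) - Fr * np.cos(phi)
print("peak |Fx|, |Fy| [N]:", abs(Fx).max().round(1), abs(Fy).max().round(1))
```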
NASA Technical Reports Server (NTRS)
Burke, Gary R.; Taft, Stephanie
2004-01-01
State machines are commonly used to control sequential logic in FPGAs and ASICs. An errant state machine can cause considerable damage to the device it is controlling. For example, in space applications the FPGA might be controlling pyros, which when fired at the wrong time will cause a mission failure. Even a well designed state machine can be subject to random errors as a result of SEUs (single-event upsets) from the radiation environment in space. There are various ways to encode the states of a state machine, and the type of encoding makes a large difference in the susceptibility of the state machine to radiation. In this paper we compare 4 methods of state machine encoding and find which method gives the best fault tolerance, as well as determining the resources needed for each method.
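One way to see why the encoding matters is to compare how far apart valid state codes sit in Hamming distance. The sketch below uses three generic textbook encodings of an 8-state machine for illustration; it does not reproduce the paper's four schemes or its resource measurements.

```python
# Compare the minimum Hamming distance between valid state codes for a few
# common encodings of an 8-state machine (illustrative, generic encodings).
from itertools import combinations

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def min_distance(codes):
    return min(hamming(a, b) for a, b in combinations(codes, 2))

n_states = 8
binary  = [format(i, "03b") for i in range(n_states)]
gray    = [format(i ^ (i >> 1), "03b") for i in range(n_states)]
one_hot = [format(1 << i, "08b") for i in range(n_states)]

for name, codes in [("binary", binary), ("gray", gray), ("one-hot", one_hot)]:
    print(f"{name:8s} bits={len(codes[0])}  min Hamming distance={min_distance(codes)}")

# A single-bit SEU moves a binary or gray code directly onto another valid
# state (distance 1), whereas a one-hot bit flip lands on an invalid code
# that decode logic can detect.
```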
The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunderam, Vaidy S.
2012-03-20
The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit, which intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from resource providers' and end-users' perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof-of-concept implementation has demonstrated the viability of this approach on high-end machines, grid systems and computing clouds.
Flexible Organic Electronics for Use in Neural Sensing
Bink, Hank; Lai, Yuming; Saudari, Sangameshwar R.; Helfer, Brian; Viventi, Jonathan; Van der Spiegel, Jan; Litt, Brian; Kagan, Cherie
2016-01-01
Recent research in brain-machine interfaces and devices to treat neurological disease indicate that important network activity exists at temporal and spatial scales beyond the resolution of existing implantable devices. High density, active electrode arrays hold great promise in enabling high-resolution interface with the brain to access and influence this network activity. Integrating flexible electronic devices directly at the neural interface can enable thousands of multiplexed electrodes to be connected using many fewer wires. Active electrode arrays have been demonstrated using flexible, inorganic silicon transistors. However, these approaches may be limited in their ability to be cost-effectively scaled to large array sizes (8×8 cm). Here we show amplifiers built using flexible organic transistors with sufficient performance for neural signal recording. We also demonstrate a pathway for a fully integrated, amplified and multiplexed electrode array built from these devices. PMID:22255558
Viventi, Jonathan; Kim, Dae-Hyeong; Vigeland, Leif; Frechette, Eric S; Blanco, Justin A; Kim, Yun-Soung; Avrin, Andrew E; Tiruvadi, Vineet R; Hwang, Suk-Won; Vanleer, Ann C; Wulsin, Drausin F; Davis, Kathryn; Gelber, Casey E; Palmer, Larry; Van der Spiegel, Jan; Wu, Jian; Xiao, Jianliang; Huang, Yonggang; Contreras, Diego; Rogers, John A; Litt, Brian
2011-11-13
Arrays of electrodes for recording and stimulating the brain are used throughout clinical medicine and basic neuroscience research, yet are unable to sample large areas of the brain while maintaining high spatial resolution because of the need to individually wire each passive sensor at the electrode-tissue interface. To overcome this constraint, we developed new devices that integrate ultrathin and flexible silicon nanomembrane transistors into the electrode array, enabling new dense arrays of thousands of amplified and multiplexed sensors that are connected using fewer wires. We used this system to record spatial properties of cat brain activity in vivo, including sleep spindles, single-trial visual evoked responses and electrographic seizures. We found that seizures may manifest as recurrent spiral waves that propagate in the neocortex. The developments reported here herald a new generation of diagnostic and therapeutic brain-machine interface devices.
Electrostatically clean solar array
NASA Technical Reports Server (NTRS)
Stern, Theodore Garry (Inventor); Krumweide, Duane Eric (Inventor)
2004-01-01
Provided are methods of manufacturing an electrostatically clean solar array panel and the products resulting from the practice of these methods. The preferred method uses an array of solar cells, each with a coverglass where the method includes machining apertures into a flat, electrically conductive sheet so that each aperture is aligned with and undersized with respect to its matched coverglass sheet and thereby fashion a front side shield with apertures (FSA). The undersized portion about each aperture of the bottom side of the FSA shield is bonded to the topside portions nearest the edges of each aperture's matched coverglass. Edge clips are attached to the front side aperture shield edges with the edge clips electrically and mechanically connecting the tops of the coverglasses to the solar panel substrate. The FSA shield, edge clips and substrate edges are bonded so as to produce a conductively grounded electrostatically clean solar array panel.
Circuit for high resolution decoding of multi-anode microchannel array detectors
NASA Technical Reports Server (NTRS)
Kasle, David B. (Inventor)
1995-01-01
A circuit for high resolution decoding of multi-anode microchannel array detectors consisting of input registers accepting transient inputs from the anode array; anode encoding logic circuits connected to the input registers; midpoint pipeline registers connected to the anode encoding logic circuits; and pixel decoding logic circuits connected to the midpoint pipeline registers is described. A high resolution algorithm circuit operates in parallel with the pixel decoding logic circuit and computes a high resolution least significant bit to enhance the multianode microchannel array detector's spatial resolution by halving the pixel size and doubling the number of pixels in each axis of the anode array. A multiplexer is connected to the pixel decoding logic circuit and allows a user selectable pixel address output according to the actual multi-anode microchannel array detector anode array size. An output register concatenates the high resolution least significant bit onto the standard ten bit pixel address location to provide an eleven bit pixel address, and also stores the full eleven bit pixel address. A timing and control state machine is connected to the input registers, the anode encoding logic circuits, and the output register for managing the overall operation of the circuit.
Overview of Proposed ISRU Technology Development
NASA Technical Reports Server (NTRS)
Linne, Diane; Sanders, Jerry; Starr, Stan; Suzuki, Nantel; O'Malley, Terry
2016-01-01
ISRU involves any hardware or operation that harnesses and utilizes in-situ resources (natural and discarded) to create products and services for robotic and human exploration: assessment of physical, mineral, chemical, and volatile/water resources, terrain, geology, and environment (orbital and local); production of replacement parts, complex products, machines, and integrated systems from feedstock derived from one or more processed resources; civil engineering, infrastructure emplacement, and structure construction using materials produced from in-situ resources (radiation shields, landing pads, roads, berms, habitats, etc.); and generation and storage of electrical, thermal, and chemical energy with in-situ derived materials (solar arrays, thermal wadis, chemical batteries, etc.). ISRU is a disruptive capability: it enables more affordable exploration than today's paradigm and allows more sustainable architectures to be developed. It is also important to understand the ripple effect on the other Exploration Elements. MAV: propellant selection, higher rendezvous altitude (higher DV capability with ISRU propellants). EDL: significantly reduced required landed mass. Life Support: reduced amount of ECLSS closure, reduced trash mass carried through propulsive maneuvers. Power: ISRU drives electrical requirements, plus reactant supply and regeneration for fuel cells for landers, rovers, and habitat backup. Every Exploration Element except ISRU has some flight heritage (power, propulsion, habitats, landers, life support, etc.). ISRU will require a flight demonstration mission on Mars before it will be included in the critical path. The mission needs to be concluded at least 10 years before the first human landed mission to ensure lessons learned can be incorporated into the final design. The ISRU Formulation team has generated a (still incomplete) list of over 75 technical questions on more than 40 components and subsystems that need to be answered before the right ISRU system will be ready for this flight demo.
Implementation of an agile maintenance mechanic assignment methodology
NASA Astrophysics Data System (ADS)
Jimenez, Jesus A.; Quintana, Rolando
2000-10-01
The objective of this research was to develop a decision support system (DSS) to study the impact of introducing new equipment into a medical apparel plant from a maintenance organizational structure perspective. This system will enable the company to determine whether its capacity is sufficient to meet current maintenance challenges. The DSS contains two database sets that describe equipment and maintenance resource profiles. The equipment profile specifies data such as mean time to failure, mean time to repair, and the minimum mechanic skill level required to fix each machine group. Similarly, the maintenance-resource profile reports information about the mechanic staff, such as the number and type of certifications received, education level, and experience. The DSS then uses this information to minimize machine downtime by assigning the highest skilled mechanics to machines with higher complexity and product value. The optimization was performed by formulating the assignment as a transportation problem and solving it with a modified version of the simplex method. The DSS was built using the Visual Basic for Applications (VBA) language contained in the Microsoft Excel environment. A case study was developed from existing data. The analysis consisted of forty-two machine groups and six mechanic categories with ten skill levels. Results showed that only 56% of the mechanic workforce was utilized. Thus, the company had available resources for meeting future maintenance requirements.
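A transportation-problem formulation of this kind can be expressed directly as a linear program. The sketch below is a small made-up instance (3 mechanic categories, 4 machine groups) solved with an off-the-shelf LP solver rather than the DSS's own VBA implementation; the cost matrix, supplies and demands are illustrative assumptions.

```python
# Minimal transportation-problem sketch for mechanic-to-machine assignment.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4., 6., 9., 3.],    # cost[i, j]: penalty of assigning
                 [5., 4., 7., 6.],    # mechanic category i to machine group j
                 [8., 5., 3., 4.]])   # (e.g. skill mismatch x downtime value)
supply = np.array([40., 35., 25.])       # available hours per mechanic category
demand = np.array([30., 20., 30., 20.])  # required maintenance hours per group

m, n = cost.shape
c = cost.ravel()

# Equality constraints: row sums = supply, column sums = demand
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n):
    A_eq[m + j, j::n] = 1.0
b_eq = np.concatenate([supply, demand])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print("total cost:", res.fun)
print("assignment (hours):\n", res.x.reshape(m, n).round(1))
```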
SU-F-P-49: Comparison of Mapcheck 2 Commission for Photon and Electron Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, J; Yang, C; Morris, B
2016-06-15
Purpose: We investigate the performance variation of the MapCheck2 detector array with different array calibration and dose calibration pairs from different radiation therapy machines. Methods: A MapCheck2 detector array was calibrated on 3 Elekta accelerators with different energies of photon (6 MV, 10 MV, 15 MV and 18 MV) and electron (6 MeV, 9 MeV, 12 MeV, 15 MeV, 18 MeV and 20 MeV) beams. Dose calibration was conducted by reference to a water phantom measurement following the TG-51 protocol and the commissioning data for each accelerator. A 10 cm × 10 cm beam was measured. The measured map was morphed by applying different calibration pairs, and the difference was quantified by comparing the doses and the similarity using gamma analysis with criteria of (0.5%, 0 mm). Profile variation was evaluated on the same dataset with different calibration pairs. The passing rate of an IMRT QA planar dose was calculated using 3 mm and 3% criteria and compared with respect to each calibration pair. Results: In this study, a dose variation of up to 0.67% for matched photon beams and 1.0% for electron beams is observed. Differences in flatness and symmetry can be as high as 1% and 0.7%, respectively. Gamma analysis shows a passing rate ranging from 34% to 85% for the standard 10 × 10 cm field. Conclusion: Our work demonstrated that a customized array calibration and dose calibration for each machine is preferred to fulfill a high-standard patient QA task.
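The gamma passing-rate metric quoted above can be illustrated with a simplified one-dimensional global gamma calculation. Clinical QA systems use vendor implementations on 2-D planar doses, so the sketch below, with synthetic dose profiles and a 3%/3 mm criterion, is only conceptual.

```python
# Simplified 1-D global gamma analysis (3%/3 mm) with synthetic profiles.
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dose_crit=0.03, dist_crit=3.0):
    """Return the gamma value at each reference point (global normalization)."""
    d_norm = dose_crit * dose_ref.max()
    gammas = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist = (x - xi) / dist_crit          # distance term, scaled by 3 mm
        dose = (dose_eval - di) / d_norm     # dose term, scaled by 3% of max
        gammas[i] = np.sqrt(dist**2 + dose**2).min()
    return gammas

x = np.arange(-50.0, 50.0, 1.0)              # positions [mm]
ref = np.exp(-(x / 30.0)**4)                 # synthetic reference profile
ev  = 1.01 * np.exp(-((x - 1.0) / 30.0)**4)  # slightly shifted/scaled copy

g = gamma_1d(x, ref, ev)
print("gamma passing rate: %.1f%%" % (100.0 * np.mean(g <= 1.0)))
```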
Challenges in the Verification of Reinforcement Learning Algorithms
NASA Technical Reports Server (NTRS)
Van Wesel, Perry; Goodloe, Alwyn E.
2017-01-01
Machine learning (ML) is increasingly being applied to a wide array of domains from search engines to autonomous vehicles. These algorithms, however, are notoriously complex and hard to verify. This work looks at the assumptions underlying machine learning algorithms as well as some of the challenges in trying to verify ML algorithms. Furthermore, we focus on the specific challenges of verifying reinforcement learning algorithms. These are highlighted using a specific example. Ultimately, we do not offer a solution to the complex problem of ML verification, but point out possible approaches for verification and interesting research opportunities.
Array microscopy technology and its application to digital detection of Mycobacterium tuberculosis
NASA Astrophysics Data System (ADS)
McCall, Brian P.
Tuberculosis causes more deaths worldwide than any other curable infectious disease. This is the case despite tuberculosis appearing to be on the verge of eradication midway through the last century. Efforts at reversing the spread of tuberculosis have intensified since the early 1990s. Since then, microscopy has been the primary frontline diagnostic. In this dissertation, advances in clinical microscopy towards array microscopy for digital detection of Mycobacterium tuberculosis are presented. Digital array microscopy separates the tasks of microscope operation and pathogen detection and will reduce the specialization needed in order to operate the microscope. Distributing the work and reducing specialization will allow this technology to be deployed at the point of care, taking the front-line diagnostic for tuberculosis from the microscopy center to the community health center. By improving access to microscopy centers, hundreds of thousands of lives can be saved. For this dissertation, a lens was designed that can be manufactured as a 4×6 array of microscopes. This lens design is diffraction limited, having less than 0.071 waves of aberration (root mean square) over the entire field of view. The total area imaged onto a full-frame digital image sensor is expected to be 3.94 mm², which according to tuberculosis microscopy guidelines is more than sufficient for a sensitive diagnosis. The design is tolerant to single point diamond turning manufacturing errors, as found by tolerance analysis and by fabricating a prototype. Diamond micro-milling, a fabrication technique for lens array molds, was applied to plastic plano-concave and plano-convex lens arrays, and found to produce high quality optical surfaces. The micro-milling technique did not prove robust enough to produce bi-convex and meniscus lens arrays in a variety of lens shapes, however, and it required lengthy fabrication times. In order to rapidly prototype new lenses, a new diamond machining technique was developed called 4-axis single point diamond machining. This technique is 2-10x faster than micro-milling, depending on how advanced the micro-milling equipment is. With array microscope fabrication still in development, a single prototype of the lens designed for an array microscope was fabricated using single point diamond turning. The prototype microscope objective was validated in a pre-clinical trial. The prototype was compared with a standard clinical microscope objective in diagnostic tests. High concordance, a Fleiss's kappa of 0.88, was found between diagnoses made using the prototype and standard microscope objectives and a reference test. With the lens designed and validated and an advanced fabrication process developed, array microscopy technology is advanced to the point where it is feasible to rapidly prototype an array microscope for detection of tuberculosis and translate the array microscope from an innovative concept to a device that can save lives.
Fault detection in rotating machines with beamforming: Spatial visualization of diagnosis features
NASA Astrophysics Data System (ADS)
Cardenas Cabada, E.; Leclere, Q.; Antoni, J.; Hamzaoui, N.
2017-12-01
Rotating machine diagnosis is conventionally based on vibration analysis. Sensors are usually placed on the machine to gather information about its components. The recorded signals are then processed through a fault detection algorithm allowing identification of the failing part. This paper proposes an acoustic-based diagnosis method. A microphone array is used to record the acoustic field radiated by the machine. The main advantage over vibration-based diagnosis is that contact between the sensors and the machine is no longer required. Moreover, the application of acoustic imaging makes possible the identification of the sources of acoustic radiation on the machine surface. The displayed information is then spatially continuous, whereas accelerometers only provide it at discrete points. Beamforming provides the time-varying signals radiated by the machine as a function of space. Any fault detection tool can be applied to the beamforming output. Spectral kurtosis, which highlights the impulsiveness of a signal as a function of frequency, is used in this study. The combination of spectral kurtosis with acoustic imaging makes possible the mapping of impulsiveness as a function of space and frequency. The efficiency of this approach relies on source separation in the spatial and frequency domains. These mappings make possible the localization of impulsive sources. The faulty components of the machine have an impulsive behavior and thus will be highlighted in the mappings. The study presents experimental validations of the method on rotating machines.
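The spectral-kurtosis statistic the paper maps over space can be sketched for a single synthetic channel. The signal below (broadband noise plus periodic bursts around 3 kHz) and the STFT-based estimator are illustrative only; they stand in for the beamformer output signals, which are not reproduced here.

```python
# Minimal spectral-kurtosis sketch using the common STFT-based estimator.
import numpy as np
from scipy.signal import stft

fs = 20000
t = np.arange(0, 2.0, 1.0 / fs)
noise = 0.2 * np.random.randn(t.size)
# Periodic impulsive "fault" excitation: short 3 kHz bursts at a 25 Hz rate
impacts = np.sin(2 * np.pi * 3000 * t) * (np.sin(2 * np.pi * 25 * t) > 0.99)
x = noise + impacts

f, frames, Z = stft(x, fs=fs, nperseg=256)
# Spectral kurtosis: E|X|^4 / (E|X|^2)^2 - 2 (approximately 0 for stationary
# Gaussian noise, large at frequencies carrying impulsive content)
sk = (np.abs(Z)**4).mean(axis=1) / (np.abs(Z)**2).mean(axis=1)**2 - 2.0

print("frequency of maximum spectral kurtosis: %.0f Hz" % f[np.argmax(sk)])
```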
Zhang, Lin; Zhou, Wenchen; Naples, Neil J; Yi, Allen Y
2018-05-01
A novel fabrication method combining high-speed single-point diamond milling and precision compression molding processes for the fabrication of discontinuous freeform microlens arrays was proposed. Compared with slow tool servo diamond broaching, high-speed single-point diamond milling was selected for its flexibility in the fabrication of true 3D optical surfaces with discontinuous features. The advantage of single-point diamond milling is that the surface features can be constructed sequentially by spacing the axes of a virtual spindle at arbitrary positions based on the combination of rotational and translational motions of both the high-speed spindle and the linear slides. By employing this method, each micro-lenslet was regarded as a microstructure cell by passing the axis of the virtual spindle through the vertex of each cell. An optimization algorithm based on minimum-area fabrication was introduced into the machining process to further increase the machining efficiency. After the mold insert was machined, it was used to replicate the microlens array onto chalcogenide glass. In the ensuing optical measurement, the self-built Shack-Hartmann wavefront sensor was proven to be accurate in detecting an infrared wavefront by both experiments and numerical simulation. The combined results showed that precision compression molding of chalcogenide glasses could be an economical and precise optical fabrication technology for high-volume production of infrared optics.
NASA Astrophysics Data System (ADS)
Whitenton, Eric; Heigel, Jarred; Lane, Brandon; Moylan, Shawn
2016-05-01
Accurate non-contact temperature measurement is important for optimizing manufacturing processes. This applies to both additive (3D printing) and subtractive (material removal by machining) manufacturing. Performing accurate single-wavelength thermography suffers from numerous challenges. A potential alternative is hyperpixel array hyperspectral imaging. Focusing on metals, this paper discusses issues involved such as unknown or changing emissivity, inaccurate greybody assumptions, motion blur, and size-of-source effects. The algorithm which converts measured thermal spectra to emissivity and temperature uses a customized multistep non-linear equation solver to determine the best-fit emission curve. Emissivity dependence on wavelength may be assumed uniform or to follow a relationship typical for metals. The custom software displays residuals for intensity, temperature, and emissivity to gauge the correctness of the greybody assumption. Initial results are shown from a laser powder-bed fusion additive process, as well as a machining process. In addition, the effects of motion blur are analyzed, which occur in both additive and subtractive manufacturing processes. In a laser powder-bed fusion additive process, the scanning laser causes the melt pool to move rapidly, producing a motion blur-like effect. In machining, measuring the temperature of the rapidly moving chip is a desirable goal for developing and validating simulations of the cutting process. A moving slit target is imaged to characterize how the measured temperature values are affected by motion of the target.
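The core greybody-fitting idea can be sketched as a least-squares fit of a Planck curve scaled by an emissivity. The real algorithm described above is a customized multistep solver with wavelength-dependent emissivity options; the version below assumes a constant emissivity, an arbitrary 2-5 µm band, and synthetic measurements.

```python
# Conceptual greybody fit: solve for (temperature, emissivity) that best
# matches measured spectral radiance, assuming emissivity constant with
# wavelength.  All numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def greybody(lam, T, eps):
    """Spectral radiance of a greybody at wavelength lam [m], temperature T [K]."""
    return eps * (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

lam = np.linspace(2e-6, 5e-6, 40)                  # assumed 2-5 um band
true_T, true_eps = 1200.0, 0.35
rng = np.random.default_rng(2)
measured = greybody(lam, true_T, true_eps) * (1 + 0.01 * rng.standard_normal(lam.size))

(T_fit, eps_fit), _ = curve_fit(greybody, lam, measured, p0=[1000.0, 0.5])
print("fit: T = %.0f K, emissivity = %.2f" % (T_fit, eps_fit))
```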
NASA Astrophysics Data System (ADS)
Stagni, F.; McNab, A.; Luzzi, C.; Krzemien, W.; Consortium, DIRAC
2017-10-01
In the last few years, new types of computing models, such as IaaS (Infrastructure as a Service) and IaaC (Infrastructure as a Client), have gained popularity. New resources may come as part of pledged resources, while others are opportunistic. Most, but not all, of these new infrastructures are based on virtualization techniques. In addition, some of them present opportunities for multi-processor computing slots to the users. Virtual Organizations are therefore facing heterogeneity of the available resources, and the use of interware software like DIRAC to provide a transparent, uniform interface has become essential. The transparent access to the underlying resources is realized by implementing the pilot model. DIRAC's newest generation of generic pilots (the so-called Pilots 2.0) are the "pilots for all the skies", and were successfully released in production more than a year ago. They use a plugin mechanism that makes them easily adaptable. Pilots 2.0 have been used for fetching and running jobs on every type of resource, be it a Worker Node (WN) behind a CREAM/ARC/HTCondor/DIRAC Computing Element, a Virtual Machine running on IaaC infrastructures like Vac or BOINC, IaaS cloud resources managed by Vcycle, the LHCb High Level Trigger farm nodes, or any type of opportunistic computing resource. Make a machine a "Pilot Machine", and all differences between resources disappear. This contribution describes how pilots are made suitable for different resources, and the recent steps taken towards a fully unified framework, including monitoring. The case of multi-processor computing slots, on either real or virtual machines, with the whole node or a partition of it, is also discussed.
NASA Astrophysics Data System (ADS)
Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.
2017-03-01
Nowadays quality plays a vital role in all products. Hence, developments in manufacturing processes focus on the fabrication of composites with high dimensional accuracy and low manufacturing cost. In this work, an investigation of machining parameters has been performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized by employing three machining input parameters. The input variables considered are drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing the individual tool parameters. Analysis of variance is used to find the significance of the individual parameters. The simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the greatest effect on material removal rate and surface roughness, followed by feed rate.
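Grey relational analysis combines several responses into one grade per experimental run. The sketch below shows the standard normalization, grey relational coefficient and grade calculation for two responses; the response values and number of runs are placeholders, not the study's measurements.

```python
# Minimal grey relational analysis (GRA) sketch: surface roughness Ra is
# smaller-is-better, material removal rate MRR is larger-is-better.
import numpy as np

Ra  = np.array([3.2, 2.8, 2.5, 3.0, 2.2, 2.6, 3.4, 2.1])    # um (illustrative)
MRR = np.array([110, 150, 180, 130, 200, 160, 100, 210.])   # mm^3/min (illustrative)

def normalize(x, larger_is_better):
    if larger_is_better:
        return (x - x.min()) / (x.max() - x.min())
    return (x.max() - x) / (x.max() - x.min())

zeta = 0.5                                 # distinguishing coefficient
norm = np.vstack([normalize(Ra, False), normalize(MRR, True)])
delta = 1.0 - norm                         # deviation from the ideal sequence
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = grc.mean(axis=0)                   # grey relational grade per run

best = int(np.argmax(grade))
print("grades:", grade.round(3), "-> best run:", best + 1)
```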
Scaria, Joy; Sreedharan, Aswathy; Chang, Yung-Fu
2008-01-01
Background Microarrays are becoming a very popular tool for microbial detection and diagnostics. Although these diagnostic arrays are much simpler when compared to traditional transcriptome arrays, due to the high-throughput nature of the arrays, the data analysis requirements still form a bottleneck for the widespread use of these diagnostic arrays. Hence we developed a new online data sharing and analysis environment customised for diagnostic arrays. Methods Microbial Diagnostic Array Workstation (MDAW) is a database driven application designed in MS Access with a front end designed in ASP.NET. Conclusion MDAW is a new resource that is customised for the data analysis requirements of microbial diagnostic arrays. PMID:18811969
Scaria, Joy; Sreedharan, Aswathy; Chang, Yung-Fu
2008-09-23
Microarrays are becoming a very popular tool for microbial detection and diagnostics. Although these diagnostic arrays are much simpler when compared to traditional transcriptome arrays, due to the high-throughput nature of the arrays, the data analysis requirements still form a bottleneck for the widespread use of these diagnostic arrays. Hence we developed a new online data sharing and analysis environment customised for diagnostic arrays. Microbial Diagnostic Array Workstation (MDAW) is a database driven application designed in MS Access with a front end designed in ASP.NET. MDAW is a new resource that is customised for the data analysis requirements of microbial diagnostic arrays.
A cryogenic thermal source for detector array characterization
NASA Astrophysics Data System (ADS)
Chuss, David T.; Rostem, Karwan; Wollack, Edward J.; Berman, Leah; Colazo, Felipe; DeGeorge, Martin; Helson, Kyle; Sagliocca, Marco
2017-10-01
We describe the design, fabrication, and validation of a cryogenically compatible quasioptical thermal source for characterization of detector arrays. The source is constructed using a graphite-loaded epoxy mixture that is molded into a tiled pyramidal structure. The mold is fabricated using a hardened steel template produced via a wire electrical discharge machining process. The absorptive mixture is bonded to a copper backplate enabling thermalization of the entire structure and measurement of the source temperature. Measurements indicate that the reflectance of the source is <0.001 across a spectral band extending from 75 to 330 GHz.
A Cryogenic Thermal Source for Detector Array Characterization
NASA Technical Reports Server (NTRS)
Chuss, David T.; Rostem, Karwan; Wollack, Edward J.; Berman, Leah; Colazo, Felipe; DeGeorge, Martin; Helson, Kyle; Sagliocca, Marco
2017-01-01
We describe the design, fabrication, and validation of a cryogenically compatible quasioptical thermal source for characterization of detector arrays. The source is constructed using a graphite-loaded epoxy mixture that is molded into a tiled pyramidal structure. The mold is fabricated using a hardened steel template produced via a wire electrical discharge machining process. The absorptive mixture is bonded to a copper backplate enabling thermalization of the entire structure and measurement of the source temperature. Measurements indicate that the reflectance of the source is less than 0.001 across a spectral band extending from 75 to 330 gigahertz.
On-line monitoring system of PV array based on internet of things technology
NASA Astrophysics Data System (ADS)
Li, Y. F.; Lin, P. J.; Zhou, H. F.; Chen, Z. C.; Wu, L. J.; Cheng, S. Y.; Su, F. P.
2017-11-01
Internet of Things (IoT) technology is used to inspect photovoltaic (PV) arrays, which can greatly improve the monitoring, performance and maintenance of a PV array. In order to efficiently realize remote monitoring of the PV operating environment, an on-line monitoring system for PV arrays based on the IoT is designed in this paper. The system includes data acquisition, a data gateway and the PV monitoring centre (PVMC) website. Firstly, a DSP (TMS320F28335) is applied to collect indicators of the PV array using sensors, and the data are transmitted to the data gateway through a ZigBee network. Secondly, the data gateway receives the data from the data acquisition part, obtains geographic information via a GPS module, and captures the scene around the PV array via a USB camera, then uploads them to the PVMC website. Finally, the PVMC website, based on the Laravel framework, receives all data from the data gateway and displays them with a variety of charts. Moreover, a fault diagnosis approach for the PV array based on the Extreme Learning Machine (ELM) is applied in the PVMC. Once a fault occurs, a user alert can be sent via e-mail. The designed system enables users to browse the operating conditions of the PV array on the PVMC website, including electrical and environmental parameters and video. Experimental results show that the presented monitoring system can efficiently monitor the PV array in real time, and the fault diagnosis approach reaches a high accuracy of 97.5%.
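An Extreme Learning Machine of the kind used for the fault diagnosis step is simple to sketch: a random hidden layer followed by a closed-form least-squares fit of the output weights. The feature set and fault classes below are assumed for illustration and synthetic; they do not reproduce the paper's data or its 97.5% result.

```python
# Minimal ELM sketch for multi-class fault diagnosis.
import numpy as np

rng = np.random.default_rng(3)

def elm_fit(X, Y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer outputs
    beta = np.linalg.pinv(H) @ Y                  # output weights (least squares)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Synthetic data: 300 samples x 4 features, 3 fault classes
X = rng.normal(size=(300, 4))
labels = rng.integers(0, 3, size=300)
X[np.arange(300), labels] += 2.0                  # make classes separable
Y = np.eye(3)[labels]                             # one-hot targets

W, b, beta = elm_fit(X[:200], Y[:200])
acc = np.mean(elm_predict(X[200:], W, b, beta) == labels[200:])
print("hold-out accuracy: %.2f" % acc)
```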
30 CFR 75.703-2 - Approved grounding mediums.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Approved grounding mediums. 75.703-2 Section 75... mediums. For purposes of grounding offtrack direct-current machines, the following grounding mediums are... alternating current grounding medium where such machines are fed by an ungrounded direct-current power system...
30 CFR 75.703-2 - Approved grounding mediums.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Approved grounding mediums. 75.703-2 Section 75... mediums. For purposes of grounding offtrack direct-current machines, the following grounding mediums are... alternating current grounding medium where such machines are fed by an ungrounded direct-current power system...
30 CFR 75.703-2 - Approved grounding mediums.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Approved grounding mediums. 75.703-2 Section 75... mediums. For purposes of grounding offtrack direct-current machines, the following grounding mediums are... alternating current grounding medium where such machines are fed by an ungrounded direct-current power system...
30 CFR 75.703-2 - Approved grounding mediums.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Approved grounding mediums. 75.703-2 Section 75... mediums. For purposes of grounding offtrack direct-current machines, the following grounding mediums are... alternating current grounding medium where such machines are fed by an ungrounded direct-current power system...
30 CFR 75.703-2 - Approved grounding mediums.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Approved grounding mediums. 75.703-2 Section 75... mediums. For purposes of grounding offtrack direct-current machines, the following grounding mediums are... alternating current grounding medium where such machines are fed by an ungrounded direct-current power system...
Machine Shop. Student Learning Guide.
ERIC Educational Resources Information Center
Palm Beach County Board of Public Instruction, West Palm Beach, FL.
This student learning guide contains eight modules for completing a course in machine shop. It is designed especially for use in Palm Beach County, Florida. Each module covers one task, and consists of a purpose, performance objective, enabling objectives, learning activities and resources, information sheets, student self-check with answer key,…
Computer designed compensation filters for use in radiation therapy. Master's thesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Higgins, R. Jr.
1982-12-01
A computer program was written in the MUMPS language to design filters for use in cancer radiotherapy. The filter corrects for patient surface irregularities and allows homogeneous dose distribution with depth in the patient. The program does not correct for variations in the density of the patient. The program uses data available from the software in Computerized Medical Systems Inc.'s Radiation Treatment Planning package. External contours of General Electric CAT scans are made using the RTP software. The program uses the data from these external contours in designing the compensation filters. The program is written to process from 3 to 31, 1 cm thick, CAT scan slices. The output from the program can be in one of two different forms. The first option will drive the probe of a CMS Water Phantom in three dimensions as if it were the bit of a routing machine. Thus a routing machine constructed to run from the same output that drives the Water Phantom probe would produce a three dimensional filter mold. The second option is a listing of thicknesses for an array of aluminum blocks to filter the radiation. The size of the filter array is 10 in. by 10 in. The Printronix printer provides an array of blocks 1/2 in. by 1/2 in. with the thickness in millimeters printed inside each block.
Large-area fabrication of patterned ZnO-nanowire arrays using light stamping lithography.
Hwang, Jae K; Cho, Sangho; Seo, Eun K; Myoung, Jae M; Sung, Myung M
2009-12-01
We demonstrate selective adsorption and alignment of ZnO nanowires on patterned poly(dimethylsiloxane) (PDMS) thin layers with (aminopropyl)siloxane self-assembled monolayers (SAMs). Light stamping lithography (LSL) was used to prepare patterned PDMS thin layers as neutral passivation regions on Si substrates. (3-Aminopropyl)triethoxysilane-based SAMs were selectively formed only on regions exposing the silanol groups of the Si substrates. The patterned positively charged amino groups define and direct the selective adsorption of ZnO nanowires with negative surface charges in the protic solvent. This procedure can be adopted in automated printing machines that generate patterned ZnO-nanowire arrays on large-area substrates. To demonstrate its usefulness, the LSL method was applied to prepare ZnO-nanowire transistor arrays on 4-in. Si wafers.
On Machine Capacitance Dimensional and Surface Profile Measurement System
NASA Technical Reports Server (NTRS)
Resnick, Ralph
1993-01-01
A program was awarded under the Air Force Machine Tool Sensor Improvements Program Research and Development Announcement to develop and demonstrate the use of a Capacitance Sensor System including Capacitive Non-Contact Analog Probe and a Capacitive Array Dimensional Measurement System to check the dimensions of complex shapes and contours on a machine tool or in an automated inspection cell. The manufacturing of complex shapes and contours and the subsequent verification of those manufactured shapes is fundamental and widespread throughout industry. The critical profile of a gear tooth; the overall shape of a graphite EDM electrode; the contour of a turbine blade in a jet engine; and countless other components in varied applications possess complex shapes that require detailed and complex inspection procedures. Current inspection methods for complex shapes and contours are expensive, time-consuming, and labor intensive.
Machine-learned and codified synthesis parameters of oxide materials
NASA Astrophysics Data System (ADS)
Kim, Edward; Huang, Kevin; Tomala, Alex; Matthews, Sara; Strubell, Emma; Saunders, Adam; McCallum, Andrew; Olivetti, Elsa
2017-09-01
Predictive materials design has rapidly accelerated in recent years with the advent of large-scale resources, such as materials structure and property databases generated by ab initio computations. In the absence of analogous ab initio frameworks for materials synthesis, high-throughput and machine learning techniques have recently been harnessed to generate synthesis strategies for select materials of interest. Still, a community-accessible, autonomously-compiled synthesis planning resource which spans across materials systems has not yet been developed. In this work, we present a collection of aggregated synthesis parameters computed using the text contained within over 640,000 journal articles using state-of-the-art natural language processing and machine learning techniques. We provide a dataset of synthesis parameters, compiled autonomously across 30 different oxide systems, in a format optimized for planning novel syntheses of materials.
Polyphony: A Workflow Orchestration Framework for Cloud Computing
NASA Technical Reports Server (NTRS)
Shams, Khawaja S.; Powell, Mark W.; Crockett, Tom M.; Norris, Jeffrey S.; Rossi, Ryan; Soderstrom, Tom
2010-01-01
Cloud computing has delivered unprecedented compute capacity to NASA missions at affordable rates. Missions like the Mars Exploration Rovers (MER) and Mars Science Lab (MSL) are enjoying the elasticity that enables them to leverage hundreds, if not thousands, of machines for short durations without making any hardware procurements. In this paper, we describe Polyphony, a resilient, scalable, and modular framework that efficiently leverages a large set of computing resources to perform parallel computations. Polyphony can employ resources on the cloud, excess capacity on local machines, as well as spare resources at the supercomputing center, and it enables these resources to work in concert to accomplish a common goal. Polyphony is resilient to node failures, even if they occur in the middle of a transaction. We conclude with an evaluation of a production-ready application built on top of Polyphony to perform image-processing operations on images from around the solar system, including Mars, Saturn, and Titan.
Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian
2011-08-30
Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lower the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing.
Parametric effects of turning Ti-6Al-4V alloys with aluminum oxide nanolubricants with SDBS
NASA Astrophysics Data System (ADS)
Ali, M. A. M.; Azmi, A. I.; Khalil, A. N. M.
2017-09-01
Applications of nanolubricants have been claimed to improve the machinability of aerospace metals owing to the reduction of friction that results from the rolling action of billions of nanoparticles at the tool-chip interface. In addition, the need to pursue eco-friendly machining has pushed researchers toward implementing alternative lubrication methods through minimal quantity lubrication (MQL). However, the gap in the current literature regarding the performance of nanolubricants via MQL has restricted the widespread use of this lubricant and technique in industry. The present work aims to understand the parametric effects of nanoparticle concentration, cutting speed, feed rate and nozzle angle during machining of the titanium alloy Ti-6Al-4V. Multiple machinability outputs, such as surface roughness, tool wear and power consumption, were simultaneously assessed via a Taguchi orthogonal array and grey relational analysis. Prior to the machining tests, the stability of the nanolubricants was investigated through the addition of a surfactant, sodium dodecyl benzene sulfonate (SDBS). The results clearly indicated that inclusion of the SDBS surfactant reduced agglomeration in the base lubricant. Meanwhile, the grey relational analysis revealed that the combination of 0.6% nanoparticle concentration, a cutting speed of 85 m/min, a feed rate of 0.1 mm/rev and a nozzle angle of 60° is the desired setting for all three machining outputs.
Wireless Computing Architecture III
2013-09-01
Acronyms: MIMO, Multiple-Input and Multiple-Output; MIMO/CON, MIMO with concurrent channel access and estimation; MU-MIMO, Multiuser MIMO; OFDM, Orthogonal Frequency-Division Multiplexing. The work includes compressive sensing; a design for concurrent channel estimation in scalable multiuser MIMO networking; and novel networking protocols based on machine... Keywords: Network, Antenna Arrays, UAV networking, Angle of Arrival, Localization, MIMO, Access Point, Channel State Information, Compressive Sensing.
3D Navier-Stokes Flow Analysis for a Large-Array Multiprocessor
1989-04-17
computer, Alliant’s FX /8, Intel’s Hypercube, and Encore’s Multimax. Unfortunately, the current algorithms have been developed pri- marily for SISD machines...Reversing and Thrust-Vectoring Nozzle Flows," Ph.D. Dissertation in the Dept. of Aero. and Astro ., Univ. of Wash., Washington, 1986. [11] Anderson
50 years of progress in microphone arrays for speech processing
NASA Astrophysics Data System (ADS)
Elko, Gary W.; Frisk, George V.
2004-10-01
In the early 1980s, Jim Flanagan had a dream of covering the walls of a room with microphones. He occasionally referred to this concept as acoustic wallpaper. Being a new graduate in the field of acoustics and signal processing, it was fortunate that Bell Labs was looking for someone to investigate this area of microphone arrays for telecommunication. The job interview was exciting, with all of the big names in speech signal processing and acoustics sitting in the audience, many of whom were the authors of books and articles that were seminal contributions to the fields of acoustics and signal processing. If there ever was an opportunity of a lifetime, this was it. Fortunately, some of the work had already begun, and Sessler and West had already laid the groundwork for directional electret microphones. This talk will describe some of the very early work done at Bell Labs on microphone arrays and reflect on some of the many systems, from large 400-element arrays, to small two-microphone arrays. These microphone array systems were built under Jim Flanagan's leadership in an attempt to realize his vision of seamless hands-free speech communication between people and the communication of people with machines.
SWARM: A 32 GHz Correlator and VLBI Beamformer for the Submillimeter Array
NASA Astrophysics Data System (ADS)
Primiani, Rurik A.; Young, Kenneth H.; Young, André; Patel, Nimesh; Wilson, Robert W.; Vertatschitsch, Laura; Chitwood, Billie B.; Srinivasan, Ranjani; MacMahon, David; Weintroub, Jonathan
2016-03-01
A 32 GHz bandwidth, VLBI-capable correlator and phased array has been designed and deployed at the Smithsonian Astrophysical Observatory’s Submillimeter Array (SMA). The SMA Wideband Astronomical ROACH2 Machine (SWARM) integrates two instruments: a correlator with 140 kHz spectral resolution across its full 32 GHz band, used for connected interferometric observations, and a phased array summer used when the SMA participates as a station in the Event Horizon Telescope (EHT) very long baseline interferometry (VLBI) array. For each SWARM quadrant, Reconfigurable Open Architecture Computing Hardware (ROACH2) units, shared under open-source terms by the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER), are equipped with a pair of ultra-fast analog-to-digital converters (ADCs), a field programmable gate array (FPGA) processor, and eight 10 Gigabit Ethernet (GbE) ports. A VLBI data recorder interface designated the SWARM digital back end, or SDBE, is implemented with a ninth ROACH2 per quadrant, feeding four Mark6 VLBI recorders with an aggregate recording rate of 64 Gbps. This paper describes the design and implementation of SWARM, as well as its deployment at the SMA, with reference to verification and science data.
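The FX architecture that SWARM implements in FPGA firmware can be modelled in a few lines for two antennas: channelize each stream (the "F" stage), then cross-multiply and accumulate per channel (the "X" stage). The sketch below uses plain FFTs and synthetic data; the real system uses polyphase filter banks on ROACH2 boards, so this is conceptual only.

```python
# Toy FX-correlator sketch for two antennas with a small relative sample delay.
import numpy as np

rng = np.random.default_rng(4)
n_chan, n_spectra = 1024, 200
delay_samples = 3                                 # geometric delay between antennas

common = rng.standard_normal(n_chan * n_spectra + delay_samples)
ant_a = common[delay_samples:] + 0.5 * rng.standard_normal(n_chan * n_spectra)
ant_b = common[:-delay_samples] + 0.5 * rng.standard_normal(n_chan * n_spectra)

# F stage: split into blocks and FFT each block
A = np.fft.rfft(ant_a.reshape(n_spectra, n_chan), axis=1)
B = np.fft.rfft(ant_b.reshape(n_spectra, n_chan), axis=1)

# X stage: cross-multiply and time-average -> complex visibility per channel
vis = (A * np.conj(B)).mean(axis=0)

# A pure sample delay appears as a linear phase slope across frequency
phase = np.unwrap(np.angle(vis))
print("cross-spectrum phase, first channels:", np.round(phase[:5], 2))
```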
Wettability and Contact Time on a Biomimetic Superhydrophobic Surface.
Liang, Yunhong; Peng, Jian; Li, Xiujuan; Huang, Jubin; Qiu, Rongxian; Zhang, Zhihui; Ren, Luquan
2017-03-02
Inspired by the array microstructure of natural superhydrophobic surfaces (lotus leaf and cicada wing), an array microstructure was successfully constructed by high speed wire electrical discharge machining (HS-WEDM) on the surfaces of a 7075 aluminum alloy without any chemical treatment. The artificial surfaces had a high apparent contact angle of 153° ± 1° with a contact angle hysteresis less than 5° and showed a good superhydrophobic property. Wettability, contact time, and the corresponding superhydrophobic mechanism of the artificial superhydrophobic surface were investigated. The results indicated that the micro-scale array microstructure was an important factor for the superhydrophobic surface, while different array microstructures exhibited different effects on the wettability and contact time of the artificial superhydrophobic surface. The length (L), interval (S), and height (H) of the array microstructure are the main influential factors on the wettability and contact time. The order of importance of these factors is H > S > L for increasing the apparent contact angle and reducing the contact time. The method, using HS-WEDM to fabricate superhydrophobic surfaces, is simple, low-cost, and environmentally friendly and can easily control the wettability and contact time on the artificial surfaces by changing the array microstructure.
Wettability and Contact Time on a Biomimetic Superhydrophobic Surface
Liang, Yunhong; Peng, Jian; Li, Xiujuan; Huang, Jubin; Qiu, Rongxian; Zhang, Zhihui; Ren, Luquan
2017-01-01
Inspired by the array microstructure of natural superhydrophobic surfaces (lotus leaf and cicada wing), an array microstructure was successfully constructed by high speed wire electrical discharge machining (HS-WEDM) on the surfaces of a 7075 aluminum alloy without any chemical treatment. The artificial surfaces had a high apparent contact angle of 153° ± 1° with a contact angle hysteresis less than 5° and showed a good superhydrophobic property. Wettability, contact time, and the corresponding superhydrophobic mechanism of artificial superhydrophobic surface were investigated. The results indicated that the micro-scale array microstructure was an important factor for the superhydrophobic surface, while different array microstructures exhibited different effects on the wettability and contact time of the artificial superhydrophobic surface. The length (L), interval (S), and height (H) of the array microstructure are the main influential factors on the wettability and contact time. The order of importance of these factors is H > S > L for increasing the apparent contact angle and reducing the contact time. The method, using HS-WEDM to fabricate superhydrophobic surface, is simple, low-cost, and environmentally friendly and can easily control the wettability and contact time on the artificial surfaces by changing the array microstructure. PMID:28772613
Scale effects and a method for similarity evaluation in micro electrical discharge machining
NASA Astrophysics Data System (ADS)
Liu, Qingyu; Zhang, Qinhe; Wang, Kan; Zhu, Guang; Fu, Xiuzhuo; Zhang, Jianhua
2016-08-01
Electrical discharge machining (EDM) is a promising non-traditional micro machining technology that offers a vast array of applications in the manufacturing industry. However, scale effects occur when machining at the micro-scale, which can make it difficult to predict and optimize the machining performance of micro EDM. A new concept of "scale effects" in micro EDM is proposed; the scale effects reveal the difference in machining performance between micro EDM and conventional macro EDM. Similarity theory is presented to evaluate the scale effects in micro EDM. Single-factor experiments are conducted and the experimental results are analyzed by discussing the similarity difference and similarity precision. The results show that the outputs affected by scale effects in micro EDM do not change linearly with the discharge parameters. The values of similarity precision of machining time significantly increase when scaling down the capacitance or open-circuit voltage. This indicates that the lower the scale of the discharge parameter, the greater the deviation of the non-geometrical similarity degree from the geometrical similarity degree, which means that a micro EDM system with lower discharge energy experiences stronger scale effects. The largest similarity difference is 5.34, while the largest similarity precision can be as high as 114.03. It is suggested that the similarity precision is more effective than the similarity difference in reflecting the scale effects and their fluctuation. Consequently, similarity theory is suitable for evaluating the scale effects in micro EDM. The proposed research offers engineering value for optimizing the machining parameters and improving the machining performance of micro EDM.
Resource Letter AFHEP-1: Accelerators for the Future of High-Energy Physics
NASA Astrophysics Data System (ADS)
Barletta, William A.
2012-02-01
This Resource Letter provides a guide to literature concerning the development of accelerators for the future of high-energy physics. Research articles, books, and Internet resources are cited for the following topics: motivation for future accelerators, present accelerators for high-energy physics, possible future machines, and laboratory and collaboration websites.
Investigation of Implantable Multi-Channel Electrode Array in Rat Cerebral Cortex Used for Recording
NASA Astrophysics Data System (ADS)
Taniguchi, Noriyuki; Fukayama, Osamu; Suzuki, Takafumi; Mabuchi, Kunihiko
There have recently been many studies concerning the control of robot movements using neural signals recorded from the brain (usually called the Brain-Machine interface (BMI)). We fabricated implantable multi-electrode arrays to obtain neural signals from the rat cerebral cortex. As any multi-electrode array should have electrode alignment that minimizes invasion, it is necessary to customize the recording site. We designed three types of 22-channel multi-electrode arrays, i.e., 1) wide, 2) three-layered, and 3) separate. The first extensively covers the cerebral cortex. The second has a length of 2 mm, which can cover the area of the primary motor cortex. The third array has a separate structure, which corresponds to the position of the forelimb and hindlimb areas of the primary motor cortex. These arrays were implanted into the cerebral cortex of a rat. We estimated the walking speed from neural signals using our fabricated three-layered array to investigate its feasibility for BMI research. The neural signal of the rat and its walking speed were simultaneously recorded. The results revealed that evaluation using either the anterior electrode group or posterior group provided accurate estimates. However, two electrode groups around the center yielded poor estimates although it was possible to record neural signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiurasek, Jaromir; Cerf, Nicolas J.
We investigate the asymmetric Gaussian cloning of coherent states which produces M copies from N input replicas in such a way that the fidelity of each copy may be different. We show that the optimal asymmetric Gaussian cloning can be performed with a single phase-insensitive amplifier and an array of beam splitters. We obtain a simple analytical expression characterizing the set of optimal asymmetric Gaussian cloning machines and prove the optimality of these cloners using the formalism of Gaussian completely positive maps and semidefinite programming techniques. We also present an alternative implementation of the asymmetric cloning machine where the phase-insensitive amplifier is replaced with a beam splitter, heterodyne detector, and feedforward.
Swanson, Alexandra; Kosmala, Margaret; Lintott, Chris; Simpson, Robert; Smith, Arfon; Packer, Craig
2015-01-01
Camera traps can be used to address large-scale questions in community ecology by providing systematic data on an array of wide-ranging species. We deployed 225 camera traps across 1,125 km2 in Serengeti National Park, Tanzania, to evaluate spatial and temporal inter-species dynamics. The cameras have operated continuously since 2010 and had accumulated 99,241 camera-trap days and produced 1.2 million sets of pictures by 2013. Members of the general public classified the images via the citizen-science website www.snapshotserengeti.org. Multiple users viewed each image and recorded the species, number of individuals, associated behaviours, and presence of young. Over 28,000 registered users contributed 10.8 million classifications. We applied a simple algorithm to aggregate these individual classifications into a final ‘consensus’ dataset, yielding a final classification for each image and a measure of agreement among individual answers. The consensus classifications and raw imagery provide an unparalleled opportunity to investigate multi-species dynamics in an intact ecosystem and a valuable resource for machine-learning and computer-vision research. PMID:26097743
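A minimal sketch, not the project's actual pipeline, of the plurality-vote style aggregation described above; the data layout (a list of (image_id, species) answers from individual volunteers) is an assumption made for illustration.

# consensus_vote.py - illustrative plurality-vote aggregation of volunteer classifications
from collections import Counter, defaultdict

def consensus(classifications):
    # classifications: iterable of (image_id, species) answers from individual volunteers.
    # Returns {image_id: (consensus_species, agreement_fraction)}.
    votes = defaultdict(Counter)
    for image_id, species in classifications:
        votes[image_id][species] += 1
    result = {}
    for image_id, counter in votes.items():
        species, n = counter.most_common(1)[0]
        result[image_id] = (species, n / sum(counter.values()))  # agreement among answers
    return result

demo = [("img1", "zebra"), ("img1", "zebra"), ("img1", "wildebeest"), ("img2", "lion")]
print(consensus(demo))  # img1 -> ('zebra', ~0.67), img2 -> ('lion', 1.0)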
A Component-Based FPGA Design Framework for Neuronal Ion Channel Dynamics Simulations
Mak, Terrence S. T.; Rachmuth, Guy; Lam, Kai-Pui; Poon, Chi-Sang
2008-01-01
Neuron-machine interfaces such as dynamic clamp and brain-implantable neuroprosthetic devices require real-time simulations of neuronal ion channel dynamics. Field Programmable Gate Array (FPGA) has emerged as a high-speed digital platform ideal for such application-specific computations. We propose an efficient and flexible component-based FPGA design framework for neuronal ion channel dynamics simulations, which overcomes certain limitations of the recently proposed memory-based approach. A parallel processing strategy is used to minimize computational delay, and a hardware-efficient factoring approach for calculating exponential and division functions in neuronal ion channel models is used to conserve resource consumption. Performances of the various FPGA design approaches are compared theoretically and experimentally in corresponding implementations of the AMPA and NMDA synaptic ion channel models. Our results suggest that the component-based design framework provides a more memory economic solution as well as more efficient logic utilization for large word lengths, whereas the memory-based approach may be suitable for time-critical applications where a higher throughput rate is desired. PMID:17190033
Burner Rig in the Material and Stresses Building
1969-11-21
A burner rig heats up a material sample in the Materials and Stresses Building at the National Aeronautics and Space Administration (NASA) Lewis Research Center. Materials technology is an important element in the successful development of advanced airbreathing and rocket propulsion systems. Different types of engines operate in different environments, so an array of dependable materials is needed. NASA Lewis began investigating the characteristics of different materials shortly after World War II. In 1949 the materials group was expanded into its own division. The Lewis researchers sought to study and test materials in environments that simulate those in which they would operate. The Materials and Stresses Building, built in 1949, contained a number of laboratories to analyze the materials, which were subjected to high temperatures, high stresses, corrosion, irradiation, and hot gases. The Physics of Solids Laboratory included a cyclotron, cloud chamber, helium cryostat, and metallurgy cave. The Metallographic Laboratory possessed six x-ray diffraction machines, two metalloscopes, and other equipment. The Furnace Room had two large induction machines, a 4500 °F graphite furnace, and heat treating equipment. The Powder Laboratory included 60-ton and 3000-ton presses. The Stresses Laboratory included stress rupture machines, fatigue machines, and tensile strength machines.
30 CFR 57.12088 - Splicing trailing cables.
Code of Federal Regulations, 2010 CFR
2010-07-01
30 CFR § 57.12088 Splicing trailing cables (Mineral Resources, Underground Only). No splice, except a vulcanized splice or its equivalent, shall be made in a trailing cable within 25 feet of the machine unless the machine is equipped with a...
Laubinger, Sascha; Zeller, Georg; Henz, Stefan R; Sachsenberg, Timo; Widmer, Christian K; Naouar, Naïra; Vuylsteke, Marnik; Schölkopf, Bernhard; Rätsch, Gunnar; Weigel, Detlef
2008-01-01
Gene expression maps for model organisms, including Arabidopsis thaliana, have typically been created using gene-centric expression arrays. Here, we describe a comprehensive expression atlas, Arabidopsis thaliana Tiling Array Express (At-TAX), which is based on whole-genome tiling arrays. We demonstrate that tiling arrays are accurate tools for gene expression analysis and identified more than 1,000 unannotated transcribed regions. Visualizations of gene expression estimates, transcribed regions, and tiling probe measurements are accessible online at the At-TAX homepage. PMID:18613972
NASA Astrophysics Data System (ADS)
Grandi, C.; Italiano, A.; Salomoni, D.; Calabrese Melcarne, A. K.
2011-12-01
WNoDeS, an acronym for Worker Nodes on Demand Service, is software developed at CNAF-Tier1, the National Computing Centre of the Italian Institute for Nuclear Physics (INFN) located in Bologna. WNoDeS provides on-demand, integrated access to both Grid and Cloud resources through virtualization technologies. Besides the traditional use of computing resources in batch mode, users need to have interactive and local access to a number of systems. WNoDeS can dynamically provision these computers by instantiating Virtual Machines according to the users' requirements (computing, storage and network resources), through either the Open Cloud Computing Interface API or a web console. Interactive use is usually limited to activities in user space, i.e. where the machine configuration is not modified. In other instances the activity concerns the development and testing of services and thus implies modification of the system configuration (and, therefore, root access to the resource). The former use case is a simple extension of the WNoDeS approach, where the resource is provided in interactive mode. The latter implies saving the virtual image at the end of each user session so that it can be presented to the user at subsequent requests. This work describes how the LHC experiments at INFN-Bologna are testing and making use of these dynamically created ad-hoc machines via WNoDeS to support flexible, interactive analysis and software development at the INFN Tier-1 Computing Centre.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hao; Garzoglio, Gabriele; Ren, Shangping
FermiCloud is a private cloud developed at Fermi National Accelerator Laboratory to provide elastic and on-demand resources for different scientific research experiments. The design goal of FermiCloud is to automatically allocate resources for different scientific applications so that the QoS required by these applications is met and the operational cost of FermiCloud is minimized. Our earlier research shows that VM launching overhead has large variations. If such variations are not taken into consideration when making resource allocation decisions, they may lead to poor performance and resource waste. In this paper, we show how a VM launching overhead reference model may be used to minimize VM launching overhead. In particular, we first present a training algorithm that automatically tunes a given reference model to accurately reflect the FermiCloud environment. Based on the tuned reference model for virtual machine launching overhead, we develop an overhead-aware best-fit resource allocation algorithm that decides where and when to allocate resources so that the average virtual machine launching overhead is minimized. The experimental results indicate that the developed overhead-aware best-fit resource allocation algorithm can significantly improve the VM launching time when a large number of VMs are simultaneously launched.
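A minimal sketch, not FermiCloud's code, of the overhead-aware best-fit idea described above; the linear overhead model and the host/VM fields are assumptions made for illustration.

# overhead_aware_best_fit.py - illustrative overhead-aware best-fit VM placement
def predict_launch_overhead(host, vm):
    # Assumed reference model: overhead grows with host load and VM image size (seconds).
    return 5.0 + 0.8 * host["running_vms"] + 0.02 * vm["image_mb"] / 100.0

def place(vm, hosts):
    # Pick the host that fits the VM and minimizes the predicted launching overhead.
    candidates = [h for h in hosts
                  if h["free_cpu"] >= vm["cpu"] and h["free_mem"] >= vm["mem"]]
    if not candidates:
        return None
    best = min(candidates, key=lambda h: predict_launch_overhead(h, vm))
    best["free_cpu"] -= vm["cpu"]
    best["free_mem"] -= vm["mem"]
    best["running_vms"] += 1
    return best["name"]

hosts = [{"name": "h1", "free_cpu": 8, "free_mem": 32, "running_vms": 3},
         {"name": "h2", "free_cpu": 4, "free_mem": 16, "running_vms": 0}]
print(place({"cpu": 2, "mem": 4, "image_mb": 2048}, hosts))  # 'h2' (lower predicted overhead)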
Kolusheva, S; Yossef, R; Kugel, A; Katz, M; Volinsky, R; Welt, M; Hadad, U; Drory, V; Kliger, M; Rubin, E; Porgador, A; Jelinek, R
2012-07-17
We demonstrate a novel array-based diagnostic platform comprising lipid/polydiacetylene (PDA) vesicles embedded within a transparent silica-gel matrix. The diagnostic scheme is based upon the unique chromatic properties of PDA, which undergoes blue-red transformations induced by interactions with amphiphilic or membrane-active analytes. We show that constructing a gel matrix array hosting PDA vesicles with different lipid compositions and applying it to blood plasma obtained from healthy individuals and from patients suffering from disease allows distinguishing among the disease conditions through application of a simple machine-learning algorithm, using the colorimetric response of the lipid/PDA/gel matrix as the input. Importantly, the new colorimetric diagnostic approach does not require a priori knowledge of the exact metabolite composition of the blood plasma, since the concept relies only on identifying statistically significant changes in the overall disease-induced chromatic response. The chromatic lipid/PDA/gel array-based "fingerprinting" concept is generic, easy to apply, and could be implemented for varied diagnostic and screening applications.
Analysis of labor employment assessment on production machine to minimize time production
NASA Astrophysics Data System (ADS)
Hernawati, Tri; Suliawati; Sari Gumay, Vita
2018-03-01
Every company, whether in the service or the manufacturing sector, is always trying to improve the efficiency of its resource use. One resource that has an important role is labor, and labor has different efficiency levels for different jobs. Problems related to the optimal allocation of labor with different levels of efficiency for different jobs are called assignment problems, which are a special case of linear programming. In this research, the analysis of labor assignment on production machines to minimize production time at PT PDM is carried out using the Hungarian algorithm. The aim of the research is to obtain the optimal assignment of labor to production machines so as to minimize production time. The results show that the existing labor assignment is not suitable because its completion time is longer than that of the assignment obtained with the Hungarian algorithm. Applying the Hungarian algorithm yields a time savings of 16%.
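A minimal sketch of the assignment step described above, using SciPy's implementation of the Hungarian method on a hypothetical worker-by-machine completion-time matrix; the times are invented, not the PT PDM data.

# hungarian_assignment.py - workers x machines completion times (minutes, hypothetical)
import numpy as np
from scipy.optimize import linear_sum_assignment

times = np.array([[14, 9, 12],
                  [11, 10, 13],
                  [15, 12, 8]])
rows, cols = linear_sum_assignment(times)        # minimizes total completion time
for w, m in zip(rows, cols):
    print(f"worker {w} -> machine {m} ({times[w, m]} min)")
print("total time:", times[rows, cols].sum())    # optimal total here is 9 + 11 + 8 = 28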
Aono, Masashi; Kim, Song-Ju; Hara, Masahiko; Munakata, Toshinori
2014-03-01
The true slime mold Physarum polycephalum, a single-celled amoeboid organism, is capable of efficiently allocating a constant amount of intracellular resource to its pseudopod-like branches that best fit the environment where dynamic light stimuli are applied. Inspired by the resource allocation process, the authors formulated a concurrent search algorithm, called the Tug-of-War (TOW) model, for maximizing the profit in the multi-armed Bandit Problem (BP). A player (gambler) of the BP should decide as quickly and accurately as possible which slot machine to invest in out of the N machines and faces an "exploration-exploitation dilemma." The dilemma is a trade-off between the speed and accuracy of the decision making that are conflicted objectives. The TOW model maintains a constant intracellular resource volume while collecting environmental information by concurrently expanding and shrinking its branches. The conservation law entails a nonlocal correlation among the branches, i.e., volume increment in one branch is immediately compensated by volume decrement(s) in the other branch(es). Owing to this nonlocal correlation, the TOW model can efficiently manage the dilemma. In this study, we extend the TOW model to apply it to a stretched variant of BP, the Extended Bandit Problem (EBP), which is a problem of selecting the best M-tuple of the N machines. We demonstrate that the extended TOW model exhibits better performances for 2-tuple-3-machine and 2-tuple-4-machine instances of EBP compared with the extended versions of well-known algorithms for BP, the ϵ-Greedy and SoftMax algorithms, particularly in terms of its short-term decision-making capability that is essential for the survival of the amoeba in a hostile environment. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
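For comparison, a minimal sketch of the ϵ-Greedy baseline mentioned above on a toy Bandit Problem; the reward probabilities are invented and this is not the authors' TOW code.

# epsilon_greedy_bandit.py - illustrative epsilon-greedy player for a 3-armed bandit
import random

probs = [0.3, 0.5, 0.7]            # hypothetical reward probabilities of the slot machines
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]
epsilon, total = 0.1, 0.0
random.seed(0)

for t in range(10000):
    if random.random() < epsilon:                 # explore
        arm = random.randrange(len(probs))
    else:                                         # exploit the current estimates
        arm = max(range(len(probs)), key=lambda a: values[a])
    reward = 1.0 if random.random() < probs[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean update
    total += reward

print("estimated values:", [round(v, 2) for v in values], "total reward:", total)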
Supercomputing on massively parallel bit-serial architectures
NASA Technical Reports Server (NTRS)
Iobst, Ken
1985-01-01
Research on the Goodyear Massively Parallel Processor (MPP) suggests that high-level parallel languages are practical and can be designed with powerful new semantics that allow algorithms to be efficiently mapped to the real machines. For the MPP these semantics include parallel/associative array selection for both dense and sparse matrices, variable precision arithmetic to trade accuracy for speed, micro-pipelined train broadcast, and conditional branching at the processing element (PE) control unit level. The preliminary design of a FORTRAN-like parallel language for the MPP has been completed and is being used to write programs to perform sparse matrix array selection, min/max search, matrix multiplication, Gaussian elimination on single bit arrays and other generic algorithms. A description is given of the MPP design. Features of the system and its operation are illustrated in the form of charts and diagrams.
Design description of the Schuchuli Village photovoltaic power system
NASA Technical Reports Server (NTRS)
Ratajczak, A. F.; Vasicek, R. W.; Delombard, R.
1981-01-01
A stand-alone photovoltaic (PV) power system for the village of Schuchuli (Gunsight), Arizona, on the Papago Indian Reservation is a limited-energy, all 120 V (d.c.) system to which loads cannot be arbitrarily added. It consists of a 3.5 kW (peak) PV array, 2380 ampere-hours of battery storage, an electrical equipment building, a 120 V (d.c.) electrical distribution network, and equipment and automatic controls. The system provides power for pumping water into an existing water system and for operating 15 refrigerators, a clothes washing machine, a sewing machine, and lights for the homes and communal buildings. A solar hot water heater supplies hot water for the washing machine and communal laundry. Automatic control systems provide voltage control by limiting the number of PV strings supplying power during system operation and battery charging, and load management for operating high-priority loads at the expense of low-priority loads as the main battery becomes depleted.
Toolpath strategy for cutter life improvement in plunge milling of AISI H13 tool steel
NASA Astrophysics Data System (ADS)
Adesta, E. Y. T.; Avicenna; hilmy, I.; Daud, M. R. H. C.
2018-01-01
Machinability of AISI H13 tool steel is a prominent issue since the material is characterized by high hardenability, excellent wear resistance, and hot toughness. A method of improving cutter life in plunge milling of AISI H13 tool steel by alternating the toolpath and cutting conditions is proposed. A Taguchi L9 (3^4) orthogonal array will be employed with one categorical factor, toolpath strategy (TS), and three numeric factors: cutting speed (Vc), radial depth of cut (ae), and chip load (fz). It is expected that each toolpath strategy and each cutting condition factor has a significant effect on the cutting force and tool wear mechanism of the machining process, and that the medial axis transform toolpath could provide better tool life improvement through a reduction of cutting force during machining.
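For reference, the standard L9 (3^4) orthogonal array referred to above can be written out directly; the levels are coded 1-3, and the mapping of columns to TS, Vc, ae and fz is the experimenters' choice and is not reproduced here.

# taguchi_l9.py - the standard L9(3^4) orthogonal array, levels coded 1..3
import numpy as np

L9 = np.array([[1, 1, 1, 1],
               [1, 2, 2, 2],
               [1, 3, 3, 3],
               [2, 1, 2, 3],
               [2, 2, 3, 1],
               [2, 3, 1, 2],
               [3, 1, 3, 2],
               [3, 2, 1, 3],
               [3, 3, 2, 1]])
# In every pair of columns, each combination of levels appears exactly once (orthogonality).
for row in L9:
    print(row)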
1992-05-01
methodology, knowledge acquisition, requirements definition, information systems, information engineering, systems engineering ... and knowledge resources. Like manpower, materials, and machines, information and knowledge assets are recognized as vital resources that can be ... evolve towards an information-integrated enterprise. These technologies are designed to leverage information and knowledge resources as the key
A new concept of imaging system: telescope windows
NASA Astrophysics Data System (ADS)
Bourgenot, Cyril; Cowie, Euan; Young, Laura; Love, Gordon; Girkin, John; Courtial, Johannes
2018-02-01
A telescope window is a novel transformation-optics concept consisting of an array of micro-telescopes, in our configuration of a Galilean type. When the array is considered as one multifaceted device, it acts as a traditional Galilean telescope with distinctive and attractive properties such as compactness and modularity. Each lenslet can, in principle, be independently designed for a specific optical function. In this paper, we report on the design, manufacture and prototyping, by diamond precision machining, of two concepts of telescope windows, and discuss both their performance and limitations with a view to using them as potential low vision aid devices to support patients with macular degeneration.
Reference Model MHK Turbine Array Optimization Study within a Generic River System.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Erick; Barco Mugg, Janet; James, Scott
2011-12-01
Increasing interest in marine hydrokinetic (MHK) energy has spurred significant research on the optimal placement of emerging technologies to maximize energy conversion and minimize potential effects on the environment. However, these devices will be deployed as arrays in order to reduce the cost of energy, and little work has been done to understand the impact these arrays will have on flow dynamics, sediment-bed transport and benthic habitats, and how best to optimize these arrays for both performance and environmental considerations. An "MHK-friendly" routine has been developed and implemented by Sandia National Laboratories (SNL) into the flow, sediment dynamics and water-quality code SNL-EFDC. This routine has been verified and validated against three separate sets of experimental data. With SNL-EFDC, water quality and array optimization studies can be carried out to optimize an MHK array in a resource and study its effects on the environment. The present study examines the effect streamwise and spanwise spacing have on array performance. Various hypothetical MHK array configurations are simulated within a trapezoidal river channel. Results show a non-linear increase in array-power efficiency as turbine spacing is increased in each direction, which matches the trends seen experimentally. While the sediment transport routines were not used in these simulations, the flow acceleration seen around the MHK arrays has the potential to significantly affect the sediment transport characteristics and benthic habitat of a resource.
NASA Astrophysics Data System (ADS)
Sivarami Reddy, N.; Ramamurthy, D. V., Dr.; Prahlada Rao, K., Dr.
2017-08-01
This article addresses the simultaneous scheduling of machines, AGVs and tools, where machines are allowed to share tools, considering the transfer times of jobs and tools between machines, to generate the best optimal sequences that minimize makespan in a multi-machine Flexible Manufacturing System (FMS). The performance of an FMS is expected to improve through effective utilization of its resources and proper integration and synchronization of their scheduling. The Symbiotic Organisms Search (SOS) algorithm is a potent and proven alternative for solving optimization problems such as scheduling. The proposed SOS algorithm is tested on 22 job sets with makespan as the objective for scheduling of machines and tools, where machines are allowed to share tools without considering transfer times of jobs and tools, and the results are compared with those of existing methods. The results show that SOS outperforms them. The same SOS algorithm is then used for simultaneous scheduling of machines, AGVs and tools, where machines are allowed to share tools, considering transfer times of jobs and tools, to determine the best optimal sequences that minimize makespan.
Divide and Recombine for Large Complex Data
2017-12-01
Empirical Methods in Natural Language Processing, October 2014 ... low-latency data processing systems. Declarative Languages for Interactive Visualization: The Reactive Vega Stack. Another thread of XDATA research ... for array processing operations embedded in the R programming language. Vector virtual machines work well for long vectors. One of the most
Step-and-Repeat Nanoimprint-, Photo- and Laser Lithography from One Customised CNC Machine.
Greer, Andrew Im; Della-Rosa, Benoit; Khokhar, Ali Z; Gadegaard, Nikolaj
2016-12-01
The conversion of a computer numerical control machine into a nanoimprint step-and-repeat tool with additional laser- and photolithography capacity is documented here. All three processes, each demonstrated on a variety of photoresists, are performed successfully and analysed so as to enable the reader to relate their known lithography process(es) to the findings. Using the converted tool, 1 cm(2) of nanopattern may be exposed in 6 s, over 3300 times faster than the electron beam equivalent. Nanoimprint tools are commercially available, but these can cost around 1000 times more than this customised computer numerical control (CNC) machine. The converted equipment facilitates rapid production and large area micro- and nanoscale research on small grants, ultimately enabling faster and more diverse growth in this field of science. In comparison to commercial tools, this converted CNC also boasts capacity to handle larger substrates, temperature control and active force control, up to ten times more curing dose and compactness. Actual devices are fabricated using the machine including an expanded nanotopographic array and microfluidic PDMS Y-channel mixers.
NASA Astrophysics Data System (ADS)
Maity, Kalipada; Pradhan, Swastik
2018-04-01
In this study, machining of titanium alloy (grade 5) is carried out using an MT-CVD coated cutting tool. Titanium alloys possess a superior strength-to-weight ratio with good corrosion resistance. Many industries use titanium alloys for manufacturing various types of lightweight components, and parts made from Ti-6Al-4V are widely used in the aerospace, biomedical, automotive and marine sectors. Conventional machining of this material is very difficult due to its low thermal conductivity and high chemical reactivity. To achieve a good surface finish with minimum tool wear, the machining is carried out using an MT-CVD coated cutting tool. The experiments are carried out using a Taguchi L27 array layout with three cutting variables at three levels. The desirability function analysis (DFA) approach is used to find the optimum parametric setting. Analysis of variance is carried out to determine the percentage contribution of each cutting variable. The optimum parametric setting calculated from DFA was validated through a confirmation test.
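A minimal sketch of the smaller-the-better desirability calculation implied above, assuming two responses such as surface roughness and flank wear; the values and equal weights are illustrative, not the study's measurements.

# desirability.py - composite desirability for smaller-the-better responses (illustrative)
import numpy as np

def d_smaller_is_better(y, y_min, y_max, r=1.0):
    # Individual desirability in [0, 1]; 1 at y_min, 0 at y_max.
    d = (y_max - y) / (y_max - y_min)
    return np.clip(d, 0.0, 1.0) ** r

ra = np.array([0.62, 0.85, 0.48])    # hypothetical surface roughness Ra (um) for 3 runs
vb = np.array([0.11, 0.09, 0.15])    # hypothetical flank wear VB (mm) for the same runs
d_ra = d_smaller_is_better(ra, ra.min(), ra.max())
d_vb = d_smaller_is_better(vb, vb.min(), vb.max())
composite = np.sqrt(d_ra * d_vb)     # geometric mean of the two desirabilities
print("composite desirability per run:", composite.round(3))
print("best run:", int(composite.argmax()))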
NASA Astrophysics Data System (ADS)
Khan, Akhtar; Maity, Kalipada
2018-03-01
This paper explores some of the vital machinability characteristics of commercially pure titanium (CP-Ti) grade 2. Experiments were conducted based on Taguchi's L9 orthogonal array. The selected material was machined on a heavy duty lathe (Model: HMT NH26) using uncoated carbide inserts in a dry cutting environment. The inserts were designated by ISO as SNMG 120408 (Model: K313) and manufactured by Kennametal, and were rigidly mounted on a right-handed tool holder PSBNR 2020K12. Cutting speed, feed rate and depth of cut were selected as the three input variables, whereas tool wear (VBc) and surface roughness (Ra) were the principal responses. In order to confirm an appreciable machinability of the work part, an optimal parametric combination was obtained with the help of the grey relational analysis (GRA) approach. Finally, a mathematical model was developed using multiple regression equations to demonstrate the accuracy and acceptability of the proposed methodology. The results indicated that the suggested model is capable of predicting the overall grey relational grade within an acceptable range.
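A minimal sketch of the grey relational grade computation mentioned above for smaller-the-better responses; the VBc and Ra values are invented for illustration, and zeta = 0.5 is the usual distinguishing coefficient.

# grey_relational.py - grey relational grade for smaller-the-better responses (illustrative)
import numpy as np

def gra_grade(responses, zeta=0.5):
    # responses: runs x responses matrix, all smaller-the-better.
    x = (responses.max(axis=0) - responses) / (responses.max(axis=0) - responses.min(axis=0))
    delta = 1.0 - x                                   # deviation from the ideal sequence (= 1)
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)                         # grade = mean coefficient per run

runs = np.array([[0.11, 0.62],    # [VBc (mm), Ra (um)] per run, hypothetical
                 [0.09, 0.85],
                 [0.15, 0.48]])
grades = gra_grade(runs)
print("grey relational grades:", grades.round(3), "best run:", int(grades.argmax()))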
Step-and-Repeat Nanoimprint-, Photo- and Laser Lithography from One Customised CNC Machine
NASA Astrophysics Data System (ADS)
Greer, Andrew IM; Della-Rosa, Benoit; Khokhar, Ali Z.; Gadegaard, Nikolaj
2016-03-01
The conversion of a computer numerical control machine into a nanoimprint step-and-repeat tool with additional laser- and photolithography capacity is documented here. All three processes, each demonstrated on a variety of photoresists, are performed successfully and analysed so as to enable the reader to relate their known lithography process(es) to the findings. Using the converted tool, 1 cm2 of nanopattern may be exposed in 6 s, over 3300 times faster than the electron beam equivalent. Nanoimprint tools are commercially available, but these can cost around 1000 times more than this customised computer numerical control (CNC) machine. The converted equipment facilitates rapid production and large area micro- and nanoscale research on small grants, ultimately enabling faster and more diverse growth in this field of science. In comparison to commercial tools, this converted CNC also boasts capacity to handle larger substrates, temperature control and active force control, up to ten times more curing dose and compactness. Actual devices are fabricated using the machine including an expanded nanotopographic array and microfluidic PDMS Y-channel mixers.
Modeling and Analysis of CNC Milling Process Parameters on Al3030 based Composite
NASA Astrophysics Data System (ADS)
Gupta, Anand; Soni, P. K.; Krishna, C. M.
2018-04-01
The machining of Al3030 based composites on Computer Numerical Control (CNC) high speed milling machines has assumed importance because of their wide application in the aerospace, marine and automotive industries. Industries mainly focus on surface irregularities, material removal rate (MRR) and tool wear rate (TWR), which usually depend on the input process parameters, namely cutting speed, feed in mm/min, depth of cut and step-over ratio. Many researchers have carried out research in this area, but very few have also taken the step-over ratio, or radial depth of cut, as one of the input variables. In this research work, the study of the characteristics of Al3030 is carried out on a high speed CNC milling machine over the speed range of 3000 to 5000 r.p.m. Step-over ratio, depth of cut and feed rate are the other input variables taken into consideration. A total of nine experiments are conducted according to a Taguchi L9 orthogonal array. The machining is carried out on a high speed CNC milling machine using a flat end mill of diameter 10 mm. Flatness, MRR and TWR are taken as output parameters. Flatness has been measured using a portable Coordinate Measuring Machine (CMM). Linear regression models have been developed using Minitab 18 software and the results are validated by conducting a selected additional set of experiments. Selection of input process parameters in order to obtain the best machining outputs is the key contribution of this research work.
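A minimal sketch of fitting one of the linear regression models described above (done with Minitab in the study) using scikit-learn instead; the process-parameter values and flatness responses are invented placeholders.

# milling_regression.py - illustrative linear model: flatness ~ speed + feed + depth + step-over
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical L9-style design: [speed (rpm), feed (mm/min), depth of cut (mm), step-over ratio]
X = np.array([[3000, 300, 0.5, 0.3], [3000, 400, 1.0, 0.5], [3000, 500, 1.5, 0.7],
              [4000, 300, 1.0, 0.7], [4000, 400, 1.5, 0.3], [4000, 500, 0.5, 0.5],
              [5000, 300, 1.5, 0.5], [5000, 400, 0.5, 0.7], [5000, 500, 1.0, 0.3]])
flatness = np.array([0.042, 0.055, 0.068, 0.050, 0.047, 0.038, 0.044, 0.035, 0.040])  # mm

model = LinearRegression().fit(X, flatness)
print("coefficients:", model.coef_.round(6), "intercept:", round(model.intercept_, 4))
print("R^2 on the design points:", round(model.score(X, flatness), 3))
print("predicted flatness at (4500, 350, 0.8, 0.4):", model.predict([[4500, 350, 0.8, 0.4]])[0])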
2011-01-01
Background Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom built virtual machines to distribute pre-packaged, pre-configured software. Results We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing. PMID:21878105
Thermodynamic analysis of resources used in manufacturing processes.
Gutowski, Timothy G; Branham, Matthew S; Dahmus, Jeffrey B; Jones, Alissa J; Thiriez, Alexandre
2009-03-01
In this study we use a thermodynamic framework to characterize the material and energy resources used in manufacturing processes. The analysis and data span a wide range of processes from "conventional" processes such as machining, casting, and injection molding, to the so-called "advanced machining" processes such as electrical discharge machining and abrasive waterjet machining, and to the vapor-phase processes used in semiconductor and nanomaterials fabrication. In all, 20 processes are analyzed. The results show that the intensity of materials and energy used per unit of mass of material processed (measured either as specific energy or exergy) has increased by at least 6 orders of magnitude over the past several decades. The increase of material/energy intensity use has been primarily a consequence of the introduction of new manufacturing processes, rather than changes in traditional technologies. This phenomenon has been driven by the desire for precise small-scale devices and product features and enabled by stable and declining material and energy prices over this period. We illustrate the relevance of thermodynamics (including exergy analysis) for all processes in spite of the fact that long-lasting focus in manufacturing has been on product quality--not necessarily energy/material conversion efficiency. We promote the use of thermodynamics tools for analysis of manufacturing processes within the context of rapidly increasing relevance of sustainable human enterprises. We confirm that exergy analysis can be used to identify where resources are lost in these processes, which is the first step in proposing and/or redesigning new more efficient processes.
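For reference, the intensity measure used above can be written compactly: the specific energy (or exergy) requirement of a process is the rate of energy (or exergy) input divided by the rate at which material is processed,

e_{\mathrm{spec}} = \frac{\dot{E}}{\dot{m}} \;[\mathrm{J/kg}], \qquad b_{\mathrm{spec}} = \frac{\dot{B}}{\dot{m}} \;[\mathrm{J/kg}],

where \dot{E} is the power drawn by the process equipment, \dot{B} is the corresponding exergy rate, and \dot{m} is the mass flow rate of material processed; the notation is generic and not taken from the paper.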
Evaluating open-source cloud computing solutions for geosciences
NASA Astrophysics Data System (ADS)
Huang, Qunying; Yang, Chaowei; Liu, Kai; Xia, Jizhe; Xu, Chen; Li, Jing; Gui, Zhipeng; Sun, Min; Li, Zhenglong
2013-09-01
Many organizations are starting to adopt cloud computing to better utilize computing resources by taking advantage of its scalability, cost reduction, and easy-to-access characteristics. Many private or community cloud computing platforms are being built using open-source cloud solutions. However, little has been done to systematically compare and evaluate the features and performance of open-source solutions in supporting geosciences. This paper provides a comprehensive study of three open-source cloud solutions: OpenNebula, Eucalyptus, and CloudStack. We compared a variety of features, capabilities, technologies and performances including: (1) general features and supported services for cloud resource creation and management, (2) advanced capabilities for networking and security, and (3) the performance of the cloud solutions in provisioning and operating the cloud resources, as well as the performance of virtual machines initiated and managed by the cloud solutions in supporting selected geoscience applications. Our study found that: (1) there are no significant performance differences in central processing unit (CPU), memory and I/O of virtual machines created and managed by the different solutions, (2) OpenNebula has the fastest internal network while both Eucalyptus and CloudStack have better virtual machine isolation and security strategies, (3) CloudStack has the fastest operations in handling virtual machines, images, snapshots, volumes and networking, followed by OpenNebula, and (4) the selected cloud computing solutions are capable of supporting concurrent intensive web applications, computing intensive applications, and small-scale model simulations without intensive data communication.
The Next Era: Deep Learning in Pharmaceutical Research.
Ekins, Sean
2016-11-01
Over the past decade we have witnessed the increasing sophistication of machine learning algorithms applied in daily use from internet searches, voice recognition, social network software to machine vision software in cameras, phones, robots and self-driving cars. Pharmaceutical research has also seen its fair share of machine learning developments. For example, applying such methods to mine the growing datasets that are created in drug discovery not only enables us to learn from the past but to predict a molecule's properties and behavior in future. The latest machine learning algorithm garnering significant attention is deep learning, which is an artificial neural network with multiple hidden layers. Publications over the last 3 years suggest that this algorithm may have advantages over previous machine learning methods and offer a slight but discernable edge in predictive performance. The time has come for a balanced review of this technique but also to apply machine learning methods such as deep learning across a wider array of endpoints relevant to pharmaceutical research for which the datasets are growing such as physicochemical property prediction, formulation prediction, absorption, distribution, metabolism, excretion and toxicity (ADME/Tox), target prediction and skin permeation, etc. We also show that there are many potential applications of deep learning beyond cheminformatics. It will be important to perform prospective testing (which has been carried out rarely to date) in order to convince skeptics that there will be benefits from investing in this technique.
Allocating scarce medical resources to the overweight.
Furnham, Adrian; Loganathan, Niroosha; McClelland, Alastair
2010-01-01
A programmatic research effort investigated how lay people weigh information on hypothetical patients when making decisions regarding the allocation of scarce medical resources. This study is partly replicative and partly innovative, and looks particularly at whether overweight patients would be discriminated against in allocating resources. This study aims to determine the importance given to specific patient characteristics when lay participants are asked to allocate scarce medical resources. In all, 156 British adults (82 males, 73 females), aged 19 to 84 years, took part. There were few students. Participants completed a questionnaire requiring them to rank 16 hypothetical patients for access to a kidney dialysis machine. The demographic information presented regarding each hypothetical patient differed on four dimensions: gender, weight, mental health, and religiousness. There were significant main effects for gender, weight, and mental health; females, patients of normal weight, and the mentally well were ranked the highest priority for access to a kidney dialysis machine. Participants discriminated most regarding the weight of hypothetical patients. Different patient characteristics, unrelated to medical prognoses, particularly being overweight, may have an impact on decisions regarding the use of scarce medical resources.
Using machine learning for sequence-level automated MRI protocol selection in neuroradiology.
Brown, Andrew D; Marotta, Thomas R
2018-05-01
Incorrect imaging protocol selection can lead to important clinical findings being missed, contributing to both wasted health care resources and patient harm. We present a machine learning method for analyzing the unstructured text of clinical indications and patient demographics from magnetic resonance imaging (MRI) orders to automatically protocol MRI procedures at the sequence level. We compared 3 machine learning models - support vector machine, gradient boosting machine, and random forest - to a baseline model that predicted the most common protocol for all observations in our test set. The gradient boosting machine model significantly outperformed the baseline and demonstrated the best performance of the 3 models in terms of accuracy (95%), precision (86%), recall (80%), and Hamming loss (0.0487). This demonstrates the feasibility of automating sequence selection by applying machine learning to MRI orders. Automated sequence selection has important safety, quality, and financial implications and may facilitate improvements in the quality and safety of medical imaging service delivery.
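A minimal sketch of the kind of text-based classifier compared above, pairing a bag-of-words representation of the order text with a gradient boosting model; the example orders and protocol labels are invented, and the study's actual feature engineering and sequence-level multi-label output are not reproduced here.

# protocol_classifier.py - illustrative free-text -> MRI protocol classifier (not the paper's model)
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier

orders = ["headache rule out mass", "chronic daily headache",
          "new onset seizure", "seizure with aura",
          "ms follow up", "multiple sclerosis surveillance"]
protocols = ["brain_routine", "brain_routine",
             "brain_seizure", "brain_seizure",
             "ms_protocol", "ms_protocol"]          # hypothetical protocol labels

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    GradientBoostingClassifier(n_estimators=200))
clf.fit(orders, protocols)
print(clf.predict(["first time seizure last night"]))   # expected: ['brain_seizure'] on this toy data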
Automated planning for intelligent machines in energy-related applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weisbin, C.R.; de Saussure, G.; Barhen, J.
1984-01-01
This paper discusses the current activities of the Center for Engineering Systems Advanced Research (CESAR) program related to plan generation and execution by an intelligent machine. The system architecture for the CESAR mobile robot (named HERMIES-1) is described. The minimal cut-set approach is developed to reduce the tree search time of conventional backward chaining planning techniques. Finally, a real-time concept of an Intelligent Machine Operating System is presented in which planning and reasoning are embedded in a system for resource allocation and process management.
1995-09-01
vital processes of a business. process, IDEF, method, methodology, modeling, knowledge acquisition, requirements definition, information systems... knowledge resources. Like manpower, materials, and machines, information and knowledge assets are recognized as vital resources that can be leveraged to...integrated enterprise. These technologies are designed to leverage information and knowledge resources as the key enablers for high quality systems
Information Integration for Concurrent Engineering (IICE) Compendium of Methods Report
1995-06-01
technological, economic, and strategic benefits can be attained through the effective capture, control, and management of information and knowledge ...resources. Like manpower, materials, and machines, information and knowledge assets are recognized as vital resources that can be leveraged to achieve...integrated enterprise. These technologies are designed to leverage information and knowledge resources as the key enablers for high quality systems that
RESTful M2M Gateway for Remote Wireless Monitoring for District Central Heating Networks
Cheng, Bo; Wei, Zesan
2014-01-01
In recent years, increased interest in energy conservation and environmental protection, combined with the development of modern communication and computer technology, has resulted in the replacement of distributed heating by central heating in urban areas. This paper proposes a Representational State Transfer (REST) Machine-to-Machine (M2M) gateway for wireless remote monitoring of a district central heating network. In particular, we focus on the resource-oriented RESTful M2M gateway architecture, present a uniform device abstraction approach based on Open Service Gateway Initiative (OSGi) technology, implement the resource address mapping mechanism between RESTful resources and the physical sensor devices, present a buffer queue combined with a polling method to implement data scheduling and Quality of Service (QoS) guarantees, and give the RESTful M2M gateway open service Application Programming Interface (API) set. The performance has been measured and analyzed. Finally, the conclusions and future work are presented. PMID:25436650
RESTful M2M gateway for remote wireless monitoring for district central heating networks.
Cheng, Bo; Wei, Zesan
2014-11-27
In recent years, increased interest in energy conservation and environmental protection, combined with the development of modern communication and computer technology, has resulted in the replacement of distributed heating by central heating in urban areas. This paper proposes a Representational State Transfer (REST) Machine-to-Machine (M2M) gateway for wireless remote monitoring of a district central heating network. In particular, we focus on the resource-oriented RESTful M2M gateway architecture, present a uniform device abstraction approach based on Open Service Gateway Initiative (OSGi) technology, implement the resource address mapping mechanism between RESTful resources and the physical sensor devices, present a buffer queue combined with a polling method to implement data scheduling and Quality of Service (QoS) guarantees, and give the RESTful M2M gateway open service Application Programming Interface (API) set. The performance has been measured and analyzed. Finally, the conclusions and future work are presented.
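A minimal, generic sketch of the buffer-queue-with-polling idea described above, written with the Python standard library rather than the gateway's OSGi implementation; the sampling interval and payload fields are assumptions.

# buffer_poll.py - illustrative buffered sensor readings consumed by a polling loop
import queue, random, threading, time

buf = queue.Queue(maxsize=100)       # bounded buffer between sensor producers and the REST layer

def sensor_producer(sensor_id):
    for _ in range(5):
        reading = {"sensor": sensor_id, "temp_c": round(random.uniform(60, 90), 1),
                   "ts": time.time()}
        buf.put(reading)             # blocks if the buffer is full (simple back-pressure)
        time.sleep(0.1)

def polling_consumer(poll_interval=0.2, expected=10):
    served = 0
    while served < expected:
        try:
            reading = buf.get(timeout=poll_interval)   # poll the buffer each cycle
            print("expose via REST resource:", reading)
            served += 1
        except queue.Empty:
            pass                     # nothing buffered this cycle

producers = [threading.Thread(target=sensor_producer, args=(i,)) for i in (1, 2)]
for p in producers: p.start()
polling_consumer()
for p in producers: p.join()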
Resources on Law-Related Education: Documents and Journal Articles in ERIC. Yearbook No. 3.
ERIC Educational Resources Information Center
Healy, Langdon T., Ed.; Vontz, Thomas S., Ed.
This ERIC resource is a guide to the array of law-related education (LRE) resources available to teachers. The annotated bibliography offers resources for essential knowledge of the law, innovative teaching methods, and guides to national LRE programs. Included in this collection are abstracts of LRE documents and journal articles, arranged…
NASA Astrophysics Data System (ADS)
Pierce, S. A.
2017-12-01
Decision making for groundwater systems is becoming increasingly important as shifting water demands increasingly impact aquifers. As buffer systems, aquifers provide room for resilient responses and augment the actual timeframe for hydrological response. Yet the pace of impacts, climate shifts, and degradation of water resources is accelerating. To meet these new drivers, groundwater science is transitioning toward the emerging field of Integrated Water Resources Management, or IWRM. IWRM incorporates a broad array of dimensions, methods, and tools to address problems that tend to be complex. Computational tools and accessible cyberinfrastructure (CI) are needed to cross the chasm between science and society. Fortunately, cloud computing environments, such as the new Jetstream system, are evolving rapidly. While still targeting scientific user groups, systems such as Jetstream offer configurable cyberinfrastructure that enables interactive computing and data analysis resources on demand. The web-based interfaces allow researchers to rapidly customize virtual machines and modify computing architecture, increasing the usability of, and access to, advanced compute environments for broader audiences. The result enables dexterous configurations and opens up opportunities for IWRM modelers to expand the reach of analyses, the number of case studies, and the quality of engagement with stakeholders and decision makers. The acute need to identify improved IWRM solutions, paired with advanced computational resources, refocuses the attention of IWRM researchers on applications, workflows, and intelligent systems that are capable of accelerating progress. IWRM must address key drivers of community concern, implement transdisciplinary methodologies, and adapt and apply decision support tools in order to effectively support decisions about groundwater resource management. This presentation will provide an overview of advanced computing services in the cloud, using integrated groundwater management case studies to highlight how cloud CI streamlines the process of setting up an interactive decision support system. Moreover, advances in artificial intelligence offer new techniques for old problems, from integrating data to adaptive sensing and from interactive dashboards to optimizing multi-attribute problems. The combination of scientific expertise, flexible cloud computing solutions, and intelligent systems opens new research horizons.
NASA Astrophysics Data System (ADS)
Liu, Xiaohua; Zhou, Tianfeng; Zhang, Lin; Zhou, Wenchen; Yu, Jianfeng; Lee, L. James; Yi, Allen Y.
2018-07-01
Silicon is a promising mold material for compression molding because of its properties of hardness and abrasion resistance. Silicon wafers with carbide-bonded graphene coating and micro-patterns were evaluated as molds for the fabrication of microlens arrays. This study presents an efficient but flexible manufacturing method for microlens arrays that combines a lapping method and a rapid molding procedure. Unlike conventional processes for microstructures on silicon wafers, such as diamond machining and photolithography, this research demonstrates a unique approach by employing precision steel balls and diamond slurries to create microlenses with accurate geometry. The feasibility of this method was demonstrated by the fabrication of several microlens arrays with different aperture sizes and pitches on silicon molds. The geometrical accuracy and surface roughness of the microlens arrays were measured using an optical profiler. The measurement results indicated good agreement with the optical profile of the design. The silicon molds were then used to copy the microstructures onto polymer substrates. The uniformity and quality of the samples molded through rapid surface molding were also assessed and statistically quantified. To further evaluate the optical functionality of the molded microlens arrays, the focal lengths of the microlens arrays were measured using a simple optical setup. The measurements showed that the microlens arrays molded in this research were compatible with conventional manufacturing methods. This research demonstrated an alternative low-cost and efficient method for microstructure fabrication on silicon wafers, together with the follow-up optical molding processes.
Characterisation of the current switch mechanism in two-stage wire array Z-pinches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burdiak, G. C.; Lebedev, S. V.; Harvey-Thompson, A. J.
2015-11-15
In this paper, we describe the operation of a two-stage wire array z-pinch driven by the 1.4 MA, 240 ns rise-time Magpie pulsed-power device at Imperial College London. In this setup, an inverse wire array acts as a fast current switch, delivering a current pre-pulse into a cylindrical load wire array, before rapidly switching the majority of the generator current into the load after a 100–150 ns dwell time. A detailed analysis of the evolution of the load array during the pre-pulse is presented. Measurements of the load resistivity and energy deposition suggest significant bulk heating of the array mass occurs. The ∼5 kA pre-pulse delivers ∼0.8 J of energy to the load, leaving it in a mixed, predominantly liquid-vapour state. The main current switch occurs as the inverse array begins to explode and plasma expands into the load region. Electrical and imaging diagnostics indicate that the main current switch may evolve in part as a plasma flow switch, driven by the expansion of a magnetic cavity and plasma bubble along the length of the load array. Analysis of implosion trajectories suggests that approximately 1 MA switches into the load in 100 ns, corresponding to a doubling of the generator dI/dt. Potential scaling of the device to higher current machines is discussed.
Fabrication of Metallic Quantum Dot Arrays For Nanoscale Nonlinear Optics
NASA Astrophysics Data System (ADS)
McMahon, M. D.; Hmelo, A. B.; Lopez Magruder, R., III; Weller Haglund, R. A., Jr.; Feldman, L. C.
2003-03-01
Ordered arrays of metal nanocrystals embedded in or sequestered on dielectric hosts have potential applications as elements of nonlinear or near-field optical circuits, as sensitizers for fluorescence emitters and photo detectors, and as anchor points for arrays of biological molecules. Metal nanocrystals are strongly confined electronic systems with size-, shape and spatial orientation-dependent optical responses. At the smallest scales (below about 15 nm diameter), their band structure is drastically altered by the small size of the system, and the reduced population of conduction-band electrons. Here we report on the fabrication of two-dimensional ordered metallic nanocrystal arrays, and one-dimensional nanocrystal-loaded waveguides for optical investigations. We have employed strategies for synthesizing metal nanocrystal composites that capitalize on the best features of focused ion beam (FIB) machining and pulsed laser deposition (PLD). The FIB generates arrays of specialized sites; PLD vapor deposition results in the directed self-assembly of Ag nanoparticles nucleated at the FIB generated sites on silicon substrates. We present results based on the SEM, AFM and optical characterization of prototype composites. This research has been supported by the U.S. Department of Energy under grant DE-FG02-01ER45916.
Simulation of an array-based neural net model
NASA Technical Reports Server (NTRS)
Barnden, John A.
1987-01-01
Research in cognitive science suggests that much of cognition involves the rapid manipulation of complex data structures. However, it is very unclear how this could be realized in neural networks or connectionist systems. A core question is: how could the interconnectivity of items in an abstract-level data structure be neurally encoded? The answer appeals mainly to positional relationships between activity patterns within neural arrays, rather than directly to neural connections in the traditional way. The new method was initially devised to account for abstract symbolic data structures, but it also supports cognitively useful spatial analogue, image-like representations. As the neural model is based on massive, uniform, parallel computations over 2D arrays, the massively parallel processor is a convenient tool for simulation work, although there are complications in using the machine to the fullest advantage. An MPP Pascal simulation program for a small pilot version of the model is running.
A force transmission system based on a tulip-shaped electrostatic clutch for haptic display devices
NASA Astrophysics Data System (ADS)
Sasaki, Hikaru; Shikida, Mitsuhiro; Sato, Kazuo
2006-12-01
This paper describes a novel type of force transmission system for haptic display devices. The system consists of an array of end-effecter elements, a force/displacement transmitter and a single actuator producing a large force/displacement. It has tulip-shaped electrostatic clutch devices to distribute the force/displacement from the actuator among the individual end effecters. The specifications of three components were determined to stimulate touched human fingers. The components were fabricated by using micro-electromechanical systems and conventional machining technologies, and finally they were assembled by hand. The performance of the assembled transmission system was experimentally examined and it was confirmed that each projection in the arrayed end effecters could be moved individually. The actuator in a system whose total size was only 3.0 cm × 3.0 cm × 4.0 cm produced a 600 mN force and displaced individual array elements by 18 µm.
Microfabricated Fountain Pens for High-Density DNA Arrays
Reese, Matthew O.; van Dam, R. Michae; Scherer, Axel; Quake, Stephen R.
2003-01-01
We used photolithographic microfabrication techniques to create very small stainless steel fountain pens that were installed in place of conventional pens on a microarray spotter. Because of the small feature size produced by the microfabricated pens, we were able to print arrays with up to 25,000 spots/cm2, significantly higher than can be achieved by other deposition methods. This feature density is sufficiently large that a standard microscope slide can contain multiple replicates of every gene in a complex organism such as a mouse or human. We tested carryover during array printing with dye solution, labeled DNA, and hybridized DNA, and we found it to be indistinguishable from background. Hybridization also showed good sequence specificity to printed oligonucleotides. In addition to improved slide capacity, the microfabrication process offers the possibility of low-cost mass-produced pens and the flexibility to include novel pen features that cannot be machined with conventional techniques. PMID:12975313
Versioned distributed arrays for resilience in scientific applications: Global view resilience
Chien, A.; Balaji, P.; Beckman, P.; ...
2015-06-01
Exascale studies project reliability challenges for future high-performance computing (HPC) systems. We propose the Global View Resilience (GVR) system, a library that enables applications to add resilience in a portable, application-controlled fashion using versioned distributed arrays. We describe GVR’s interfaces to distributed arrays, versioning, and cross-layer error recovery. Using several large applications (OpenMC, the preconditioned conjugate gradient solver PCG, ddcMD, and Chombo), we evaluate the programmer effort to add resilience. The required changes are small (<2% LOC), localized, and machine-independent, requiring no software architecture changes. We also measure the overhead of adding GVR versioning and show that overheads of <2% are generally achieved. We conclude that GVR’s interfaces and implementation are flexible and portable and create a gentle-slope path to tolerate growing error rates in future systems.
Production scheduling with discrete and renewable additional resources
NASA Astrophysics Data System (ADS)
Kalinowski, K.; Grabowik, C.; Paprocka, I.; Kempa, W.
2015-11-01
In this paper an approach to planning additional resources when scheduling operations is discussed. The considered resources are assumed to be discrete and renewable. In most research in the scheduling domain, the basic and often the only type of resource considered is the workstation. It can be understood as a machine, a device or even as a separated space on the shop floor. In many cases, during the detailed scheduling of operations, the need to use more than one resource for an operation's execution can be indicated. Resource requirements for an operation may relate to different resources or to resources of the same type. Additional resources most often refer to those human resources, tools or equipment whose limited availability in the manufacturing system may influence the execution dates of some operations. In the paper the concept of the division into basic and additional resources and a method for their planning are presented. A situation in which the sets of basic and additional resources are not separable - the same additional resource may be a basic resource for another operation - is also considered. Scheduling of operations involving a greater number of resources can cause many difficulties, depending on whether the resource is involved for the entire duration of the operation, only in selected part(s) of the operation (e.g. as auxiliary staff at setup time) or cyclically - e.g. when an operator supports more than one machine or supervises the execution of several operations. For this reason the dates and working times of resource participation in an operation can differ. The presented issues are crucial when modelling the production scheduling environment and designing data structures for the development of scheduling software.
NASA Astrophysics Data System (ADS)
Aneri, Parikh; Sumathy, S.
2017-11-01
Cloud computing provides services over the internet, delivering application resources and data to users based on their demand. Cloud computing is built on a consumer-provider model: the cloud provider offers resources which consumers access in order to build applications according to their own requirements. A cloud data center is a bulk of resources on a shared-pool architecture for cloud users to access. Virtualization is the heart of the cloud computing model; it provides virtual machines configured per application, and applications are free to choose their own configuration. On one hand there is a huge number of resources, and on the other hand a huge number of requests has to be served effectively. Therefore, the resource allocation policy and the scheduling policy play a very important role in allocating and managing resources in this cloud computing model. This paper proposes a load balancing policy based on the Hungarian algorithm. The Hungarian algorithm provides a dynamic load balancing policy together with a monitor component. The monitor component helps to increase cloud resource utilization by managing the Hungarian algorithm, monitoring its state and altering its state using artificial intelligence. CloudSim, the extensible toolkit used in this proposal, simulates the cloud computing environment.
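As an illustration of the assignment step underlying such a load balancing policy, the sketch below solves a small request-to-VM assignment with the Hungarian algorithm via SciPy; the cost matrix values are invented, and the paper's monitor component and CloudSim integration are not modelled.

import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: rows are incoming requests, columns are virtual
# machines; entry [i, j] is an assumed cost (e.g. expected completion time)
# of running request i on VM j. All numbers are invented.
cost = np.array([
    [4.0, 2.0, 8.0],
    [4.0, 3.0, 7.0],
    [3.0, 1.0, 6.0],
])

# The Hungarian algorithm returns the assignment that minimises total cost.
rows, cols = linear_sum_assignment(cost)
for req, vm in zip(rows, cols):
    print("request %d -> VM %d (cost %.1f)" % (req, vm, cost[req, vm]))
print("total cost:", cost[rows, cols].sum())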
Turbine rotor-stator leaf seal and related method
Herron, William Lee; Butkiewicz, Jeffrey John
2003-01-01
A seal assembly for installation between rotating and stationary components of a machine includes a first plurality of leaf spring segments secured to the stationary component in a circumferential array surrounding the rotating component, the leaf spring segments each having a radial mounting portion and a substantially axial sealing portion, the plurality of leaf spring segments shingled in a circumferential direction.
European Seminar on Neural Computing
1988-08-31
Many processing elements can be fabricated on a single chip. Two specific cellular arrays are considered, namely the programmable systolic chip (Fisher, 1983) and the connection machine (Treleaven). In object-oriented languages (for example, SMALLTALK or POOL) the basic concepts are that objects are viewed as active and may contain state. In a data-flow computer, the availability of input operands triggers the execution of an instruction.
Strain-tolerant ceramic coated seal
Schienle, James L.; Strangman, Thomas E.
1994-01-01
A metallic regenerator seal is provided having multi-layer coating comprising a NiCrAlY bond layer, a yttria stabilized zirconia (YSZ) intermediate layer, and a ceramic high temperature solid lubricant surface layer comprising zinc oxide, calcium fluoride, and tin oxide. An array of discontinuous grooves is laser machined into the outer surface of the solid lubricant surface layer making the coating strain tolerant.
Salvado, José; Espírito-Santo, António; Calado, Maria
2012-01-01
This paper proposes a distributed system for analysis and monitoring (DSAM) of vibrations and acoustic noise, which consists of an array of intelligent modules, sensor modules, a communication bus and a host PC acting as data center. The main advantages of the DSAM are its modularity, scalability, and flexibility for the use of different types of sensors/transducers, with analog or digital outputs, and for signals of different nature. Its final cost is also significantly lower than other available commercial solutions. The system is reconfigurable, can operate in either synchronous or asynchronous mode, with programmable sampling frequencies, 8-bit or 12-bit resolution and a memory buffer of 15 kbyte. It allows real-time data acquisition for signals of different nature in applications that require a large number of sensors, and thus it is suited for monitoring of vibrations in Linear Switched Reluctance Actuators (LSRAs). The acquired data allow the full characterization of the LSRA in terms of its response to vibrations of structural origin, and the vibrations and acoustic noise emitted under normal operation. The DSAM can also be used for electrical machine condition monitoring, machine fault diagnosis, structural characterization and monitoring, among other applications. PMID:22969364
Wire array K-shell sources on the SPHINX generator
NASA Astrophysics Data System (ADS)
D'Almeida, Thierry; Lassalle, Francis; Grunenwald, Julien; Maury, Patrick; Zucchini, Frédéric; Niasse, Nicolas; Chittenden, Jeremy
2014-10-01
The SPHINX machine is an LTD-based Z-pinch driver operated by the CEA Gramat (France) and primarily used for studying K-shell radiation effects. We present the results of experiments carried out with single and nested large-diameter aluminium wire array loads driven by a current of ~5 MA in ~800 ns. The dynamics of the implosion are studied with filtered X-UV time-integrated pin-hole cameras. The plasma electron temperature and the characteristics of the sources are estimated with time- and spatially-resolved spectrographs and PCDs. It is shown that Al K-shell yields (>1 keV) up to 27 kJ are obtained for a total radiation of ~230 kJ. These results are compared with simulations performed using the latest implementation of the non-LTE DCA code Spk in the 3D Eulerian MHD framework Gorgon developed at Imperial College. Filtered synthetic bolometer and PCD signals, time-dependent spatially integrated spectra and X-UV images are produced and show good agreement with the experimental data. The capabilities of a prospective SPHINX II machine (20 MA, ~800 ns) are also assessed for a wider variety of sources (Ti, Cu and W).
Torque Production in a Halbach Machine
NASA Technical Reports Server (NTRS)
Eichenberg, Dennis J.; Gallo, Christopher A.; Thompson, William K.; Vrnak, Daniel R.
2006-01-01
The NASA John H. Glenn Research Center initiated the investigation of torque production in a Halbach machine for the Levitated Ducted Fan (LDF) Project to obtain empirical data in determining the feasibility of using a Halbach motor for the project. LDF is a breakthrough technology for "Electric Flight" with the development of a clean, quiet, electric propulsor system. Benefits include zero emissions, decreased dependence on fossil fuels, increased efficiency, increased reliability, reduced maintenance, and decreased operating noise levels. A commercial permanent magnet brushless motor rotor was tested with a custom stator. An innovative rotor utilizing a Halbach array was designed and developed to fit directly into the same stator. The magnets are oriented at 90deg to the adjacent magnet, which cancels the magnetic field on the inside of the rotor and strengthens the field on the outside of the rotor. A direct comparison of the commercial rotor and the Halbach rotor was made. In addition, various test models were designed and developed to validate the basic principles described, and the theoretical work that was performed. The report concludes that a Halbach array based motor can provide significant improvements in electric motor performance and reliability.
NASA Astrophysics Data System (ADS)
Manzi, M. S.; Webb, S. J.; Durrheim, R. J.; Gibson, R.
2016-12-01
The African continent is endowed with a wealth of resources that are the focus of vigorous exploration by international mining companies. However, it is unfortunate that many African countries have been unable to capitalize on resource development due to a lack of expertise in research, exploration and resource management needed to develop their mineral deposits. The capacity to develop natural resources in Africa is inextricably linked to the ability to fully develop intellectual capacity. Thus, training young African geoscientists to investigate and manage Africa's natural resources, and developing scientific programs about Africa's resources, their settings, controls and origins, should lie at the heart of all African universities. Ten years into the AfricaArray program, it is worth reviewing some of the insights and successes we have gained. In Africa, there is a lack of knowledge of what a "scientist" is, and university is often viewed as a continuation of high school. With no real exposure to research, students don't understand the huge difference between high school and university, and they treat the university as a high school. One way to mitigate this may be to include undergraduate research opportunities in the summer break, but funding is difficult to allocate. This observation highlights the need to critically review our approach to research, teaching and learning, and social engagement at school level. At university level a key focus has been the development of capacity through international collaborative research and training. The School of Geosciences at Wits University is already the leading institution in Africa for its breadth of geosciences research and training, and the applied nature of its research, being ranked in the top 1% of institutions worldwide in its field. It is currently a lead partner in a flagship international geophysics research programme focused on Africa - the AfricaArray Field School and AfricaArray Programme. The field school has spawned other developing field schools throughout Africa.
ERIC Educational Resources Information Center
Nickerson, Gord
1991-01-01
Describes the use and applications of the communications program Telnet for remote log-in, a basic interactive resource sharing service that enables users to connect to any machine on the Internet and conduct a session. The Virtual Terminal--the central component of Telnet--is also described, as well as problems with terminals, services…
Materials Development for Auxiliary Components for Large Compact Mo/Au TES Arrays
NASA Technical Reports Server (NTRS)
Finkbeiner, F. M.; Chervenak, J. A.; Bandler, S. R.; Brekosky, R.; Brown, A. D.; Figueroa-Feliciano, E.; Iyomoto, N.; Kelley, R. L.; Kilbourne, C. A.; Porter, F. S.;
2007-01-01
We describe our current fabrication process for arrays of superconducting transition edge sensor microcalorimeters, which incorporates superconducting Mo/Au bilayers and micromachined silicon structures. We focus on materials and integration methods for array heatsinking with our bilayer and micromachining processes. The thin superconducting molybdenum bottom layer strongly influences the superconducting behavior and overall film characteristics of our molybdenum/gold transition-edge sensors (TES). Concurrent with our successful TES microcalorimeter array development, we have started to investigate the thin film properties of molybdenum monolayers within a given phase space of several important process parameters. The monolayers are sputtered or electron-beam deposited exclusively on LPCVD silicon nitride coated silicon wafers. In our current bilayer process, molybdenum is electron-beam deposited at high wafer temperatures in excess of 500 degrees C. Identifying process parameters that yield high quality bilayers at a significantly lower temperature will increase options for incorporating process-sensitive auxiliary array components (AAC) such as array heat sinking and electrical interconnects into our overall device process. We are currently developing two competing technical approaches for heat sinking large compact TES microcalorimeter arrays. Our efforts to improve array heat sinking and mitigate thermal cross-talk between pixels include copper backside deposition on completed device chips and copper-filled micro-trenches surface-machined into wafers. In addition, we fabricated prototypes of copper through-wafer microvias as a potential way to read out the arrays. We present an overview on the results of our molybdenum monolayer study and its implications concerning our device fabrication. We discuss the design, fabrication process, and recent test results of our AAC development.
Oak Ridge Reservation. Physical Characteristics and National Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parr, Patricia Dreyer; Joan, F. Hughes
The topography, geology, hydrology, vegetation, and wildlife of the Oak Ridge Reservation (ORR) provide a complex and intricate array of resources that directly impact land stewardship and use decisions. The purpose of this document is to consolidate general information regarding the natural resources and physical characteristics of the ORR.
Accelerating String Set Matching in FPGA Hardware for Bioinformatics Research
Dandass, Yoginder S; Burgess, Shane C; Lawrence, Mark; Bridges, Susan M
2008-01-01
Background This paper describes techniques for accelerating the performance of the string set matching problem with particular emphasis on applications in computational proteomics. The process of matching peptide sequences against a genome translated in six reading frames is part of a proteogenomic mapping pipeline that is used as a case-study. The Aho-Corasick algorithm is adapted for execution in field programmable gate array (FPGA) devices in a manner that optimizes space and performance. In this approach, the traditional Aho-Corasick finite state machine (FSM) is split into smaller FSMs, operating in parallel, each of which matches up to 20 peptides in the input translated genome. Each of the smaller FSMs is further divided into five simpler FSMs such that each simple FSM operates on a single bit position in the input (five bits are sufficient for representing all amino acids and special symbols in protein sequences). Results This bit-split organization of the Aho-Corasick implementation enables efficient utilization of the limited random access memory (RAM) resources available in typical FPGAs. The use of on-chip RAM as opposed to FPGA logic resources for FSM implementation also enables rapid reconfiguration of the FPGA without the place and routing delays associated with complex digital designs. Conclusion Experimental results show storage efficiencies of over 80% for several data sets. Furthermore, the FPGA implementation executing at 100 MHz is nearly 20 times faster than an implementation of the traditional Aho-Corasick algorithm executing on a 2.67 GHz workstation. PMID:18412963
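For reference, a plain software version of the Aho-Corasick automaton used above is sketched below in Python; it matches a toy peptide set against a short sequence and does not model the bit-split, multi-FSM hardware decomposition described in the paper.

from collections import deque

def build_aho_corasick(patterns):
    """Build the goto/fail/output tables of the classic Aho-Corasick FSM."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:                     # build the trie (goto function)
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    q = deque(goto[0].values())              # depth-1 states fail to the root
    while q:                                 # breadth-first failure links
        r = q.popleft()
        for ch, s in goto[r].items():
            q.append(s)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[s] = goto[f].get(ch, 0)
            out[s] |= out[fail[s]]
    return goto, fail, out

def search(text, goto, fail, out):
    s = 0
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            yield i - len(pat) + 1, pat

# toy peptide set matched against a short translated sequence
tables = build_aho_corasick(["MKV", "KVL", "VLA"])
print(list(search("GGMKVLAQ", *tables)))     # [(2, 'MKV'), (3, 'KVL'), (4, 'VLA')]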
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, H; UT Southwestern Medical Center, Dallas, TX; Hilts, M
Purpose: To commission a multislice computed tomography (CT) scanner for fast and reliable readout of radiation therapy (RT) dose distributions using CT polymer gel dosimetry (PGD). Methods: Commissioning was performed for a 16-slice CT scanner using images acquired through a 1 L cylinder filled with water. Additional images were collected using a single slice machine for comparison purposes. The variability in CT number associated with the anode heel effect was evaluated and used to define a new slice-by-slice background image subtraction technique. Image quality was assessed for the multislice system by comparing image noise and uniformity to that of the single slice machine. The consistency in CT number across slices acquired simultaneously using the multislice detector array was also evaluated. Finally, the variability in CT number due to increasing x-ray tube load was measured for the multislice scanner and compared to the tube load effects observed on the single slice machine. Results: Slice-by-slice background subtraction effectively removes the variability in CT number across images acquired simultaneously using the multislice scanner and is the recommended background subtraction method when using a multislice CT system. Image quality for the multislice machine was found to be comparable to that of the single slice scanner. Further study showed CT number was consistent across image slices acquired simultaneously using the multislice detector array for each detector configuration of the slice thickness examined. In addition, the multislice system was found to eliminate variations in CT number due to increasing x-ray tube load and to reduce scanning time by a factor of 4 when compared to imaging a large volume using a single slice scanner. Conclusion: A multislice CT scanner has been commissioned for CT PGD, allowing images of an entire dose distribution to be acquired in a matter of minutes. Funding support provided by the Natural Sciences and Engineering Research Council of Canada (NSERC)
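A minimal numpy sketch of the slice-by-slice background subtraction idea is given below, using synthetic CT volumes with an artificial slice-dependent offset standing in for the anode heel effect; the numbers are illustrative only.

import numpy as np

# Synthetic CT volumes (slices, rows, cols). 'pre' is the background scan,
# 'post' the post-irradiation readout; a slice-dependent offset stands in
# for the anode heel effect.
rng = np.random.default_rng(0)
heel = np.linspace(-3.0, 3.0, 16)[:, None, None]
pre = 10.0 + heel + rng.normal(0.0, 1.0, (16, 64, 64))
post = pre + 5.0                             # uniform 5-unit dose signal

# One background image (here the central slice) applied to every slice:
# the slice-dependent offset leaks into the dose map.
dose_single = post - pre[8]

# Slice-by-slice subtraction: every slice uses its own background image,
# removing the variation across the multislice stack.
dose_slicewise = post - pre

print("slice-to-slice spread, single background: %.2f"
      % dose_single.mean(axis=(1, 2)).std())
print("slice-to-slice spread, slice-by-slice:    %.2f"
      % dose_slicewise.mean(axis=(1, 2)).std())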
NASA Astrophysics Data System (ADS)
Naessens, Kris; Van Hove, An; Coosemans, Thierry; Verstuyft, Steven; Ottevaere, Heidi; Vanwassenhove, Luc; Van Daele, Peter; Baets, Roel G.
2000-06-01
Laser ablation is extremely well suited for rapid prototyping and proves to be a versatile technique delivering high accuracy dimensioning and repeatability of features in a wide diversity of materials. In this paper, we present laser ablation as a fabrication method for the micro machining of arrays consisting of precisely dimensioned U-grooves in dedicated polycarbonate and polymethylmethacrylate plates. The dependency of the performance on various parameters is discussed. The fabricated plates are used to hold optical fibers by means of a UV-curable adhesive. Stacking and gluing of the plates allows the assembly of a 2D connector of plastic optical fibers for short distance optical interconnects.
Large Area MEMS Based Ultrasound Device for Cancer Detection.
Wodnicki, Robert; Thomenius, Kai; Hooi, Fong Ming; Sinha, Sumedha P; Carson, Paul L; Lin, Der-Song; Zhuang, Xuefeng; Khuri-Yakub, Pierre; Woychik, Charles
2011-08-21
We present image results obtained using a prototype ultrasound array which demonstrates the fundamental architecture for a large area MEMS based ultrasound device for detection of breast cancer. The prototype array consists of a tiling of capacitive Micro-Machined Ultrasound Transducers (cMUTs) which have been flip-chip attached to a rigid organic substrate. The pitch of the cMUT elements is 185 µm and the operating frequency is nominally 9 MHz. The spatial resolution of the new probe is comparable to that of production PZT probes; however, the sensitivity is reduced by conditions that should be correctable. Simulated opposed-view image registration and Speed of Sound volume reconstruction results for ultrasound in the mammographic geometry are also presented.
Sanges, Remo; Cordero, Francesca; Calogero, Raffaele A
2007-12-15
OneChannelGUI is an add-on Bioconductor package providing a new set of functions extending the capability of the affylmGUI package. This library provides a graphical user interface (GUI) for Bioconductor libraries to be used for quality control, normalization, filtering, statistical validation and data mining for single channel microarrays. Affymetrix 3' expression (IVT) arrays as well as the new whole transcript expression arrays, i.e. gene/exon 1.0 ST, are currently implemented. oneChannelGUI is available for most platforms on which R runs, i.e. Windows and Unix-like machines. http://www.bioconductor.org/packages/2.0/bioc/html/oneChannelGUI.html
Prospects and features of robotics in russian crop farming
NASA Astrophysics Data System (ADS)
Dokin, B. D.; Aletdinova, A. A.; Kravchenko, M. S.
2017-01-01
The specific character of agriculture and the low levels of technical and technological, information and communication, human-resource and managerial capacities of small and medium Russian agricultural producers explain the slow pace of implementation of robotics in plant breeding. Existing models are characterized by limited speech understanding technologies and limited use of modern power supplies, bionic systems and micro-robots. Serial production of robotics for agriculture will replace human labor in the future. It will also help to solve the problem of hunger, reduce environmental damage and reduce the consumption of non-renewable resources. The creation and use of robotics should be based on the generated System of machines and technologies for an optimal machine-tractor fleet.
Rayport, Jeffrey F; Jaworski, Bernard J
2004-12-01
Most companies serve customers through a broad array of interfaces, from retail sales clerks to Web sites to voice-response telephone systems. But while the typical company has an impressive interface collection, it doesn't have an interface system. That is, the whole set does not add up to the sum of its parts in its ability to provide service and build customer relationships. Too many people and too many machines operating with insufficient coordination (and often at cross-purposes) mean rising complexity, costs, and customer dissatisfaction. In a world where companies compete not on what they sell but on how they sell it, turning that liability into an asset is what separates winners from losers. In this adaptation of their forthcoming book by the same title, Jeffrey Rayport and Bernard Jaworski explain how companies must reengineer their customer interface systems for optimal efficiency and effectiveness. Part of that transformation, they observe, will involve a steady encroachment by machine interfaces into areas that have long been the sacred province of humans. Managers now have opportunities unprecedented in the history of business to use machines, not just people, to credibly manage their interactions with customers. Because people and machines each have their strengths and weaknesses, company executives must identify what people do best, what machines do best, and how to deploy them separately and together. Front-office reengineering subjects every current and potential service interface to an analysis of opportunities for substitution (using machines instead of people), complementarity (using a mix of machines and people), and displacement (using networks to shift physical locations of people and machines), with the twin objectives of compressing costs and driving top-line growth through increased customer value.
Requirements for fault-tolerant factoring on an atom-optics quantum computer.
Devitt, Simon J; Stephens, Ashley M; Munro, William J; Nemoto, Kae
2013-01-01
Quantum information processing and its associated technologies have reached a pivotal stage in their development, with many experiments having established the basic building blocks. Moving forward, the challenge is to scale up to larger machines capable of performing computational tasks not possible today. This raises questions that need to be urgently addressed, such as what resources these machines will consume and how large will they be. Here we estimate the resources required to execute Shor's factoring algorithm on an atom-optics quantum computer architecture. We determine the runtime and size of the computer as a function of the problem size and physical error rate. Our results suggest that once the physical error rate is low enough to allow quantum error correction, optimization to reduce resources and increase performance will come mostly from integrating algorithms and circuits within the error correction environment, rather than from improving the physical hardware.
30 Years of radiotherapy service in Southern Thailand: workload vs resources.
Phungrassami, Temsak; Funsian, Amporn; Sriplung, Hutcha
2013-01-01
To study the pattern of patient load and of personnel and equipment resources from 30 years' experience in Southern Thailand. This retrospective study collected secondary data from the Division of Therapeutic Radiology and Oncology and the Songklanagarind Hospital Tumor Registry database, Faculty of Medicine, Prince of Songkla University, during the period 1982-2012. The number of new patients who had radiation treatment gradually increased from 121 in 1982 to 2,178 in 2011. Shortages of all kinds of personnel were demonstrated as compared to the recommendations, especially of radiotherapy technicians. In 2011, Southern Thailand, with two radiotherapy centers, had 0.44 megavoltage radiotherapy machines (cobalt or linear accelerator) per million of population. This number is suboptimal, but could be managed cost-effectively by prolonging machine operating times during personnel shortages. This study identified a discrepancy between workload and resources in one medical school radiotherapy center.
Evaluation of surface water resources from machine-processing of ERTS multispectral data
NASA Technical Reports Server (NTRS)
Mausel, P. W.; Todd, W. J.; Baumgardner, M. F.; Mitchell, R. A.; Cook, J. P.
1976-01-01
The surface water resources of a large metropolitan area, Marion County (Indianapolis), Indiana, are studied in order to assess the potential value of ERTS spectral analysis to water resources problems. The results of the research indicate that all surface water bodies over 0.5 ha were identified accurately from ERTS multispectral analysis. Five distinct classes of water were identified and correlated with parameters which included: degree of water siltiness; depth of water; presence of macro and micro biotic forms in the water; and presence of various chemical concentrations in the water. The machine processing of ERTS spectral data used alone or in conjunction with conventional sources of hydrological information can lead to the monitoring of area of surface water bodies; estimated volume of selected surface water bodies; differences in degree of silt and clay suspended in water and degree of water eutrophication related to chemical concentrations.
Resource Guide for Persons with Speech or Language Impairments.
ERIC Educational Resources Information Center
IBM, Atlanta, GA. National Support Center for Persons with Disabilities.
The resource guide identifies products which assist speech or language impaired individuals in accessing IBM (International Business Machine) Personal Computers or the IBM Personal System/2 family of products. An introduction provides a general overview of ways computers can help persons with speech or language handicaps. The document then…
Educators Resource Guide to WP Material for the Classroom.
ERIC Educational Resources Information Center
Potts, Peggy J.
This guide lists materials to be used in the classroom instruction of word processing technology. A listing of international, national, and regional word processing associations is followed by an annotated enumeration of resources under nine headings: (1) booklets and brochures, (2) books, (3) films, (4) handbooks, (5) machine transcription…
Passive front-ends for wideband millimeter wave electronic warfare
NASA Astrophysics Data System (ADS)
Jastram, Nathan Joseph
This thesis presents the analysis, design and measurements of novel passive front ends of interest to millimeter wave electronic warfare systems. Emerging threats in the millimeter waves (18 GHz and above) have led to a push for new systems capable of addressing these threats. At these frequencies, traditional techniques of design and fabrication are challenging due to small size, limited bandwidth and losses. The use of surface micromachining technology for wideband direction finding with multiple element antenna arrays for electronic support is demonstrated. A wideband tapered slot antenna is first designed and measured as an array element for the subsequent arrays. Both 18--36 GHz and 75--110 GHz amplitude only and amplitude/phase two element direction finding front ends are designed and measured. The design of arrays using Butler matrix and Rotman lens beamformers for greater than two element direction finding over W band and beyond is also presented. The design of a dual polarized high power capable front end for electronic attack over an 18--45 GHz band is presented. To combine two polarizations into the same radiating aperture, an orthomode transducer (OMT) based upon a new double ridge waveguide cross section is developed. To provide greater flexibility in needed performance characteristics, several different turnstile junction matching sections are tested. A modular horn section is proposed to address flexible and ever changing operational requirements, and is designed for performance criteria such as constant gain, beamwidth, etc. A multi-section branch guide coupler and low loss Rotman lens based upon the proposed cross section are also developed. Prototyping methods for the millimeter wave electronic warfare front ends designed herein are investigated. Specifically, both printed circuit board (PCB) prototyping of micromachined systems and 3D printing of conventionally machined horns are presented. A 4--8 GHz two element array with integrated beamformer fabricated using the stacking of PCB boards is shown, and measured results compare favorably with the micromachined front ends. A 3D printed small aperture horn is compared with a conventionally machined horn, and measured results show similar performance with a ten-fold reduction in cost and weight.
Fault-Tolerant Coding for State Machines
NASA Technical Reports Server (NTRS)
Naegle, Stephanie Taft; Burke, Gary; Newell, Michael
2008-01-01
Two reliable fault-tolerant coding schemes have been proposed for state machines that are used in field-programmable gate arrays and application-specific integrated circuits to implement sequential logic functions. The schemes apply to strings of bits in state registers, which are typically implemented in practice as assemblies of flip-flop circuits. If a single-event upset (SEU, a radiation-induced change in the bit in one flip-flop) occurs in a state register, the state machine that contains the register could go into an erroneous state or could hang, by which is meant that the machine could remain in undefined states indefinitely. The proposed fault-tolerant coding schemes are intended to prevent the state machine from going into an erroneous or hang state when an SEU occurs. To ensure reliability of the state machine, the coding scheme for bits in the state register must satisfy the following criteria: 1. All possible states are defined. 2. An SEU brings the state machine to a known state. 3. There is no possibility of a hang state. 4. No false state is entered. 5. An SEU exerts no effect on the state machine. Fault-tolerant coding schemes that have been commonly used include binary encoding and "one-hot" encoding. Binary encoding is the simplest state machine encoding and satisfies criteria 1 through 3 if all possible states are defined. Binary encoding is a binary count of the state number in sequence; for example, an eight-state machine uses the three-bit codes 000 through 111. In one-hot encoding, N bits are used to represent N states: All except one of the bits in a string are 0, and the position of the 1 in the string represents the state. With proper circuit design, one-hot encoding can satisfy criteria 1 through 4. Unfortunately, the requirement to use N bits to represent N states makes one-hot coding inefficient.
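The difference between the two encodings with respect to single-event upsets can be illustrated with a short Python check (a sketch, not the flight logic): every single-bit flip of a binary code lands in another defined, and therefore wrong, state, while every single-bit flip of a one-hot code produces an illegal code that detection logic can flag.

# Compare binary and one-hot encodings of an eight-state machine by checking
# where every possible single-bit upset lands (a sketch, not the flight design).
N = 8
binary = {format(i, "03b"): i for i in range(N)}          # 000 .. 111
one_hot = {format(1 << i, "08b"): i for i in range(N)}    # 00000001 .. 10000000

def flips(code):
    """All codewords reachable from 'code' by a single-event upset."""
    return {code[:k] + ("1" if code[k] == "0" else "0") + code[k + 1:]
            for k in range(len(code))}

for name, table in (("binary", binary), ("one-hot", one_hot)):
    hits = sum(f in table for c in table for f in flips(c))
    total = sum(len(flips(c)) for c in table)
    print("%s: %d/%d single-bit upsets land in a defined state" % (name, hits, total))
# binary: every flip lands in another defined (but wrong) state;
# one-hot: every flip yields an illegal code, which detection logic can catch.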
Using Pipelined XNOR Logic to Reduce SEU Risks in State Machines
NASA Technical Reports Server (NTRS)
Le, Martin; Zheng, Xin; Katanyoutant, Sunant
2008-01-01
Single-event upsets (SEUs) pose great threats to the state-machine control logic of avionic systems, which is frequently used to control sequences of events and to qualify protocols. The risks of SEUs manifest in two ways: (a) the state machine's state information is changed, causing the state machine to unexpectedly transition to another state; (b) due to the asynchronous nature of SEUs, the state machine's state registers become metastable, consequently causing any combinational logic associated with the metastable registers to malfunction temporarily. Effect (a) can be mitigated with methods such as triple-modular redundancy (TMR). However, effect (b) cannot be eliminated and can degrade the effectiveness of any mitigation method of effect (a). Although there is no way to completely eliminate the risk of SEU-induced errors, the risk can be made very small by use of a combination of very fast state-machine logic and error-detection logic. Therefore, one of the two main elements of the present method is to design the fastest state-machine logic circuitry by basing it on the fastest generic state-machine design, which is that of a one-hot state machine. The other of the two main design elements is to design fast error-detection logic circuitry and to optimize it for implementation in a field-programmable gate array (FPGA) architecture: In the resulting design, the one-hot state machine is fitted with a multiple-input XNOR gate for detection of illegal states. The XNOR gate is implemented with lookup tables and with pipelines for high speed. In this method, the task of designing all the logic must be performed manually because no currently available logic synthesis software tool can produce optimal solutions of design problems of this type. However, some assistance is provided by a script, written for this purpose in the Python language (an object-oriented interpretive computer language) to automatically generate hardware description language (HDL) code from state-transition rules.
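A software model of the illegal-state condition that the pipelined XNOR detector targets is sketched below; it captures only the logical test (exactly one flip-flop set), not the lookup-table or pipeline implementation.

def illegal_one_hot(state_bits):
    """Logical condition behind the illegal-state detector: a one-hot register
    is legal only if exactly one flip-flop is set. (The FPGA design above
    realises the check with a pipelined multiple-input XNOR built from lookup
    tables; this model captures only the condition, not that structure.)"""
    return bin(state_bits).count("1") != 1

# every single-event upset on a legal 4-state one-hot register is flagged
for legal in (0b0001, 0b0010, 0b0100, 0b1000):
    for bit in range(4):
        assert illegal_one_hot(legal ^ (1 << bit))
print("all single-bit upsets on a 4-bit one-hot register are detected")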
NASA Astrophysics Data System (ADS)
Chen, Shun-Tong; Chang, Chih-Hsien
2013-12-01
This study presents a novel approach to the fabrication of a biomedical-mold for producing convex platform PMMA (poly-methyl-meth-acrylate) slides for counting cells. These slides allow for the microscopic examination of urine sediment cells. Manufacturing of such slides incorporates three important procedures: (1) the development of a tabletop high-precision dual-spindle CNC (computerized numerical control) machine tool; (2) the formation of a boron-doped polycrystalline composite diamond (BD-PCD) wheel-tool on the machine tool developed in procedure (1); and (3) the cutting of a multi-groove-biomedical-mold array using the formed diamond wheel-tool in situ on the developed machine. The machine incorporates a hybrid working platform providing wheel-tool thinning using spark erosion to cut, polish, and deburr microgrooves on NAK80 steel directly. With consideration given to the electrically conductive properties of BD-PCD, the diamond wheel-tool is thinned to a thickness of 5 µm by rotary wire electrical discharge machining. The thinned wheel-tool can grind microgrooves 10 µm wide. An embedded design, which inserts a close fitting precision core into the biomedical-mold to create a step difference (concave inward) of 50 µm in height between the core and the mold, is also proposed and realized. The perpendicular dual-spindles and precision rotary stage are features that allow for biomedical-mold machining without the necessity of unloading and repositioning materials until all tasks are completed. A PMMA biomedical-slide with a plurality of juxtaposed counting chambers is formed and its usefulness verified.
Optimal expression evaluation for data parallel architectures
NASA Technical Reports Server (NTRS)
Gilbert, John R.; Schreiber, Robert
1990-01-01
A data parallel machine represents an array or other composite data structure by allocating one processor (at least conceptually) per data item. A pointwise operation can be performed between two such arrays in unit time, provided their corresponding elements are allocated in the same processors. If the arrays are not aligned in this fashion, the cost of moving one or both of them is part of the cost of the operation. The choice of where to perform the operation then affects this cost. If an expression with several operands is to be evaluated, there may be many choices of where to perform the intermediate operations. An efficient algorithm is given to find the minimum-cost way to evaluate an expression, for several different data parallel architectures. This algorithm applies to any architecture in which the metric describing the cost of moving an array is robust. This encompasses most of the common data parallel communication architectures, including meshes of arbitrary dimension and hypercubes. Remarks are made on several variations of the problem, some of which are solved and some of which remain open.
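The following Python sketch illustrates the kind of tree dynamic program involved, under assumptions not taken from the paper: arrays live at integer offsets on a 1-D mesh and moving an array by distance d costs d (a robust metric); each node's table holds the minimal cost of producing its value at every position.

def move_cost(src, dst):
    # assumed robust metric: moving an array by distance d on a 1-D mesh costs d
    return abs(src - dst)

def best_cost(node, positions):
    """Map each position p to the minimal cost of having node's value at p.
    Leaves are ("leaf", home_position); internal nodes are ("op", left, right)."""
    if node[0] == "leaf":
        return {p: move_cost(node[1], p) for p in positions}
    left = best_cost(node[1], positions)
    right = best_cost(node[2], positions)
    # a pointwise op runs at some position q with both operands aligned there
    at_q = {q: left[q] + right[q] for q in positions}
    return {p: min(at_q[q] + move_cost(q, p) for q in positions)
            for p in positions}

# (A + B) + C with A, B, C initially at offsets 0, 4 and 9
expr = ("op", ("op", ("leaf", 0), ("leaf", 4)), ("leaf", 9))
positions = range(10)
costs = best_cost(expr, positions)
best = min(costs, key=costs.get)
print("best final position:", best, "movement cost:", costs[best])   # 4, 9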
Atoche, Alejandro Castillo; Castillo, Javier Vázquez
2012-01-01
A high-speed dual super-systolic core for reconstructive signal processing (SP) operations consists of a double parallel systolic array (SA) machine in which each processing element of the array is also conceptualized as another SA in a bit-level fashion. In this study, we addressed the design of a high-speed dual super-systolic array (SSA) core for the enhancement/reconstruction of remote sensing (RS) imaging of radar/synthetic aperture radar (SAR) sensor systems. The selected reconstructive SP algorithms are efficiently transformed into their parallel representation and then mapped into an efficient high performance embedded computing (HPEC) architecture on reconfigurable Xilinx field programmable gate array (FPGA) platforms. As an implementation test case, the proposed approach was aggregated in a HW/SW co-design scheme in order to solve the nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) from a remotely sensed scene. We show how such a dual SSA core drastically reduces the computational load of complex RS regularization techniques, achieving the required real-time operational mode. PMID:22736964
Lessons from Cotton: Research Projects Following Development of a Community-based Genotyping Array
USDA-ARS?s Scientific Manuscript database
High-throughput, cost-effective genotyping arrays provide a standardized resource for plant breeding communities that can be used for a wide range of applications at a suitable pace for integrating pertinent information into breeding programs. Traditionally, crop research communities will target dev...
ERIC Educational Resources Information Center
Texas State Technical Coll. System, Waco.
This package consists of course syllabi, an instructor's handbook, and a student laboratory manual for a 2-year vocational training program to prepare students for entry-level employment in computer-aided drafting and design in the machine tool industry. The program was developed through a modification of the DACUM (Developing a Curriculum)…
Selecting a Benchmark Suite to Profile High-Performance Computing (HPC) Machines
2014-11-01
... architectures. Machines now contain central processing units (CPUs), graphics processing units (GPUs), and many integrated core (MIC) architectures all ... evaluate the feasibility and applicability of a new architecture just released to the market. Researchers are often unsure how available resources will ... architectures. Having a suite of programs running on different architectures, such as GPUs, MICs, and CPUs, adds complexity and technical challenges.
NASA Astrophysics Data System (ADS)
Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.
2015-12-01
During the last years, several Grid computing centres chose virtualization as a better way to manage diverse use cases with self-consistent environments on the same bare infrastructure. The maturity of control interfaces (such as OpenNebula and OpenStack) opened the possibility to easily change the amount of resources assigned to each use case by simply turning on and off virtual machines. Some of those private clouds use, in production, copies of the Virtual Analysis Facility, a fully virtualized and self-contained batch analysis cluster capable of expanding and shrinking automatically upon need: however, resource starvation occurs frequently as expansion has to compete with other virtual machines running long-living batch jobs. Such batch nodes cannot relinquish their resources in a timely fashion: the more jobs they run, the longer it takes to drain them and shut off, and making one-job virtual machines introduces a non-negligible virtualization overhead. By improving several components of the Virtual Analysis Facility we have realized an experimental “Docked” Analysis Facility for ALICE, which leverages containers instead of virtual machines for providing performance and security isolation. We will present the techniques we have used to address practical problems, such as software provisioning through CVMFS, as well as our considerations on the maturity of containers for High Performance Computing. As the abstraction layer is thinner, our Docked Analysis Facilities may feature a more fine-grained sizing, down to single-job node containers: we will show how this approach will positively impact automatic cluster resizing by deploying lightweight pilot containers instead of replacing central queue polls.
NASA Technical Reports Server (NTRS)
Menkin, Evgeny; Juillerat, Robert
2015-01-01
With the International Space Station Program transition from assembly to utilization, focus has been placed on the optimization of essential resources. This includes resources both resupplied from the ground and also resources produced by the ISS. In an effort to improve the use of two of these, the ISS Engineering teams, led by the ISS Program Systems Engineering and Integration Office, undertook an effort to modify the techniques used to perform several key on-orbit events. The primary purposes of this endeavor were to make the ISS more efficient in the use of the Russian-supplied fuel for the propulsive attitude control system and also to minimize the impacts to available ISS power due to the positioning of the ISS solar arrays. Because the ISS solar arrays are sensitive to several factors that are present when propulsive attitude control is used, they must be operated in a manner to protect them from damage. This results in periods of time where the arrays must be positioned, rather than autonomously tracking the sun, resulting in negative impacts to power generated by the solar arrays and consumed by both the ISS core systems and payload customers. A reduction in the number and extent of the events each year that require the ISS to use propulsive attitude control simultaneously accomplishes both these goals. Each instance where the ISS solar arrays' normal sun tracking mode must be interrupted represents a need for some level of powerdown of equipment. As the magnitude of payload power requirements increases, and the efficiency of the ISS solar arrays decreases, these powerdowns caused by array positioning will likely become more significant and could begin to negatively impact the payload operations. Through efforts such as this, the total number of events each year that require positioning of the arrays to unfavorable positions for power generation, in order to protect them against other constraints, is reduced. Optimization of propulsive events and transitioning some of them to non-propulsive CMG control significantly reduces propellant usage on the ISS, leading to the reduction of the propellant delivery requirement. This results in more available upmass that can be used for delivering critical dry cargo, additional water, air, crew supplies and science experiments.
Developing infrared array controller with software real time operating system
NASA Astrophysics Data System (ADS)
Sako, Shigeyuki; Miyata, Takashi; Nakamura, Tomohiko; Motohara, Kentaro; Uchimoto, Yuka Katsuno; Onaka, Takashi; Kataza, Hirokazu
2008-07-01
Real-time capabilities are required for a controller of a large format array to reduce the dead time attributable to readout and data transfer. The real-time processing has been achieved by dedicated processors including DSP, CPLD, and FPGA devices. However, the dedicated processors have problems with memory resources, inflexibility, and high cost. Meanwhile, a recent PC has sufficient CPU and memory resources to control the infrared array and to process a large amount of frame data in real-time. In this study, we have developed an infrared array controller with a software real-time operating system (RTOS) instead of the dedicated processors. A Linux PC equipped with an RTAI extension and a dual-core CPU is used as a main computer, and one of the CPU cores is allocated to the real-time processing. A digital I/O board with DMA functions is used for an I/O interface. The signal-processing cores are integrated in the OS kernel as a real-time driver module, which is composed of two virtual devices: the clock-processor task and the frame-processor task. The array controller with the RTOS realizes complicated operations easily, flexibly, and at a low cost.
NASA Technical Reports Server (NTRS)
Demerdash, N. A.; Wang, R.; Secunde, R.
1992-01-01
A 3D finite element (FE) approach was developed and implemented for computation of global magnetic fields in a 14.3 kVA modified Lundell alternator. The essence of the new method is the combined use of magnetic vector and scalar potential formulations in 3D FEs. This approach makes it practical, using state of the art supercomputer resources, to globally analyze magnetic fields and operating performances of rotating machines which have truly 3D magnetic flux patterns. The 3D FE-computed fields and machine inductances as well as various machine performance simulations of the 14.3 kVA machine are presented in this paper and its two companion papers.
Coexistence and limiting similarity of consumer species competing for a linear array of resources.
Abrams, Peter A; Rueffler, Claus
2009-03-01
Consumer-resource systems with linear arrays of substitutable resources form the conceptual basis of much of present-day competition theory. However, most analyses of the limiting similarity of competitors have only employed consumer-resource models as a justification for using the Lotka-Volterra competition equations to represent the interaction. Unfortunately, Lotka-Volterra models cannot reflect resource exclusion via apparent competition and are poor approximations of systems with nonlogistic resource growth. We use consumer-resource models to examine the impact of exclusion of biotic resources or depletion of abiotic resources on the ability of three consumer species to coexist along a one-dimensional resource axis. For a wide range of consumer-resource models, coexistence conditions can become more restrictive with increasing niche separation of the two outer species. This occurs when the outer species are highly efficient; in this case they cause extinction or severe depletion of intermediate resources when their own niches have an intermediate level of separation. In many cases coexistence of an intermediate consumer species is prohibited when niche separation of the two outer species is moderately large, but not when it is small. Coexistence may be most likely when the intermediate species is closer to one of the two outer species, contrary to previous theory. These results suggest that competition may lead to uneven spacing of utilization curves. The implications and range of applicability of the models are discussed.
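A generic MacArthur-type consumer-resource simulation along a linear resource axis, of the kind such analyses start from, is sketched below; the Gaussian utilization curves and all parameter values are illustrative and are not taken from the paper.

import numpy as np
from scipy.integrate import solve_ivp

# Three consumers with Gaussian utilisation curves on a linear resource axis
# (MacArthur-type model; all parameter values are illustrative).
n_res, n_con = 21, 3
axis = np.linspace(0.0, 1.0, n_res)
centres = np.array([0.2, 0.5, 0.8])          # consumer niche positions
width = 0.12                                 # utilisation-curve breadth
c = np.exp(-((axis[None, :] - centres[:, None]) / width) ** 2)   # (3, 21)
r, K, conv, mort = 1.0, 1.0, 0.2, 0.25       # resource and consumer rates

def rhs(t, y):
    R, N = y[:n_res], y[n_res:]
    uptake = c * R[None, :]                  # per consumer, per resource
    dR = r * R * (1 - R / K) - (N[:, None] * uptake).sum(axis=0)
    dN = N * (conv * uptake.sum(axis=1) - mort)
    return np.concatenate([dR, dN])

y0 = np.concatenate([np.full(n_res, K), np.full(n_con, 0.01)])
sol = solve_ivp(rhs, (0.0, 500.0), y0, rtol=1e-6, atol=1e-9)
print("consumer abundances at t = 500:", sol.y[n_res:, -1].round(3))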
Turning a blind eye: the mobilization of radiology services in resource-poor regions.
Maru, Duncan Smith-Rohrberg; Schwarz, Ryan; Jason, Andrews; Basu, Sanjay; Sharma, Aditya; Moore, Christopher
2010-10-14
While primary care, obstetrical, and surgical services have started to expand in the world's poorest regions, there is only sparse literature on the essential support systems that are required to make these operations function. Diagnostic imaging is critical to effective rural healthcare delivery, yet it has been severely neglected by the academic, public, and private sectors. Currently, a large portion of the world's population lacks access to any form of diagnostic imaging. In this paper we argue that two primary imaging modalities--diagnostic ultrasound and X-Ray--are ideal for rural healthcare services and should be scaled-up in a rapid and standardized manner. Such machines, if designed for resource-poor settings, should a) be robust in harsh environmental conditions, b) function reliably in environments with unstable electricity, c) minimize radiation dangers to staff and patients, d) be operable by non-specialist providers, and e) produce high-quality images required for accurate diagnosis. Few manufacturers are producing ultrasound and X-Ray machines that meet the specifications needed for rural healthcare delivery in resource-poor regions. A coordinated effort is required to create demand sufficient for manufacturers to produce the desired machines and to ensure that the programs operating them are safe, effective, and financially feasible.
NASA Astrophysics Data System (ADS)
Veronesi, F.; Grassi, S.
2016-09-01
Wind resource assessment is a key aspect of wind farm planning since it allows to estimate the long term electricity production. Moreover, wind speed time-series at high resolution are helpful to estimate the temporal changes of the electricity generation and indispensable to design stand-alone systems, which are affected by the mismatch of supply and demand. In this work, we present a new generalized statistical methodology to generate the spatial distribution of wind speed time-series, using Switzerland as a case study. This research is based upon a machine learning model and demonstrates that statistical wind resource assessment can successfully be used for estimating wind speed time-series. In fact, this method is able to obtain reliable wind speed estimates and propagate all the sources of uncertainty (from the measurements to the mapping process) in an efficient way, i.e. minimizing computational time and load. This allows not only an accurate estimation, but the creation of precise confidence intervals to map the stochasticity of the wind resource for a particular site. The validation shows that machine learning can minimize the bias of the wind speed hourly estimates. Moreover, for each mapped location this method delivers not only the mean wind speed, but also its confidence interval, which are crucial data for planners.
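As a sketch of how a statistical model can return both an estimate and a site-specific confidence interval, the example below fits quantile gradient-boosting models to synthetic terrain features and wind speeds; it is not the model or the Swiss data used in the paper.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-ins for station data: three terrain-like predictors and a
# noisy wind speed. The paper's actual predictors and model are not reproduced.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = 4.0 + 6.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0.0, 1.0, 500)   # m/s

# one quantile model per bound gives a site-specific 90% interval plus a median
models = {q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                        n_estimators=200).fit(X, y)
          for q in (0.05, 0.5, 0.95)}

X_new = rng.uniform(0.0, 1.0, size=(3, 3))   # unsampled map locations
lo, med, hi = (models[q].predict(X_new) for q in (0.05, 0.5, 0.95))
for i in range(len(X_new)):
    print("site %d: %.1f m/s (90%% interval %.1f-%.1f)" % (i, med[i], lo[i], hi[i]))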
Calibrated thermal microscopy of the tool-chip interface in machining
NASA Astrophysics Data System (ADS)
Yoon, Howard W.; Davies, Matthew A.; Burns, Timothy J.; Kennedy, M. D.
2000-03-01
A critical parameter in predicting tool wear during machining and in accurate computer simulations of machining is the spatially-resolved temperature at the tool-chip interface. We describe the development and the calibration of a nearly diffraction-limited thermal-imaging microscope to measure the spatially-resolved temperatures during the machining of an AISI 1045 steel with a tungsten-carbide tool bit. The microscope has a target area of 0.5 mm × 0.5 mm with a spatial resolution of < 5 µm and is based on a commercial InSb 128 × 128 focal plane array with an all-reflective microscope objective. The minimum frame image acquisition time is < 1 ms. The microscope is calibrated using a standard blackbody source from the radiance temperature calibration laboratory at the National Institute of Standards and Technology, and the emissivity of the machined material is deduced from the infrared reflectivity measurements. The steady-state thermal images from the machining of 1045 steel are compared to previous determinations of tool temperatures from micro-hardness measurements and are found to be in agreement with those studies. The measured average chip temperatures are also in agreement with the temperature rise estimated from energy balance considerations. From these calculations and the agreement between the experimental and the calculated determinations of the emissivity of the 1045 steel, the standard uncertainty of the temperature measurements is estimated to be about 45 °C at 900 °C.
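The emissivity correction that links a calibrated radiance temperature to the true surface temperature can be written, in the Wien approximation at a single wavelength, as 1/T = 1/T_rad + (λ/c2)·ln ε; the short sketch below evaluates this relation with illustrative numbers (the wavelength and emissivity are assumptions, not values from the paper).

import math

C2 = 1.4388e-2                 # second radiation constant, m*K

def true_temperature(t_radiance_K, emissivity, wavelength_m):
    """Invert eps * L_bb(lambda, T) = L_bb(lambda, T_rad) in the Wien limit:
    1/T = 1/T_rad + (wavelength / C2) * ln(emissivity)."""
    return 1.0 / (1.0 / t_radiance_K + wavelength_m / C2 * math.log(emissivity))

# illustrative numbers: a mid-wave band near 4 um and an assumed emissivity
print("%.0f K" % true_temperature(1173.0, 0.7, 4.0e-6))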
Methodology for creating dedicated machine and algorithm on sunflower counting
NASA Astrophysics Data System (ADS)
Muracciole, Vincent; Plainchault, Patrick; Mannino, Maria-Rosaria; Bertrand, Dominique; Vigouroux, Bertrand
2007-09-01
In order to sell grain lots in European countries, seed industries need a government certification. This certification requires purity testing, seed counting in order to quantify specified seed species and other impurities in lots, and germination testing. These analyses are carried out within the framework of international trade according to the methods of the International Seed Testing Association. Presently these different analyses are still performed manually by skilled operators. Previous works have already shown that seeds can be characterized by around 110 visual features (morphology, colour, texture), and have presented several identification algorithms. Until now, most of the work in this domain has been computer based. The approach presented in this article is based on the design of a dedicated electronic vision machine aimed at identifying and sorting seeds. This machine is composed of an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor) and a PC bearing the graphical user interface (GUI) of the system. Its operation relies on the stroboscopic image acquisition of a seed falling in front of a camera. A first machine was designed according to this approach in order to simulate the whole vision chain (image acquisition, feature extraction, identification) under the Matlab environment. In order to implement this task in dedicated hardware, all these algorithms were developed without the use of the Matlab toolbox. The objective of this article is to present a design methodology for a special purpose identification algorithm, based on distances between groups, implemented in a dedicated hardware machine for seed counting.
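A minimal sketch of the distance-between-groups idea behind such identification algorithms follows: each species is summarized by the centroid of its feature vectors and an unknown seed is assigned to the nearest centroid. The feature values are synthetic, and the sketch ignores the FPGA/DSP partitioning described above.

import numpy as np

# Each species is summarised by the centroid of its feature vectors
# (synthetic morphology/colour/texture values); an unknown seed is assigned
# to the nearest centroid.
rng = np.random.default_rng(3)
species = {
    "sunflower": rng.normal([5.0, 2.0, 0.8], 0.2, size=(50, 3)),
    "impurity": rng.normal([2.0, 1.0, 0.3], 0.2, size=(50, 3)),
}
centroids = {name: feats.mean(axis=0) for name, feats in species.items()}

def classify(features):
    return min(centroids, key=lambda n: np.linalg.norm(features - centroids[n]))

unknown = np.array([4.8, 2.1, 0.75])        # features measured from one image
print(classify(unknown))                    # -> sunflower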
The Next Era: Deep Learning in Pharmaceutical Research
Ekins, Sean
2016-01-01
Over the past decade we have witnessed the increasing sophistication of machine learning algorithms applied in daily use from internet searches, voice recognition, social network software to machine vision software in cameras, phones, robots and self-driving cars. Pharmaceutical research has also seen its fair share of machine learning developments. For example, applying such methods to mine the growing datasets that are created in drug discovery not only enables us to learn from the past but to predict a molecule’s properties and behavior in future. The latest machine learning algorithm garnering significant attention is deep learning, which is an artificial neural network with multiple hidden layers. Publications over the last 3 years suggest that this algorithm may have advantages over previous machine learning methods and offer a slight but discernable edge in predictive performance. The time has come for a balanced review of this technique but also to apply machine learning methods such as deep learning across a wider array of endpoints relevant to pharmaceutical research for which the datasets are growing such as physicochemical property prediction, formulation prediction, absorption, distribution, metabolism, excretion and toxicity (ADME/Tox), target prediction and skin permeation, etc. We also show that there are many potential applications of deep learning beyond cheminformatics. It will be important to perform prospective testing (which has been carried out rarely to date) in order to convince skeptics that there will be benefits from investing in this technique. PMID:27599991
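In the sense used above, a deep model is simply a neural network with several hidden layers; the sketch below trains one on synthetic descriptor data as a stand-in for a property-prediction task (not any of the published pharmaceutical models).

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic descriptors and property values standing in for real data.
rng = np.random.default_rng(0)
X = rng.random((1000, 64))                                   # descriptors
y = X[:, :8].sum(axis=1) + 0.1 * rng.standard_normal(1000)   # property

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(128, 64, 32),       # three hidden layers
                     max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("held-out R^2: %.2f" % model.score(X_te, y_te))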
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-18
... Conservation Area (NCA), addressed in the September 2008 Resource Management Plan (RMP) and Record of Decision... an array of management actions designed to conserve natural and cultural resources on BLM... cultural resources within the NCA as described in the NCA Management Plan; to allow for safe public...
Object Orientated Simulation on Transputer Arrays Using Time Warp
1989-12-01
Transputer based Machines, Grenoble, Sept 14-16 1987, Ed. Traian Muntean. [3] Muntean T., "PARX operating system kernel; application to Minix", Esprit P1085. [Table-of-contents residue: Simulation; Time Warp Simulation; 3.1 Rollback Mechanism; 3.2 Simulation Output; ...; 5 Low Level Operations; 5.1 Global Virtual Time Estimation.]
Permanent magnet edge-field quadrupole
Tatchyn, R.O.
1997-01-21
Planar permanent magnet edge-field quadrupoles for use in particle accelerating machines and in insertion devices designed to generate spontaneous or coherent radiation from moving charged particles are disclosed. The invention comprises four magnetized rectangular pieces of permanent magnet material with substantially similar dimensions arranged into two planar arrays situated to generate a field with a substantially dominant quadrupole component in regions close to the device axis. 10 figs.
Permanent magnet edge-field quadrupole
Tatchyn, Roman O.
1997-01-01
Planar permanent magnet edge-field quadrupoles for use in particle accelerating machines and in insertion devices designed to generate spontaneous or coherent radiation from moving charged particles are disclosed. The invention comprises four magnetized rectangular pieces of permanent magnet material with substantially similar dimensions arranged into two planar arrays situated to generate a field with a substantially dominant quadrupole component in regions close to the device axis.
Man-Machine Impact of Technology on Coast Guard Missions and Systems
1979-12-01
[Figure residue: chart of the cost of random-access memory versus year, MOS RAM (bits/chip).] ... of these advances will most likely be accomplished through focal plane arrays of detectors, charge coupled device readout techniques for the video
Sputter coating of microspherical substrates by levitation
Lowe, A.T.; Hosford, C.D.
Microspheres are substantially uniformly coated with metals or nonmetals by simultaneously levitating them and sputter coating them at total chamber pressures less than 1 torr. A collimated hole structure comprising a parallel array of upwardly projecting individual gas outlets is machined out to form a dimple. Glass microballoons, which are particularly useful in laser fusion applications, can be substantially uniformly coated using the coating method and apparatus.
LHCb experience with running jobs in virtual machines
NASA Astrophysics Data System (ADS)
McNab, A.; Stagni, F.; Luzzi, C.
2015-12-01
The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch queue based sites. We describe our operational experiences in running production on VM based sites managed using Vcycle/OpenStack, Vac, and HTCondor Vacuum. Finally we show how our use of these resources is monitored using Ganglia and DIRAC.
Notes on a storage manager for the Clouds kernel
NASA Technical Reports Server (NTRS)
Pitts, David V.; Spafford, Eugene H.
1986-01-01
The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.
Detecting Abnormal Machine Characteristics in Cloud Infrastructures
NASA Technical Reports Server (NTRS)
Bhaduri, Kanishka; Das, Kamalika; Matthews, Bryan L.
2011-01-01
In the cloud computing environment resources are accessed as services rather than as a product. Monitoring this system for performance is crucial because of the typical pay-per-use packages bought by the users for their jobs. With the huge number of machines currently in the cloud system, it is often extremely difficult for system administrators to keep track of all machines using distributed monitoring programs such as Ganglia, which lack system health assessment and summarization capabilities. To overcome this problem, we propose a technique for automated anomaly detection using machine performance data in the cloud. Our algorithm is entirely distributed and runs locally on each computing machine in the cloud in order to rank the machines by their anomalous behavior for given jobs. There is no need to centralize any of the performance data for the analysis, and at the end of the analysis our algorithm generates error reports, thereby allowing the system administrators to take corrective actions. Experiments performed on real data sets collected for different jobs validate the fact that our algorithm has a low overhead for tracking anomalous machines in a cloud infrastructure.
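A minimal sketch of the general idea, assuming synthetic metrics and a simple robust z-score rather than the authors' actual algorithm: each machine scores its latest performance sample against its own history locally, and only the scalar scores are gathered to rank machines by anomalousness.

```python
# Sketch of local, per-machine anomaly scoring followed by a global ranking.
import numpy as np

def local_anomaly_score(metrics):
    """Median/MAD z-score of the latest sample against this machine's history."""
    history, latest = metrics[:-1], metrics[-1]
    med = np.median(history, axis=0)
    mad = np.median(np.abs(history - med), axis=0) + 1e-9
    return float(np.max(np.abs(latest - med) / mad))   # worst metric dominates

rng = np.random.default_rng(2)
scores = {}
for machine_id in range(5):
    cpu_mem_io = rng.normal(size=(100, 3))              # per-machine metric history
    if machine_id == 3:
        cpu_mem_io[-1] += 8.0                            # inject an anomaly
    scores[machine_id] = local_anomaly_score(cpu_mem_io)

ranking = sorted(scores, key=scores.get, reverse=True)   # most anomalous first
print(ranking)
```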
Breadboard linear array scan imager using LSI solid-state technology
NASA Technical Reports Server (NTRS)
Tracy, R. A.; Brennan, J. A.; Frankel, D. G.; Noll, R. E.
1976-01-01
The performance of large scale integration photodiode arrays in a linear array scan (pushbroom) breadboard was evaluated for application to multispectral remote sensing of the earth's resources. The technical approach, implementation, and test results of the program are described. Several self scanned linear array visible photodetector focal plane arrays were fabricated and evaluated in an optical bench configuration. A 1728-detector array operating in four bands (0.5 - 1.1 micrometer) was evaluated for noise, spectral response, dynamic range, crosstalk, MTF, noise equivalent irradiance, linearity, and image quality. Other results include image artifact data, temporal characteristics, radiometric accuracy, calibration experience, chip alignment, and array fabrication experience. Special studies and experimentation were included in long array fabrication and real-time image processing for low-cost ground stations, including the use of computer image processing. High quality images were produced and all objectives of the program were attained.
Novel method for fabrication of monolithic multi-cavity molds and wafer optics
NASA Astrophysics Data System (ADS)
Wielandts, Marc; Wielandts, Remi
2015-10-01
One-lens-at-a-time, on-axis diamond turning or grinding of lens arrays with a large number of lenses is conventionally impractical because of the difficulty of shifting and balancing the substrate for each lens position. A novel method for automatic indexing was developed. This method uses an innovative mechatronic tooling (patent pending) that allows dynamic indexing at constant work spindle speed for maximum productivity and thermal stability of the work spindle while the balancing condition is maintained. In this paper we compare the machining capabilities of this method to free-form machining techniques, discuss the main issues, present the concept and design of the working prototype and specific test bed, and present the results of the first cutting tests.
Optimization-based manufacturing scheduling with multiple resources and setup requirements
NASA Astrophysics Data System (ADS)
Chen, Dong; Luh, Peter B.; Thakur, Lakshman S.; Moreno, Jack, Jr.
1998-10-01
The increasing demand for on-time delivery and low price forces manufacturers to seek effective schedules to improve coordination of multiple resources and to reduce product internal costs associated with labor, setup and inventory. This study describes the design and implementation of a scheduling system for J. M. Product Inc., whose manufacturing is characterized by the need to simultaneously consider machines and operators, where an operator may attend several operations at the same time, and by the presence of machines requiring significant setup times. Scheduling problems with these characteristics are typical for many manufacturers, very difficult to handle, and have not been adequately addressed in the literature. In this study, both machines and operators are modeled as resources with finite capacities to obtain efficient coordination between them, and an operator's time can be shared by several operations at the same time to make full use of the operator. Setups are explicitly modeled following our previous work, with additional penalties on excessive setups to reduce setup costs and avoid possible scrap. An integer formulation with a separable structure is developed to maximize on-time delivery of products, low inventory and a small number of setups. Within the Lagrangian relaxation framework, the problem is decomposed into individual subproblems that are effectively solved by using dynamic programming with the additional penalties embedded in state transitions. A heuristic is then developed to obtain a feasible schedule, following our previous work, with a new mechanism to satisfy operator capacity constraints. The method has been implemented using the object-oriented programming language C++ with a user-friendly interface, and numerical testing shows that the method generates high quality schedules in a timely fashion. Through simultaneous consideration of machines and operators, the two resource types are well coordinated to facilitate the smooth flow of parts through the system. The explicit modeling of setups and the associated penalties lets parts with the same setup requirements be clustered together to avoid excessive setups.
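To make the Lagrangian relaxation idea concrete, here is a toy sketch under strong simplifying assumptions (single-slot jobs, one machine, capacity one per slot, no operators or setups, a simple tardiness cost, and a basic subgradient update); it is not the paper's formulation, only an illustration of dualising coupling constraints and letting each job solve its own subproblem.

```python
# Toy Lagrangian relaxation for scheduling: dualise capacity constraints,
# solve per-job subproblems against the slot prices, update the multipliers.
import numpy as np

T = 10                                        # time horizon (slots)
due = np.array([2, 2, 3, 5, 7])               # due slots for 5 jobs
w = np.array([3.0, 1.0, 2.0, 1.0, 2.0])       # tardiness weights
lam = np.zeros(T)                             # multipliers for capacity <= 1

def tardiness(j, t):
    return w[j] * max(0, t - due[j])

for it in range(100):
    # Each job's subproblem: pick the slot minimising tardiness + slot price.
    choice = [min(range(T), key=lambda t: tardiness(j, t) + lam[t])
              for j in range(len(due))]
    usage = np.bincount(choice, minlength=T)
    # Subgradient step on the relaxed capacity constraints, projected to >= 0.
    lam = np.maximum(0.0, lam + (1.0 / (it + 1)) * (usage - 1))

print("slot chosen per job:", choice)
print("slot usage:", usage.tolist())
```

The relaxed solution is not guaranteed to be feasible, which is why the study above follows the relaxation with a heuristic to build an implementable schedule.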
An overview of the situation in radiotherapy with emphasis on the developing countries.
Hanson, G P; Stjernswärd, J; Nofal, M; Durosinmi-Etti, F
1990-11-01
Radiotherapy services are closely linked to the level of medical care which, in turn, is an important component of the overall health care program, with its development related to social, economic, and educational factors. As a basis for understanding the situation regarding adequate coverage of the population by radiotherapy services, general information about the world population (currently 5 billion), age distribution, frequency of cancer occurrence, and causes of death is presented. For an appreciation of the obstacles that must be overcome, the situation with regard to Gross National Product (GNP), transfer of economic resources, and per capita expenditures for health services is shown. For example, in the developing world, most countries spend less than 5% of their GNP for health, and on a macro scale at least 20 billion U.S. dollars per year are being transferred from the poor nations of the southern hemisphere to the northern hemisphere. Information about the wide range of population coverage with radiotherapy resources and the trend regarding high-energy radiotherapy machines is presented. For example, in North America (USA) there are six high-energy machines for each one million persons, and each machine is used to treat about 230 new patients per year. In other parts of the world, such as large areas of Africa and South-East Asia, there may only be one high-energy radiotherapy machine for 20 to 40 million people, and one machine may be used to treat more than 600 new patients per year. Many cancer patients have no access to radiotherapy services. When estimates of the need for radiotherapy services in the developing world as a consequence of cancer incidence are compared with the current health expenditures, it is concluded that a combined effort of national authorities, donor and financial institutions, professional and scientific societies, and international organizations is required. The knowledge, skills, and technology are available in many excellent radiotherapy centers throughout the world. The key issues are priority and the commitment of sufficient resources.
The Frictionless Data Package: Data Containerization for Automated Scientific Workflows
NASA Astrophysics Data System (ADS)
Shepherd, A.; Fils, D.; Kinkade, D.; Saito, M. A.
2017-12-01
As cross-disciplinary geoscience research increasingly relies on machines to discover and access data, one of the critical questions facing data repositories is how data and supporting materials should be packaged for consumption. Traditionally, data repositories have relied on a human's involvement throughout discovery and access workflows. This human could assess fitness for purpose by reading loosely coupled, unstructured information from web pages and documentation. In attempts to shorten the time to science and to access data resources across many disciplines, expectations for machines to mediate the process of discovery and access are challenging data repository infrastructure. The challenge is to find ways to deliver data and information that enable machines to make better decisions by enabling them to understand the data and metadata of many data types. Additionally, once machines have recommended a data resource as relevant to an investigator's needs, the data resource should be easy to integrate into that investigator's toolkits for analysis and visualization. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) supports NSF-funded OCE and PLR investigators with their project's data management needs. These needs involve a number of varying data types, some of which require multiple files with differing formats. Presently, BCO-DMO has described these data types and the important relationships between the type's data files through human-readable documentation on web pages. For machines directly accessing data files from BCO-DMO, this documentation could be overlooked and lead to misinterpreting the data. Instead, BCO-DMO is exploring the idea of data containerization, or packaging data and related information for easier transport, interpretation, and use. In researching the landscape of data containerization, the Frictionless Data Package (http://frictionlessdata.io/) provides a number of valuable advantages over similar solutions. This presentation will focus on these advantages and how the Frictionless Data Package addresses a number of real-world use cases for data discovery, access, analysis and visualization.
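As an illustration of what such containerization looks like, the sketch below builds a hypothetical minimal Data Package descriptor (a `datapackage.json`) for a multi-file oceanographic data type. The file names, field names and resource layout are invented for illustration and are not BCO-DMO's actual packaging.

```python
# Sketch of a minimal Frictionless Data Package descriptor for a data type
# that spans multiple files; all names and fields here are hypothetical.
import json

descriptor = {
    "name": "example-ctd-cast",
    "title": "Example CTD cast with accompanying documentation",
    "resources": [
        {
            "name": "ctd-profiles",
            "path": "data/ctd_profiles.csv",
            "format": "csv",
            "schema": {
                "fields": [
                    {"name": "depth_m", "type": "number"},
                    {"name": "temperature_c", "type": "number"},
                    {"name": "salinity_psu", "type": "number"},
                ]
            },
        },
        {"name": "cruise-report", "path": "docs/cruise_report.pdf", "format": "pdf"},
    ],
}

print(json.dumps(descriptor, indent=2))   # would normally be saved as datapackage.json
```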
Exploring the potential of machine learning to break deadlock in convection parameterization
NASA Astrophysics Data System (ADS)
Pritchard, M. S.; Gentine, P.
2017-12-01
We explore the potential of modern machine learning tools (via TensorFlow) to replace the parameterization of deep convection in climate models. Our strategy begins by generating a large (1 Tb) training dataset from time-step level (30-min) output harvested from a one-year integration of a zonally symmetric, uniform-SST aquaplanet integration of the SuperParameterized Community Atmosphere Model (SPCAM). We harvest the inputs and outputs connecting each of SPCAM's 8,192 embedded cloud-resolving model (CRM) arrays to its host climate model's arterial thermodynamic state variables to afford 143M independent training instances. We demonstrate that this dataset is sufficiently large to induce preliminary convergence for neural network prediction of the desired outputs of SP, i.e. CRM-mean convective heating and moistening profiles. Sensitivity of the machine learning convergence to the nuances of the TensorFlow implementation is discussed, as well as results from pilot tests of the neural network operating inline within SPCAM as a replacement for the (super)parameterization of convection.
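A skeletal TensorFlow/Keras version of this kind of column-wise mapping is sketched below: a dense network from a column's thermodynamic state profile to heating and moistening profiles. The layer sizes, level counts and data are placeholders, not the SPCAM configuration or training set described above.

```python
# Sketch: dense network mapping an input column state to output tendency
# profiles, trained on placeholder data (stand-in for the 1 Tb SPCAM set).
import numpy as np
import tensorflow as tf

n_in, n_out = 2 * 30, 2 * 30        # e.g. T and q on 30 levels in, heating/moistening out
X = np.random.randn(10000, n_in).astype("float32")   # placeholder inputs
Y = np.random.randn(10000, n_out).astype("float32")  # placeholder targets

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_in,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(n_out),   # linear output for tendency profiles
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, Y, batch_size=1024, epochs=2, validation_split=0.1, verbose=0)
```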
Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)
NASA Technical Reports Server (NTRS)
Dalton, Shelly D.; Daley, Philip C.
1988-01-01
As the hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether or not, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.
Imaging nanoscale lattice variations by machine learning of x-ray diffraction microscopy data
Laanait, Nouamane; Zhang, Zhan; Schlepütz, Christian M.
2016-08-09
In this paper, we present a novel methodology based on machine learning to extract lattice variations in crystalline materials, at the nanoscale, from an x-ray Bragg diffraction-based imaging technique. By employing a full-field microscopy setup, we capture real space images of materials, with imaging contrast determined solely by the x-ray diffracted signal. The data sets that emanate from this imaging technique are a hybrid of real space information (image spatial support) and reciprocal lattice space information (image contrast), and are intrinsically multidimensional (5D). By a judicious application of established unsupervised machine learning techniques and multivariate analysis to this multidimensional data cube, we show how to extract features that can be ascribed physical interpretations in terms of common structural distortions, such as lattice tilts and dislocation arrays. Finally, we demonstrate this 'big data' approach to x-ray diffraction microscopy by identifying structural defects present in an epitaxial ferroelectric thin-film of lead zirconate titanate.
Imaging nanoscale lattice variations by machine learning of x-ray diffraction microscopy data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laanait, Nouamane; Zhang, Zhan; Schlepütz, Christian M.
In this paper, we present a novel methodology based on machine learning to extract lattice variations in crystalline materials, at the nanoscale, from an x-ray Bragg diffraction-based imaging technique. By employing a full-field microscopy setup, we capture real space images of materials, with imaging contrast determined solely by the x-ray diffracted signal. The data sets that emanate from this imaging technique are a hybrid of real space information (image spatial support) and reciprocal lattice space information (image contrast), and are intrinsically multidimensional (5D). By a judicious application of established unsupervised machine learning techniques and multivariate analysis to this multidimensional data cube, we show how to extract features that can be ascribed physical interpretations in terms of common structural distortions, such as lattice tilts and dislocation arrays. Finally, we demonstrate this 'big data' approach to x-ray diffraction microscopy by identifying structural defects present in an epitaxial ferroelectric thin-film of lead zirconate titanate.
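The unsupervised step described in the two records above can be sketched, in a generic way, as unfolding a multidimensional data cube into a (pixels x reciprocal-space samples) matrix and extracting dominant components. The array sizes below are placeholders, not the actual experiment, and PCA stands in for whatever multivariate method the authors used.

```python
# Sketch: unfold a 5-D imaging data cube and extract per-pixel component maps.
import numpy as np
from sklearn.decomposition import PCA

cube = np.random.rand(64, 64, 5, 5, 9)           # y, x, plus three scan dimensions
n_pix = cube.shape[0] * cube.shape[1]
unfolded = cube.reshape(n_pix, -1)               # one row per real-space pixel

pca = PCA(n_components=4)
scores = pca.fit_transform(unfolded)             # per-pixel component weights
maps = scores.reshape(64, 64, 4)                 # spatial maps of each component
print(pca.explained_variance_ratio_)
```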
Experimental Investigation and Optimization of Response Variables in WEDM of Inconel - 718
NASA Astrophysics Data System (ADS)
Karidkar, S. S.; Dabade, U. A.
2016-02-01
Effective utilisation of Wire Electrical Discharge Machining (WEDM) technology is a challenge for modern manufacturing industries. New materials with high strength and capability are continually being developed to fulfil customers' needs. Inconel 718 is one such material, extensively used in aerospace applications such as gas turbines, rocket motors and spacecraft, as well as in nuclear reactors and pumps. This paper deals with the experimental investigation of optimal machining parameters in WEDM for surface roughness, kerf width and dimensional deviation using DoE, namely the Taguchi methodology with an L9 orthogonal array. Keeping the peak current constant at 70 A, the effect of the other process parameters on the above response variables was analysed. The experimental results were statistically analysed using Minitab-16 software. Analysis of variance (ANOVA) shows pulse-on time as the most influential parameter, followed by wire tension, whereas spark gap set voltage is observed to be non-influential. The multi-objective optimization technique, Grey Relational Analysis (GRA), gives optimal machining parameters of pulse-on time 108 machine units, spark gap set voltage 50 V and wire tension 12 gm for the response variables considered in the experimental analysis.
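For readers unfamiliar with Grey Relational Analysis, the following generic sketch shows the usual steps for smaller-the-better responses (normalisation, grey relational coefficients, grade per run). The numerical values are illustrative placeholders, not the paper's measurements.

```python
# Generic Grey Relational Analysis over an L9 set of smaller-the-better responses.
import numpy as np

responses = np.array([          # rows = L9 runs; cols = Ra, kerf width, deviation
    [2.1, 0.30, 0.012], [1.8, 0.28, 0.015], [2.4, 0.33, 0.010],
    [1.9, 0.29, 0.011], [2.0, 0.31, 0.014], [2.2, 0.27, 0.013],
    [1.7, 0.32, 0.016], [2.3, 0.30, 0.012], [2.0, 0.29, 0.010],
])

# Smaller-the-better normalisation to [0, 1]
norm = (responses.max(axis=0) - responses) / (responses.max(axis=0) - responses.min(axis=0))
delta = 1.0 - norm                               # deviation from the ideal value
zeta = 0.5                                       # distinguishing coefficient
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = coeff.mean(axis=1)                       # grey relational grade per run
print("best run (1-indexed):", int(grade.argmax()) + 1)
```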
NASA Astrophysics Data System (ADS)
Ghani, Jaharah A.; Mohd Rodzi, Mohd Nor Azmi; Zaki Nuawi, Mohd; Othman, Kamal; Rahman, Mohd. Nizam Ab.; Haron, Che Hassan Che; Deros, Baba Md
2011-01-01
Machining is one of the most important manufacturing processes in modern industry, especially for finishing an automotive component after primary manufacturing processes such as casting and forging. In this study, the turning parameters of cutting environment (dry without air, normal air and chilled air), cutting speed, and feed rate are evaluated using a Taguchi optimization methodology. An L27 (3^13) orthogonal array, signal-to-noise (S/N) ratio and analysis of variance (ANOVA) are employed to analyze the effect of these turning parameters on the performance of a coated carbide tool. The results show that tool life is affected by the cutting speed, feed rate and cutting environment with contributions of 38%, 32% and 27% respectively. For surface roughness, the feed rate significantly controls the machined surface produced, contributing 77%, followed by the cutting environment at 19%. The cutting speed is found to be insignificant in controlling the machined surface produced. The study shows that the cutting environment factor should be considered in order to produce longer tool life as well as to obtain a good machined surface.
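The two signal-to-noise ratios typically used in this kind of Taguchi analysis are sketched below: larger-the-better (e.g. tool life) and smaller-the-better (e.g. surface roughness). The replicate values are placeholders, not the experimental data above.

```python
# Standard Taguchi S/N ratio formulas.
import numpy as np

def sn_larger_the_better(y):
    """S/N = -10 log10( mean(1/y^2) ), e.g. for tool life."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_the_better(y):
    """S/N = -10 log10( mean(y^2) ), e.g. for surface roughness."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

print(sn_larger_the_better([22.0, 24.5, 23.1]))   # tool life replicates (min)
print(sn_smaller_the_better([0.82, 0.79, 0.85]))  # Ra replicates (micron)
```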
NASA Astrophysics Data System (ADS)
Bai, Ting; Sun, Kaimin; Deng, Shiquan; Chen, Yan
2018-03-01
High resolution image change detection is one of the key technologies of remote sensing application, and is of great significance for resource survey, environmental monitoring, fine agriculture, military mapping and battlefield environment detection. In this paper, for high-resolution satellite imagery, Random Forest (RF), Support Vector Machine (SVM), Deep Belief Network (DBN), and Adaboost models were established to verify the possibility of different machine learning applications in change detection. In order to compare the detection accuracy of the four machine learning methods, we applied them to two high-resolution images. The results show that SVM has higher overall accuracy with small samples compared to RF, Adaboost, and DBN for binary and from-to change detection. With an increase in the number of samples, RF has higher overall accuracy than Adaboost, SVM and DBN.
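A small sketch of this kind of comparison using scikit-learn on synthetic pixel features is shown below; the study itself used high-resolution imagery and also a deep belief network, which scikit-learn does not provide, so only RF, SVM and Adaboost are compared here.

```python
# Sketch: compare three classifiers on a synthetic change / no-change problem
# with a deliberately small training sample.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=200, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0),
    "Adaboost": AdaBoostClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "overall accuracy:", round(model.score(X_te, y_te), 3))
```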
Humanizing machines: Anthropomorphization of slot machines increases gambling.
Riva, Paolo; Sacchi, Simona; Brambilla, Marco
2015-12-01
Do people gamble more on slot machines if they think that they are playing against humanlike minds rather than mathematical algorithms? Research has shown that people have a strong cognitive tendency to imbue humanlike mental states to nonhuman entities (i.e., anthropomorphism). The present research tested whether anthropomorphizing slot machines would increase gambling. Four studies manipulated slot machine anthropomorphization and found that exposing people to an anthropomorphized description of a slot machine increased gambling behavior and reduced gambling outcomes. Such findings emerged using tasks that focused on gambling behavior (Studies 1 to 3) as well as in experimental paradigms that included gambling outcomes (Studies 2 to 4). We found that gambling outcomes decrease because participants primed with the anthropomorphic slot machine gambled more (Study 4). Furthermore, we found that high-arousal positive emotions (e.g., feeling excited) played a role in the effect of anthropomorphism on gambling behavior (Studies 3 and 4). Our research indicates that the psychological process of gambling-machine anthropomorphism can be advantageous for the gaming industry; however, this may come at great expense for gamblers' (and their families') economic resources and psychological well-being. (c) 2015 APA, all rights reserved.
iSDS: a self-configurable software-defined storage system for enterprise
NASA Astrophysics Data System (ADS)
Chen, Wen-Shyen Eric; Huang, Chun-Fang; Huang, Ming-Jen
2018-01-01
Storage is one of the most important aspects of IT infrastructure for various enterprises. But enterprises are interested in more than just data storage; they are interested in such things as more reliable data protection, higher performance and reduced resource consumption. Traditional enterprise-grade storage satisfies these requirements at high cost, because it is usually designed and constructed with customised field-programmable gate arrays to achieve high-end functionality. However, in this ever-changing environment, enterprises request storage with more flexible deployment and at lower cost. Moreover, the rise of new application fields, such as social media, big data, video streaming services etc., makes operational tasks for administrators more complex. In this article, a new storage system called intelligent software-defined storage (iSDS), based on software-defined storage, is described. More specifically, this approach advocates using software to replace features provided by traditional customised chips. To alleviate the management burden, it also advocates applying machine learning to automatically configure storage to meet the dynamic requirements of workloads running on the storage. This article focuses on the analysis feature of the iSDS cluster by detailing its architecture and design.
NASA Astrophysics Data System (ADS)
Yang, Shuangming; Deng, Bin; Wang, Jiang; Li, Huiyan; Liu, Chen; Fietkiewicz, Chris; Loparo, Kenneth A.
2017-01-01
Real-time estimation of dynamical characteristics of thalamocortical cells, such as the dynamics of ion channels and membrane potentials, is useful and essential in the study of the thalamus in the Parkinsonian state. However, measuring the dynamical properties of ion channels is extremely challenging experimentally and even impossible in clinical applications. This paper presents and evaluates a real-time estimation system for thalamocortical hidden properties. For the sake of efficiency, we use a field programmable gate array (FPGA) for strictly hardware-based computation and algorithm optimization. In the proposed system, the FPGA-based unscented Kalman filter is implemented into a conductance-based thalamocortical (TC) neuron model. Since the complexity of the TC neuron model restrains its hardware implementation in a parallel structure, a cost-efficient model is proposed to reduce the resource cost while retaining the relevant ionic dynamics. Experimental results demonstrate the real-time capability to estimate thalamocortical hidden properties with high precision under both normal and Parkinsonian states. While it is applied here to estimate the hidden properties of the thalamus and explore the mechanism of the Parkinsonian state, the proposed method can also be useful in the dynamic clamp technique of electrophysiological experiments, neural control engineering and brain-machine interface studies.
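The estimation idea can be illustrated in software with an unscented Kalman filter recovering a hidden state from noisy voltage measurements. The sketch below uses the filterpy package and a simplified two-state FitzHugh-Nagumo-style model as a stand-in; it is not the paper's FPGA design, TC neuron model, or noise statistics.

```python
# Sketch: UKF (via filterpy) estimating a hidden recovery variable of a simple
# neuron model from noisy membrane-potential measurements.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.1

def fx(x, dt):
    """Simplified FitzHugh-Nagumo-style dynamics: x = [v, w]."""
    v, w = x
    dv = v - v**3 / 3.0 - w + 0.5
    dw = 0.08 * (v + 0.7 - 0.8 * w)
    return np.array([v + dt * dv, w + dt * dw])

def hx(x):
    return np.array([x[0]])            # only the membrane potential is measured

points = MerweScaledSigmaPoints(n=2, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([-1.0, 1.0])
ukf.P *= 0.5
ukf.R = np.array([[0.05]])
ukf.Q = np.eye(2) * 1e-4

truth = np.array([-1.2, 0.9])
for _ in range(200):
    truth = fx(truth, dt)
    z = truth[0] + np.random.normal(scale=0.2)   # noisy voltage measurement
    ukf.predict()
    ukf.update(np.array([z]))
print("estimated hidden variable w:", round(float(ukf.x[1]), 3),
      "true:", round(float(truth[1]), 3))
```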
Potential Flow Theory and Operation Guide for the Panel Code PMARC. Version 14
NASA Technical Reports Server (NTRS)
Ashby, Dale L.
1999-01-01
The theoretical basis for PMARC, a low-order panel code for modeling complex three-dimensional bodies in potential flow, is outlined. PMARC can be run on a wide variety of computer platforms, including desktop machines, workstations, and supercomputers. Execution times for PMARC vary tremendously depending on the computer resources used, but typically range from several minutes for simple or moderately complex cases to several hours for very large complex cases. Several of the advanced features currently included in the code, such as internal flow modeling, boundary layer analysis, and time-dependent flow analysis, including problems involving relative motion, are discussed in some detail. The code is written in Fortran77, using adjustable-size arrays so that it can be easily redimensioned to match problem requirements and computer hardware constraints. An overview of the program input is presented. A detailed description of the input parameters is provided in the appendices. PMARC results for several test cases are presented along with analytic or experimental data, where available. The input files for these test cases are given in the appendices. PMARC currently supports plotfile output formats for several commercially available graphics packages. The supported graphics packages are Plot3D, Tecplot, and PmarcViewer.
NORMATIVE SCIENCE: A CORRUPTING INFLUENCE IN ECOLOGICAL AND NATURAL RESOURCE POLICY
Effectively resolving the typical ecological or natural resource policy issue requires providing an array of scientific information to decision-makers. The ability of scientists (and scientific information) to constructively inform ecological policy deliberations has been dimi...
Pyroelectric IR sensor arrays for fall detection in the older population
NASA Astrophysics Data System (ADS)
Sixsmith, A.; Johnson, N.; Whatmore, R.
2005-09-01
Uncooled pyroelectric sensor arrays have been studied over many years for their uses in thermal imaging applications. These arrays only detect changes in IR flux, and so systems based upon them are very good at detecting movements of people in a scene without sensing the background if they are used in staring mode. Relatively low element-count arrays (16 x 16) can be used for a variety of people-sensing applications, including people counting (for safety applications), queue monitoring, etc. With appropriate signal processing such systems can also be used for the detection of particular events, such as a person falling over. There is a considerable need for automatic fall detection amongst older people, but there are important limitations to some of the current and emerging technologies available for this. Simple sensors, such as 1- or 2-element pyroelectric infra-red sensors, provide crude data that is difficult to interpret; devices worn on the person, such as wrist communicators and motion detectors, have potential but are reliant on the person being able and willing to wear the device; video cameras may be seen as intrusive and require considerable human resources to monitor activity, while machine interpretation of camera images is complex and may be difficult in this application area. The use of a pyroelectric thermal array sensor was seen to have a number of potential benefits. The sensor is wall-mounted and does not require the user to wear a device. It enables detailed analysis of a subject's motion to be achieved locally, within the detector, using only a modest processor. This is possible due to the relative ease with which data from the sensor can be interpreted compared to the data generated by alternative sensors such as video devices. In addition to the cost-effectiveness of this solution, it was felt that the lack of detail in the low-level data, together with the elimination of the need to transmit data outside the detector, would help to avert feelings of intrusiveness on the part of the end-user. The main benefits of this type of technology would be for older people who spend time alone in unsupervised environments. This would include people living alone in ordinary housing or in sheltered accommodation (apartment complexes for older people with a local warden) and non-communal areas in residential/nursing home environments (e.g. bedrooms and ensuite bathrooms and toilets). This paper will review the development of the array, the pyroelectric ceramic material upon which it is based, and the system capabilities. It will present results from the Framework 5 SIMBAD project, which used the system to monitor the movements of elderly people over a considerable period of time.
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Grondona, M
2002-12-19
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Grondona, M
2003-04-22
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling, and stream copy modules. This paper presents an overview of the SLURM architecture and functionality.
Parents, Are You Aware of the Commercial Activity in Your School? You Should Be.
ERIC Educational Resources Information Center
Molnar, Alex
2003-01-01
Explains that financially strapped and resource-poor schools often accept corporate-sponsored educational materials and ads, especially when they come with free computers or other resources, discussing how corporations use schools to boost brand loyalty; how commercialism undermines the health of students (e.g., soda machines in schools, which…
ERDC MSRC (Major Shared Resource Center) Resource. Spring 2008
2008-01-01
... obtained from ADCIRC results. The alpha test was performed on the Cray XT3 machine (Sapphire) at ERDC and the IBM P575+ system (Babbage) at the ... [Photo caption: Scotty Swillie (center) and Charles Ray (far right) were part of the team that constructed the DoD HPCMP booth for the Conference.]
Utilization and Monetization of Healthcare Data in Developing Countries.
Bram, Joshua T; Warwick-Clark, Boyd; Obeysekare, Eric; Mehta, Khanjan
2015-06-01
In developing countries with fledgling healthcare systems, the efficient deployment of scarce resources is paramount. Comprehensive community health data and machine learning techniques can optimize the allocation of resources to areas, epidemics, or populations most in need of medical aid or services. However, reliable data collection in low-resource settings is challenging due to a wide range of contextual, business-related, communication, and technological factors. Community health workers (CHWs) are trusted community members who deliver basic health education and services to their friends and neighbors. While an increasing number of programs leverage CHWs for last mile data collection, a fundamental challenge to such programs is the lack of tangible incentives for the CHWs. This article describes potential applications of health data in developing countries and reviews the challenges to reliable data collection. Four practical CHW-centric business models that provide incentive and accountability structures to facilitate data collection are presented. Creating and strengthening the data collection infrastructure is a prerequisite for big data scientists, machine learning experts, and public health administrators to ultimately elevate and transform healthcare systems in resource-poor settings.
Utilization and Monetization of Healthcare Data in Developing Countries
Bram, Joshua T.; Warwick-Clark, Boyd; Obeysekare, Eric; Mehta, Khanjan
2015-01-01
Abstract In developing countries with fledgling healthcare systems, the efficient deployment of scarce resources is paramount. Comprehensive community health data and machine learning techniques can optimize the allocation of resources to areas, epidemics, or populations most in need of medical aid or services. However, reliable data collection in low-resource settings is challenging due to a wide range of contextual, business-related, communication, and technological factors. Community health workers (CHWs) are trusted community members who deliver basic health education and services to their friends and neighbors. While an increasing number of programs leverage CHWs for last mile data collection, a fundamental challenge to such programs is the lack of tangible incentives for the CHWs. This article describes potential applications of health data in developing countries and reviews the challenges to reliable data collection. Four practical CHW-centric business models that provide incentive and accountability structures to facilitate data collection are presented. Creating and strengthening the data collection infrastructure is a prerequisite for big data scientists, machine learning experts, and public health administrators to ultimately elevate and transform healthcare systems in resource-poor settings. PMID:26487984
A Review of Extra-Terrestrial Mining Robot Concepts
NASA Technical Reports Server (NTRS)
Mueller, Robert P.; Van Susante, Paul J.
2011-01-01
Outer space contains a vast amount of resources that offer virtually unlimited wealth to the humans that can access and use them for commercial purposes. One of the key technologies for harvesting these resources is robotic mining of regolith, minerals, ices and metals. The harsh environment and vast distances create challenges that are handled best by robotic machines working in collaboration with human explorers. Humans will benefit from the resources that will be mined by robots. They will visit outposts and mining camps as required for exploration, commerce and scientific research, but a continuous presence is most likely to be provided by robotic mining machines that are remotely controlled by humans. There have been a variety of extra-terrestrial robotic mining concepts proposed over the last 100 years and this paper will attempt to summarize and review concepts in the public domain (government, industry and academia) to serve as an informational resource for future mining robot developers and operators. The challenges associated with these concepts will be discussed and feasibility will be assessed. Future needs associated with commercial efforts will also be investigated.
A Review of Extra-Terrestrial Mining Concepts
NASA Technical Reports Server (NTRS)
Mueller, R. P.; van Susante, P. J.
2012-01-01
Outer space contains a vast amount of resources that offer virtually unlimited wealth to the humans that can access and use them for commercial purposes. One of the key technologies for harvesting these resources is robotic mining of regolith, minerals, ices and metals. The harsh environment and vast distances create challenges that are handled best by robotic machines working in collaboration with human explorers. Humans will benefit from the resources that will be mined by robots. They will visit outposts and mining camps as required for exploration, commerce and scientific research, but a continuous presence is most likely to be provided by robotic mining machines that are remotely controlled by humans. There have been a variety of extra-terrestrial robotic mining concepts proposed over the last 40 years and this paper will attempt to summarize and review concepts in the public domain (government, industry and academia) to serve as an informational resource for future mining robot developers and operators. The challenges associated with these concepts will be discussed and feasibility will be assessed. Future needs associated with commercial efforts will also be investigated.
Mayer, Miguel A; Karampiperis, Pythagoras; Kukurikos, Antonis; Karkaletsis, Vangelis; Stamatakis, Kostas; Villarroel, Dagmar; Leis, Angela
2011-06-01
The number of health-related websites is increasing day-by-day; however, their quality is variable and difficult to assess. Various "trust marks" and filtering portals have been created in order to assist consumers in retrieving quality medical information. Consumers are using search engines as the main tool to get health information; however, the major problem is that the meaning of the web content is not machine-readable in the sense that computers cannot understand words and sentences as humans can. In addition, trust marks are invisible to search engines, thus limiting their usefulness in practice. During the last five years there have been different attempts to use Semantic Web tools to label health-related web resources to help internet users identify trustworthy resources. This paper discusses how Semantic Web technologies can be applied in practice to generate machine-readable labels and display their content, as well as to empower end-users by providing them with the infrastructure for expressing and sharing their opinions on the quality of health-related web resources.
NASA Astrophysics Data System (ADS)
Robeck, E.; Camphire, G.; Brendan, S.; Celia, T.
2016-12-01
There exists a wide array of high quality resources to support K-12 teaching and motivate student interest in the geosciences. Yet connecting teachers to those resources can be a challenge. Teachers working to implement the NGSS can benefit from accessing the wide range of existing geoscience resources, and from becoming part of supportive networks of geoscience educators, researchers, and advocates. Engaging teachers in such networks can be facilitated by providing them with information about organizations, resources, and opportunities. The American Geosciences Institute (AGI) has developed two key resources that have great value in supporting NGSS implementation in these ways: Earth Science Week, and the Education Resources Network in AGI's Center for Geoscience and Society. For almost twenty years, Earth Science Week has been AGI's premier annual outreach program designed to celebrate the geosciences. Through its extensive web-based resources, as well as the physical kits of posters, DVDs, calendars and other printed materials, Earth Science Week offers an array of resources and opportunities to connect with the education-focused work of important geoscience organizations such as NASA, the National Park Service, HHMI, esri, and many others. Recently, AGI has initiated a process of tagging these and other resources to the NGSS so as to facilitate their use as teachers develop their instruction. Organizing Earth Science Week around themes that are compatible with topics within the NGSS contributes to the overall coherence of the diverse array of materials, while also suggesting potential foci for investigations and instructional units. More recently, AGI has launched its Center for Geoscience and Society, which is designed to engage the widest range of audiences in building geoscience awareness. As part of the Center's work, it has launched the Education Resources Network (ERN), an extensive searchable database of all manner of resources for geoscience education. Where appropriate, the resources on the ERN are tagged to components of the NGSS, making this a one-stop portal for geoscience education materials. Providers of non-commercial geoscience education resources, especially those that align with the NGSS, can contact AGI so that their materials can be added to Earth Science Week and the ERN.
NASA Astrophysics Data System (ADS)
Soltani, E.; Shahali, H.; Zarepour, H.
2011-01-01
In this paper, the effect of machining parameters, namely lubricant emulsion percentage and tool material, on surface roughness has been studied in the machining of EN-AC 48000 aluminum alloy. EN-AC 48000 is an important alloy in industry, and its machining is of vital importance due to built-up edge formation and tool wear. An L9 Taguchi standard orthogonal array has been applied as the experimental design to investigate the effect of the factors and their interaction. Nine machining tests have been carried out with three random replications, resulting in 27 experiments. Three types of cutting tools, including coated carbide (CD1810), uncoated carbide (H10), and polycrystalline diamond (CD10), have been used in this research. The lubricant emulsion percentage is set at three levels: 3%, 5% and 10%. Statistical analysis has been employed to study the effect of the factors and their interactions using the ANOVA method. Moreover, the optimal factor levels have been obtained through signal-to-noise (S/N) ratio analysis. A regression model has also been provided to predict the surface roughness. Finally, the results of the confirmation tests are presented to verify the adequacy of the predictive model. In this research, surface quality was improved by 9% using lubricant and the statistical optimization method.
Performance of Ti-multilayer coated tool during machining of MDN431 alloyed steel
NASA Astrophysics Data System (ADS)
Badiger, Pradeep V.; Desai, Vijay; Ramesh, M. R.
2018-04-01
Turbine forgings and other components are required to have high resistance to corrosion and oxidation, which is why they are highly alloyed with Ni and Cr. Midhani manufactures one such material, MDN431, a hard-to-machine steel with high hardness and strength. PVD-coated inserts provide an answer to this problem with their state-of-the-art coatings on WC tools. Machinability studies were carried out on MDN431 steel using uncoated and Ti-multilayer coated WC tool inserts and the Taguchi optimisation technique. During the present investigation, speed (398-625 rpm), feed (0.093-0.175 mm/rev), and depth of cut (0.2-0.4 mm) were varied according to a Taguchi L9 orthogonal array, and the cutting forces and surface roughness (Ra) were subsequently measured. Optimization of the obtained results was done using the Taguchi technique for cutting forces and surface roughness. Using the Taguchi technique, a linear-fit regression model was developed for the combination of each input variable. The experimental results were compared and the developed model was found to be adequate, as supported by proof trials. For the uncoated insert, cutting force and surface roughness depend linearly on speed, feed and depth of cut, whereas for the coated insert the dependence on speed, feed and depth of cut is inverse for both cutting force and surface roughness. The machined surfaces produced by the coated and uncoated inserts during machining of MDN431 were studied using an optical profilometer.
AAA+ Machines of Protein Destruction in Mycobacteria.
Alhuwaider, Adnan Ali H; Dougan, David A
2017-01-01
The bacterial cytosol is a complex mixture of macromolecules (proteins, DNA, and RNA), which collectively are responsible for an enormous array of cellular tasks. Proteins are central to most, if not all, of these tasks, and as such their maintenance (commonly referred to as protein homeostasis or proteostasis) is vital for cell survival during normal and stressful conditions. The two key aspects of protein homeostasis are (i) the correct folding and assembly of proteins (coupled with their delivery to the correct cellular location) and (ii) the timely removal of unwanted or damaged proteins from the cell, which are performed by molecular chaperones and proteases, respectively. A major class of proteins that contribute to both of these tasks is the AAA+ (ATPases associated with a variety of cellular activities) protein superfamily. Although much is known about the structure of these machines and how they function in the model Gram-negative bacterium Escherichia coli, we are only just beginning to discover the molecular details of these machines and how they function in mycobacteria. Here we review the different AAA+ machines that contribute to proteostasis in mycobacteria. Primarily we focus on recent advances in the structure and function of AAA+ proteases, the substrates they recognize and the cellular pathways they control. Finally, we discuss recent developments related to these machines as novel drug targets.
Wide-bandwidth high-resolution search for extraterrestrial intelligence
NASA Technical Reports Server (NTRS)
Horowitz, Paul
1995-01-01
Research was accomplished during the third year of the grant on the BETA architecture, an FFT array, a feature extractor, the Pentium array and workstation, and a radio astronomy spectrometer. The BETA (this SETI project) system architecture has been evolving generally in the direction of greater robustness against terrestrial interference. The new design adds a powerful state-memory feature, multiple simultaneous thresholds, and the ability to integrate multiple spectra in a flexible state-machine architecture. The FFT array is reported with regard to its hardware verification, array production, and control. The feature extractor is responsible for maintaining a moving baseline, recognizing large spectral peaks, following the progress of previously identified interesting spectral regions, and blocking signals from regions previously identified as containing interference. The Pentium array consists of 21 Pentium-based PC motherboards, each with 16 MByte of RAM and an Ethernet interface. Each motherboard receives and processes the data from a feature extractor/correlator board set, passing on the results of a first analysis to the central Unix workstation (through which each is also booted). The radio astronomy spectrometer is a technological spinoff from SETI work. It is proposed to be a combined spectrometer and power-accumulator, for use at Arecibo Observatory to search for neutral hydrogen emission from condensations of neutral hydrogen at high redshift (z = 5).
Content-based image retrieval with ontological ranking
NASA Astrophysics Data System (ADS)
Tsai, Shen-Fu; Tsai, Min-Hsuan; Huang, Thomas S.
2010-02-01
Images are a much more powerful medium of expression than text, as the adage says: "One picture is worth a thousand words." Compared with text consisting of an array of words, an image has more degrees of freedom and therefore a more complicated structure. However, the less constrained structure of images presents researchers in the computer vision community with the tough task of teaching machines to understand and organize images, especially when only a limited number of learning examples and limited background knowledge are given. The advance of internet and web technology in the past decade has changed the way humans gain knowledge. People can hence exchange knowledge with others by discussing and contributing information on the web. As a result, the web pages on the internet have become a living and growing source of information. One is therefore tempted to wonder whether machines can learn from this web knowledge base as well. Indeed, it is possible to make computers learn from the internet and provide humans with more meaningful knowledge. In this work, we explore this novel possibility for image understanding applied to semantic image search. We exploit web resources to obtain links from images to keywords and a semantic ontology constituting humans' general knowledge. The former maps visual content to related text, in contrast to the traditional way of associating images with surrounding text; the latter provides relations between concepts for machines to understand to what extent and in what sense an image is close to the image search query. With the aid of these two tools, the resulting image search system is content-based and, moreover, organized. The returned images are ranked and organized such that semantically similar images are grouped together and given a rank based on their semantic closeness to the input query. The novelty of the system is twofold: first, images are retrieved based not only on text cues but on their actual contents as well; second, the grouping is different from pure visual-similarity clustering. More specifically, the inferred concepts of each image in a group are examined in the context of a huge concept ontology to determine their true relations with what people have in mind when doing image search.
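A toy version of the ontological ranking step is sketched below: images are ordered by the semantic closeness of their inferred concepts to the query, here using WordNet path similarity as a stand-in for the concept ontology described above. The image identifiers and concept lists are invented, and the NLTK WordNet corpus must be available (nltk.download("wordnet")).

```python
# Sketch: rank images by WordNet similarity between the query concept and the
# concepts inferred for each image.
from nltk.corpus import wordnet as wn

def concept_similarity(a, b):
    """Best path similarity between any noun senses of the two words (0 if none)."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(a, pos=wn.NOUN)
              for s2 in wn.synsets(b, pos=wn.NOUN)]
    return max(scores, default=0.0)

images = {                      # hypothetical image id -> concepts inferred from content
    "img_01.jpg": ["beach", "sea", "sand"],
    "img_02.jpg": ["car", "road"],
    "img_03.jpg": ["ocean", "wave"],
}
query = "coast"
ranked = sorted(images,
                key=lambda img: max(concept_similarity(query, c) for c in images[img]),
                reverse=True)
print(ranked)
```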
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-19
... Conservation Area (NCA), addressed in the September 2008 Resource Management Plan (RMP) and Record of Decision... an array of management actions designed to conserve natural and cultural resources on BLM... analysis can be found in Chapter 4 of the Proposed Resource Management Plan and Final Environmental Impact...
Bringing Technology to the Resource Manager ... and Not the Reverse
Daniel L. Schmoldt
1992-01-01
Many natural resource managers envision their jobs as pressed between the resources that they have a mandate to manage and the technological aides that are essential tools to conduct those management activities. On the one hand, managers are straining to understand an extremely complex array of natural systems and the management pressures placed on those systems. Then...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curry, Bennett
The Arizona Commerce Authority (ACA) conducted an Innovation in Advanced Manufacturing Grant Competition to support and grow southern and central Arizona’s Aerospace and Defense (A&D) industry and its supply chain. The problem statement for this grant challenge was that many A&D machining processes utilize older-generation CNC machine tool technologies that can result in an inefficient use of resources – energy, time and materials – compared to the latest state-of-the-art CNC machines. Competitive awards funded projects to develop innovative new tools and technologies that reduce energy consumption for older-generation machine tools and foster working relationships between industry small to medium-sized manufacturing enterprises and third-party solution providers. During the 42-month term of this grant, 12 competitive awards were made. Final reports have been included with this submission.
Dictionary Based Machine Translation from Kannada to Telugu
NASA Astrophysics Data System (ADS)
Sindhu, D. V.; Sagar, B. M.
2017-08-01
Machine Translation is the task of translating from one language to another. For languages with limited linguistic resources, such as Kannada and Telugu, a dictionary-based approach is the best option. This paper focuses on dictionary-based machine translation from Kannada to Telugu. The proposed methodology uses a dictionary to translate word by word, without much modelling of the semantic correlation between words. The dictionary-based machine translation process has the following sub-processes: morphological analysis, dictionary lookup, transliteration, transfer grammar and morphological generation. As part of this work, a bilingual dictionary with 8000 entries was developed and a suffix mapping table at the tag level was built. The system was tested on children's stories. In the near future the system can be further improved by defining transfer grammar rules.
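The core word-by-word lookup can be sketched in a few lines. The entries below are romanised placeholders, not the 8000-entry Kannada-Telugu dictionary built in the paper, and no morph analysis, transliteration or transfer grammar is applied here.

```python
# Minimal dictionary-based, word-by-word translation sketch.
bilingual_dict = {          # hypothetical Kannada -> Telugu entries (romanised)
    "mane": "illu",         # house
    "neeru": "neellu",      # water
    "pustaka": "pustakam",  # book
}

def translate(sentence, dictionary):
    """Replace each source word with its target entry; keep unknown words as-is."""
    return " ".join(dictionary.get(word, word) for word in sentence.split())

print(translate("mane pustaka", bilingual_dict))
```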
Colorimetric Recognition of Aldehydes and Ketones.
Li, Zheng; Fang, Ming; LaGasse, Maria K; Askim, Jon R; Suslick, Kenneth S
2017-08-07
A colorimetric sensor array has been designed for the identification of and discrimination among aldehydes and ketones in vapor phase. Due to rapid chemical reactions between the solid-state sensor elements and gaseous analytes, distinct color difference patterns were produced and digitally imaged for chemometric analysis. The sensor array was developed from classical spot tests using aniline and phenylhydrazine dyes that enable molecular recognition of a wide variety of aliphatic or aromatic aldehydes and ketones, as demonstrated by hierarchical cluster, principal component, and support vector machine analyses. The aldehyde/ketone-specific sensors were further employed for differentiation among and identification of ten liquor samples (whiskies, brandy, vodka) and ethanol controls, showing its potential applications in the beverage industry. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Demonstration of Advanced EMI Models for Live-Site UXO Discrimination at Waikoloa, Hawaii
2015-12-01
magnetic source models PNN Probabilistic Neural Network SERDP Strategic Environmental Research and Development Program SLO San Luis Obispo...SNR Signal to noise ratio SVM Support vector machine TD Time Domain TEMTADS Time Domain Electromagnetic Towed Array Detection System TOI... intrusive procedure, which was used by Parsons at WMA, failed to document accurately all intrusive results, or failed to detect and clear all UXO like
A Brain-Machine-Brain Interface for Rewiring of Cortical Circuitry after Traumatic Brain Injury
2013-09-01
implemented to significantly decrease the IIR system response time, especially when artifacts were highly reproducible in consecutive stimulation...cycles. The proposed system architecture was hardware-implemented on a field-programmable gate array (FPGA) and tested using two sets of prerecorded...its FPGA implementation and testing with prerecorded neural datasets are reported in a manuscript currently in press with the IEEE Transactions on
The postdoctoral apprenticeship.
Neill, Ushma S
2016-10-03
Much has been written already about whether the scientific machine is churning out too many PhDs and postdocs when there are a limited number of academic jobs and the competition for funding and space in competitive journals is intense. But gratifyingly, there exists a vast array of other scientific careers. We need to mentor and advise trainees about the diverse and rewarding professional opportunities that are available beyond the postdoctoral apprenticeship period.
Sputter coating of microspherical substrates by levitation
Lowe, Arthur T.; Hosford, Charles D.
1981-01-01
Microspheres are substantially uniformly coated with metals or nonmetals by simultaneously levitating them and sputter coating them at total chamber pressures less than 1 torr. A collimated hole structure 12 comprising a parallel array of upwardly projecting individual gas outlets 16 is machined out to form a dimple 11. Glass microballoons, which are particularly useful in laser fusion applications, can be substantially uniformly coated using the coating method and apparatus.
A Navier-Stokes Chimera Code on the Connection Machine CM-5: Design and Performance
NASA Technical Reports Server (NTRS)
Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)
1994-01-01
We have implemented a three-dimensional compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the 'chimera' approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. A parallel machine like the CM-5 is well-suited for finite-difference methods on structured grids. The regular pattern of connections of a structured mesh maps well onto the architecture of the machine. So the first design choice, finite differences on a structured mesh, is natural. We use centered differences in space, with added artificial dissipation terms. When numerically solving the Navier-Stokes equations, there are liable to be some mesh cells near a solid body that are small in at least one direction. This mesh cell geometry can impose a very severe CFL (Courant-Friedrichs-Lewy) condition on the time step for explicit time-stepping methods. Thus, though explicit time-stepping is well-suited to the architecture of the machine, we have adopted implicit time-stepping. We have further taken the approximate factorization approach. This creates the need to solve large banded linear systems and creates the first possible barrier to an efficient algorithm. To overcome this first possible barrier we have considered two options. The first is just to solve the banded linear systems with data spread over the whole machine, using whatever fast method is available. This option is adequate for solving scalar tridiagonal systems, but for scalar pentadiagonal or block tridiagonal systems it is somewhat slower than desired. The second option is to 'transpose' the flow and geometry variables as part of the time-stepping process: Start with x-lines of data in-processor. Form explicit terms in x, then transpose so y-lines of data are in-processor. Form explicit terms in y, then transpose so z-lines are in processor. Form explicit terms in z, then solve linear systems in the z-direction. Transpose to the y-direction, then solve linear systems in the y-direction. Finally transpose to the x direction and solve linear systems in the x-direction. This strategy avoids inter-processor communication when differencing and solving linear systems, but requires a large amount of communication when doing the transposes. The transpose method is more efficient than the non-transpose strategy when dealing with scalar pentadiagonal or block tridiagonal systems. For handling geometrically complex problems the chimera strategy was adopted. For multiple zone cases we compute on each zone sequentially (using the whole parallel machine), then send the chimera interpolation data to a distributed data structure (array) laid out over the whole machine. This information transfer implies an irregular communication pattern, and is the second possible barrier to an efficient algorithm. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran. We make use of the Connection Machine Scientific Software Library (CMSSL) for the linear solver and array transpose operations.
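A rough, sequential sketch of the transpose strategy described above is given below in Python/NumPy (illustrative only; the actual code uses CMF and CMSSL on the CM-5 and distributes the transposes across processors). The array is reordered so the solve direction is contiguous, all lines are solved at once, and the ordering is then restored.

import numpy as np
from scipy.linalg import solve_banded

def solve_lines(ab, rhs):
    """Solve the same tridiagonal system along the last axis of rhs, all lines at once."""
    flat = rhs.reshape(-1, rhs.shape[-1]).T          # (n, number_of_lines)
    return solve_banded((1, 1), ab, flat).T.reshape(rhs.shape)

n = 16
field = np.random.rand(n, n, n)                      # (z, y, x) with x varying fastest
ab = np.array([[-1.0] * n, [2.0] * n, [-1.0] * n])   # bands of a 1-D implicit operator
ab[0, 0] = ab[2, -1] = 0.0                           # unused corner band entries

field = solve_lines(ab, field)                                        # x-direction solves
field = solve_lines(ab, field.transpose(0, 2, 1)).transpose(0, 2, 1)  # transpose, y-direction solves
field = solve_lines(ab, field.transpose(2, 1, 0)).transpose(2, 1, 0)  # transpose, z-direction solves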
Scheiding, Sebastian; Yi, Allen Y; Gebhardt, Andreas; Li, Lei; Risse, Stefan; Eberhardt, Ramona; Tünnermann, Andreas
2011-11-21
We report what is, to our knowledge, the first approach to diamond turning a microoptical lens array on a steeply curved substrate by use of a voice coil fast tool servo. In recent years ultraprecision machining has been employed to manufacture accurate optical components with 3D structure for beam shaping, imaging, and nonimaging applications. As a result, geometries that are difficult or impossible to manufacture using lithographic techniques might be fabricated using small diamond tools with well-defined cutting edges. These 3D structures show no rotational symmetry but rather high-frequency asymmetric features, and thus can be treated as freeform geometries. To transfer the 3D surface data with the high-frequency freeform features into a numerical control code for machining, the commonly piecewise-differentiable surfaces are represented as a cloud of individual points. Based on these numeric data, the tool radius correction is calculated to account for the cutting-edge geometry. Discontinuities of the cutting tool locations due to abrupt slope changes on the substrate surface are bridged using cubic spline interpolation. When superimposed with the trajectory of the rotationally symmetric substrate, the complete microoptical geometry in 3D space is established. Details of the fabrication process and performance evaluation are described. © 2011 Optical Society of America
MEMS deformable mirror embedded wavefront sensing and control system
NASA Astrophysics Data System (ADS)
Owens, Donald; Schoen, Michael; Bush, Keith
2006-01-01
Electrostatic Membrane Deformable Mirror (MDM) technology developed using silicon bulk micro-machining techniques offers the potential of providing low-cost, compact wavefront control systems for diverse optical system applications. Electrostatic mirror construction using bulk micro-machining allows for custom designs to satisfy wavefront control requirements for most optical systems. An electrostatic MDM consists of a thin membrane, generally with a thin metal or multi-layer high-reflectivity coating, suspended over an actuator pad array that is connected to a high-voltage driver. Voltages applied to the array elements deflect the membrane to provide an optical surface capable of correcting for measured optical aberrations in a given system. Electrostatic membrane DM designs are derived from well-known principles of membrane mechanics and electrostatics, the desired optical wavefront control requirements, and the current limitations of mirror fabrication and actuator drive electronics. MDM performance is strongly dependent on mirror diameter and air damping in meeting desired spatial and temporal frequency requirements. In this paper, we present wavefront control results from an embedded wavefront control system developed around a commercially available high-speed camera and an AgilOptics Unifi MDM driver using USB 2.0 communications and the Linux development environment. This new product, ClariFast TM, combines our previous Clarifi TM product offering into a faster more streamlined version dedicated strictly to Hartmann Wavefront sensing.
On-line Machine Learning and Event Detection in Petascale Data Streams
NASA Astrophysics Data System (ADS)
Thompson, David R.; Wagstaff, K. L.
2012-01-01
Traditional statistical data mining involves off-line analysis in which all data are available and equally accessible. However, petascale datasets have challenged this premise since it is often impossible to store, let alone analyze, the relevant observations. This has led the machine learning community to investigate adaptive processing chains where data mining is a continuous process. Here pattern recognition permits triage and followup decisions at multiple stages of a processing pipeline. Such techniques can also benefit new astronomical instruments such as the Large Synoptic Survey Telescope (LSST) and Square Kilometre Array (SKA) that will generate petascale data volumes. We summarize some machine learning perspectives on real time data mining, with representative cases of astronomical applications and event detection in high volume datastreams. The first is a "supervised classification" approach currently used for transient event detection at the Very Long Baseline Array (VLBA). It injects known signals of interest - faint single-pulse anomalies - and tunes system parameters to recover these events. This permits meaningful event detection for diverse instrument configurations and observing conditions whose noise cannot be well-characterized in advance. Second, "semi-supervised novelty detection" finds novel events based on statistical deviations from previous patterns. It detects outlier signals of interest while considering known examples of false alarm interference. Applied to data from the Parkes pulsar survey, the approach identifies anomalous "peryton" phenomena that do not match previous event models. Finally, we consider online light curve classification that can trigger adaptive followup measurements of candidate events. Classifier performance analyses suggest optimal survey strategies, and permit principled followup decisions from incomplete data. These examples trace a broad range of algorithm possibilities available for online astronomical data mining. This talk describes research performed at the Jet Propulsion Laboratory, California Institute of Technology. Copyright 2012, All Rights Reserved. U.S. Government support acknowledged.
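As a minimal sketch of the "semi-supervised novelty detection" idea described above (not the VLBA or Parkes pipelines; the two-dimensional features and values here are invented for illustration), one can fit a one-class model on archived nominal data and flag statistical outliers in an incoming chunk of the stream:

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
archived = rng.normal(0.0, 1.0, size=(500, 2))           # features of previously seen, nominal events
chunk = np.vstack([rng.normal(0.0, 1.0, size=(95, 2)),   # new stream data ...
                   rng.normal(6.0, 0.5, size=(5, 2))])   # ... containing a few anomalous events

detector = OneClassSVM(nu=0.05, gamma="scale").fit(archived)
candidates = np.where(detector.predict(chunk) == -1)[0]  # indices flagged for follow-up
print(candidates)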
Development Of A Three-Dimensional Circuit Integration Technology And Computer Architecture
NASA Astrophysics Data System (ADS)
Etchells, R. D.; Grinberg, J.; Nudd, G. R.
1981-12-01
This paper is the first of a series 1,2,3 describing a range of efforts at Hughes Research Laboratories, which are collectively referred to as "Three-Dimensional Microelectronics." The technology being developed is a combination of a unique circuit fabrication/packaging technology and a novel processing architecture. The packaging technology greatly reduces the parasitic impedances associated with signal-routing in complex VLSI structures, while simultaneously allowing circuit densities orders of magnitude higher than the current state-of-the-art. When combined with the 3-D processor architecture, the resulting machine exhibits a one- to two-order of magnitude simultaneous improvement over current state-of-the-art machines in the three areas of processing speed, power consumption, and physical volume. The 3-D architecture is essentially that commonly referred to as a "cellular array", with the ultimate implementation having as many as 512 x 512 processors working in parallel. The three-dimensional nature of the assembled machine arises from the fact that the chips containing the active circuitry of the processor are stacked on top of each other. In this structure, electrical signals are passed vertically through the chips via thermomigrated aluminum feedthroughs. Signals are passed between adjacent chips by micro-interconnects. This discussion presents a broad view of the total effort, as well as a more detailed treatment of the fabrication and packaging technologies themselves. The results of performance simulations of the completed 3-D processor executing a variety of algorithms are also presented. Of particular pertinence to the interests of the focal-plane array community is the simulation of the UNICORNS nonuniformity correction algorithms as executed by the 3-D architecture.
2015-09-28
the performance of log-and-replay can degrade significantly for VMs configured with multiple virtual CPUs, since the shared memory communication...whether based on checkpoint replication or log-and-replay, existing HA approaches use in-memory backups. The backup VM sits in the memory of a...efficiently. 15. SUBJECT TERMS High-availability virtual machines, live migration, memory and traffic overheads, application suspension, Java
Distributed Antenna-Coupled TES for FIR Detectors Arrays
NASA Technical Reports Server (NTRS)
Day, Peter K.; Leduc, Henry G.; Dowell, C. Darren; Lee, Richard A.; Zmuidzinas, Jonas
2007-01-01
We describe a new architecture for a superconducting detector for the submillimeter and far-infrared. This detector uses a distributed hot-electron transition edge sensor (TES) to collect the power from a focal-plane-filling slot antenna array. The sensors lie directly across the slots of the antenna and match the antenna impedance of about 30 ohms. Each pixel contains many sensors that are wired in parallel as a single distributed TES, which results in a low impedance that readily matches to a multiplexed SQUID readout. These detectors are inherently polarization sensitive, with very low cross-polarization response, but can also be configured to sum both polarizations. The dual-polarization design can have a bandwidth of 50%. The use of electron-phonon decoupling eliminates the need for micro-machining, making the focal plane much easier to fabricate than with absorber-coupled, mechanically isolated pixels. We discuss applications of these detectors and a hybridization scheme compatible with arrays of tens of thousands of pixels.
Precise and Efficient Static Array Bound Checking for Large Embedded C Programs
NASA Technical Reports Server (NTRS)
Venet, Arnaud
2004-01-01
In this paper we describe the design and implementation of a static array-bound checker for a family of embedded programs: the flight control software of recent Mars missions. These codes are large (up to 250 KLOC), pointer intensive, heavily multithreaded and written in an object-oriented style, which makes their analysis very challenging. We designed a tool called C Global Surveyor (CGS) that can analyze the largest code in a couple of hours with a precision of 80%. The scalability and precision of the analyzer are achieved by using an incremental framework in which a pointer analysis and a numerical analysis of array indices mutually refine each other. CGS has been designed so that it can distribute the analysis over several processors in a cluster of machines. To the best of our knowledge this is the first distributed implementation of static analysis algorithms. Throughout the paper we will discuss the scalability setbacks that we encountered during the construction of the tool and their impact on the initial design decisions.
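The mutual refinement of pointer and numerical analyses is beyond a short sketch, but the core numerical idea, classifying an access from an interval over-approximation of the index, can be illustrated as follows (a generic interval-domain example, not CGS's actual algorithm):

class Interval:
    """Abstract value tracking the possible range of an integer expression."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

def check_array_access(index, array_length):
    """Classify an access a[i] given the inferred interval for i."""
    if index.lo >= 0 and index.hi < array_length:
        return "proved safe"
    if index.hi < 0 or index.lo >= array_length:
        return "definite out-of-bounds error"
    return "warning: possible out-of-bounds"

i = Interval(0, 249)                                  # loop index known to stay below 250
print(check_array_access(i, 256))                     # proved safe for a 256-element buffer
print(check_array_access(i + Interval(8, 8), 256))    # warning: i + 8 may reach 257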
ERIC Educational Resources Information Center
Liao, Yuan
2011-01-01
The virtualization of computing resources, as represented by the sustained growth of cloud computing, continues to thrive. Information Technology departments are building their private clouds due to the perception of significant cost savings by managing all physical computing resources from a single point and assigning them to applications or…
Design Tools for Evaluating Multiprocessor Programs
1976-07-01
than large uniprocessing machines, and 2. economies of scale in manufacturing. Perhaps the most compelling reason (possibly a consequence of the...speed, redundancy, (in)efficiency, resource utilization, and economies of the components. [Browne 73, Lehman 66] 6. How can the system be scheduled...measures are interesting about the computation? Some may be: speed, redundancy, (in)efficiency, resource utilization, and economies of the components
The development of machine technology processing for earth resource survey
NASA Technical Reports Server (NTRS)
Landgrebe, D. A.
1970-01-01
The following technologies are considered for automatic processing of earth resources data: (1) registration of multispectral and multitemporal images, (2) digital image display systems, (3) data system parameter effects on satellite remote sensing systems, and (4) data compression techniques based on spectral redundancy. The importance of proper spectral band and compression algorithm selections is pointed out.
Constrained Optimization Problems in Cost and Managerial Accounting--Spreadsheet Tools
ERIC Educational Resources Information Center
Amlie, Thomas T.
2009-01-01
A common problem addressed in Managerial and Cost Accounting classes is that of selecting an optimal production mix given scarce resources. That is, if a firm produces a number of different products, and is faced with scarce resources (e.g., limitations on labor, materials, or machine time), what combination of products yields the greatest profit…
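The product-mix problem described here is a small linear program. Although the article solves it with spreadsheet tools, the same kind of example can be sketched in Python with SciPy (the profits, resource requirements, and capacities below are invented for illustration):

from scipy.optimize import linprog

# Maximize 30*x1 + 45*x2 (profit per unit) subject to scarce labor and machine hours.
c = [-30.0, -45.0]                 # linprog minimizes, so negate the profit coefficients
A_ub = [[2.0, 4.0],                # labor hours needed per unit of products 1 and 2
        [3.0, 2.0]]                # machine hours needed per unit
b_ub = [100.0, 90.0]               # available labor and machine hours
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)             # optimal production mix and the resulting profit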
A STORAGE AND RETRIEVAL SYSTEM FOR DOCUMENTS IN INSTRUCTIONAL RESOURCES. REPORT NO. 13.
ERIC Educational Resources Information Center
DIAMOND, ROBERT M.; LEE, BERTA GRATTAN
IN ORDER TO IMPROVE INSTRUCTION WITHIN TWO-YEAR LOWER DIVISION COURSES, A COMPREHENSIVE RESOURCE LIBRARY WAS DEVELOPED AND A SIMPLIFIED CATALOGING AND INFORMATION RETRIEVAL SYSTEM WAS APPLIED TO IT. THE ROYAL MCBEE "KEYDEX" SYSTEM, CONTAINING THREE MAJOR COMPONENTS--A PUNCH MACHINE, FILE CARDS, AND A LIGHT BOX--WAS USED. CARDS WERE HEADED WITH KEY…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobson, Paul T; Hagerman, George; Scott, George
This project estimates the naturally available and technically recoverable U.S. wave energy resources, using a 51-month Wavewatch III hindcast database developed especially for this study by National Oceanographic and Atmospheric Administration's (NOAA's) National Centers for Environmental Prediction. For total resource estimation, wave power density in terms of kilowatts per meter is aggregated across a unit diameter circle. This approach is fully consistent with accepted global practice and includes the resource made available by the lateral transfer of wave energy along wave crests, which enables wave diffraction to substantially reestablish wave power densities within a few kilometers of a linear array, even for fixed terminator devices. The total available wave energy resource along the U.S. continental shelf edge, based on accumulating unit circle wave power densities, is estimated to be 2,640 TWh/yr, broken down as follows: 590 TWh/yr for the West Coast, 240 TWh/yr for the East Coast, 80 TWh/yr for the Gulf of Mexico, 1570 TWh/yr for Alaska, 130 TWh/yr for Hawaii, and 30 TWh/yr for Puerto Rico. The total recoverable wave energy resource, as constrained by an array capacity packing density of 15 megawatts per kilometer of coastline, with a 100-fold operating range between threshold and maximum operating conditions in terms of input wave power density available to such arrays, yields a total recoverable resource along the U.S. continental shelf edge of 1,170 TWh/yr, broken down as follows: 250 TWh/yr for the West Coast, 160 TWh/yr for the East Coast, 60 TWh/yr for the Gulf of Mexico, 620 TWh/yr for Alaska, 80 TWh/yr for Hawaii, and 20 TWh/yr for Puerto Rico.
NASA Astrophysics Data System (ADS)
Peckham, S. D.
2017-12-01
Standardized, deep descriptions of digital resources (e.g. data sets, computational models, software tools and publications) make it possible to develop user-friendly software systems that assist scientists with the discovery and appropriate use of these resources. Semantic metadata makes it possible for machines to take actions on behalf of humans, such as automatically identifying the resources needed to solve a given problem, retrieving them and then automatically connecting them (despite their heterogeneity) into a functioning workflow. Standardized model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, governing equations and the numerical methods used to solve them, discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), and model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. A carefully constructed, unambiguous and rules-based schema that addresses this problem, called the Geoscience Standard Names ontology, will be presented; it utilizes Semantic Web best practices and technologies. It has also been designed to work across science domains and to be readable by both humans and machines.
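A minimal sketch of what such a machine-readable "deep description" could look like is given below; the model, variable names, and values are hypothetical and only loosely follow an object__quantity naming pattern, not the actual Geoscience Standard Names ontology:

import json

model_metadata = {
    "model_name": "example_channel_routing_model",            # hypothetical model
    "assumptions": ["kinematic wave approximation"],
    "governing_equations": ["dQ/dt + c * dQ/dx = 0"],
    "numerical_method": "explicit finite difference",
    "space_discretization": {"grid": "uniform rectilinear", "spacing_m": 100.0},
    "time_stepping": {"scheme": "forward Euler", "step_s": 60.0},
    "variables": {
        "inputs": ["channel_water__volume_flow_rate"],         # illustrative standard-name-style strings
        "outputs": ["channel_outlet_water__volume_flow_rate"],
    },
    "parameters": {"kinematic_wave_celerity_m_s": 1.5},
}
print(json.dumps(model_metadata, indent=2))                    # a form other tools could parse and match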
Three-dimensionally printed biological machines powered by skeletal muscle.
Cvetkovic, Caroline; Raman, Ritu; Chan, Vincent; Williams, Brian J; Tolish, Madeline; Bajaj, Piyush; Sakar, Mahmut Selman; Asada, H Harry; Saif, M Taher A; Bashir, Rashid
2014-07-15
Combining biological components, such as cells and tissues, with soft robotics can enable the fabrication of biological machines with the ability to sense, process signals, and produce force. An intuitive demonstration of a biological machine is one that can produce motion in response to controllable external signaling. Whereas cardiac cell-driven biological actuators have been demonstrated, the requirements of these machines to respond to stimuli and exhibit controlled movement merit the use of skeletal muscle, the primary generator of actuation in animals, as a contractile power source. Here, we report the development of 3D printed hydrogel "bio-bots" with an asymmetric physical design and powered by the actuation of an engineered mammalian skeletal muscle strip to result in net locomotion of the bio-bot. Geometric design and material properties of the hydrogel bio-bots were optimized using stereolithographic 3D printing, and the effect of collagen I and fibrin extracellular matrix proteins and insulin-like growth factor 1 on the force production of engineered skeletal muscle was characterized. Electrical stimulation triggered contraction of cells in the muscle strip and net locomotion of the bio-bot with a maximum velocity of ∼ 156 μm s(-1), which is over 1.5 body lengths per min. Modeling and simulation were used to understand both the effect of different design parameters on the bio-bot and the mechanism of motion. This demonstration advances the goal of realizing forward-engineered integrated cellular machines and systems, which can have a myriad array of applications in drug screening, programmable tissue engineering, drug delivery, and biomimetic machine design.
Integration of Openstack cloud resources in BES III computing cluster
NASA Astrophysics Data System (ADS)
Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan
2017-10-01
Cloud computing provides a new technical means for the data processing of high energy physics experiments. However, in a traditional job management system the resources of each queue are fixed and their usage is static. In order to make the system simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) to integrate IHEPCloud and different batch systems such as Torque and HTCondor. Vpmanager provides dynamic virtual machine scheduling according to the job queue. The BES III use case results show that resource efficiency is greatly improved.
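A highly simplified sketch of queue-driven virtual machine scheduling is shown below. The helper callables for queue inspection and VM boot/teardown are hypothetical placeholders, not vpmanager's or OpenStack's API:

import time

def scale_virtual_cluster(pending_jobs, idle_vms, boot_vm, destroy_vm,
                          jobs_per_vm=4, poll_seconds=60, cycles=10):
    """Grow or shrink a pool of virtual worker nodes to follow the batch queue depth."""
    for _ in range(cycles):
        pending = pending_jobs()                # e.g. query Torque/HTCondor for queued jobs
        idle = idle_vms()                       # worker VMs currently running no jobs
        wanted = -(-pending // jobs_per_vm)     # ceiling division: VMs needed for the backlog
        if wanted > 0 and not idle:
            boot_vm()                           # request one more worker from the cloud
        elif pending == 0 and idle:
            destroy_vm(idle[0])                 # return unused resources to the cloud
        time.sleep(poll_seconds)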
Design of control system for optical fiber drawing machine driven by double motor
NASA Astrophysics Data System (ADS)
Yu, Yue Chen; Bo, Yu Ming; Wang, Jun
2018-01-01
A microchannel plate (MCP) is a kind of large-area array electron multiplier with high two-dimensional spatial resolution, used in high-performance night-vision intensifiers. High-precision control of the fiber is a key technology in the MCP manufacturing process, and in this paper it was achieved by controlling an optical fiber drawing machine driven by two motors. First, using an STM32 chip, the servo motor drive and control circuit was designed to realize dual-motor synchronization. Second, a neural network PID control algorithm was designed to control the fiber diameter with high precision. Finally, hexagonal fiber was manufactured with this system, and the results show that the multifilament diameter accuracy is +/- 1.5 μm.
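For illustration, a plain discrete PID loop of the kind such a diameter controller builds on is sketched below. In the paper the gains are tuned by a neural network; here they are fixed constants, and all numbers (setpoint, gains, sample time) are assumed values:

class PID:
    """Discrete PID controller: returns a correction computed from the diameter error."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd, self.setpoint = kp, ki, kd, setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=0.8, ki=0.2, kd=0.05, setpoint=80.0)    # target diameter in micrometres
correction = controller.update(measurement=81.2, dt=0.01)   # e.g. adjust drawing speed by this amount
print(correction)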
An active UHF RFID localization system for fawn saving
NASA Astrophysics Data System (ADS)
Eberhardt, M.; Lehner, M.; Ascher, A.; Allwang, M.; Biebl, E. M.
2015-11-01
We present a localization concept for active UHF RFID transponders which enables mowing machine drivers to detect and localize marked fawns. The whole system design and experimental results with transponders located near the ground, in random orientations, in a meadow area are shown. The communication flow between reader and transponders is realized as a dynamic master-slave concept. Multiple marked fawns are localized by processing detected transponders sequentially. With an eight-channel receiver and an integrated calibration method, the direction of arrival can be estimated by measuring the phases of the transponder signals up to a range of 50 m in all directions. For further troubleshooting, array manifolds have been measured. An additional hand-held two-channel receiver allows a guided approach search without the mowing machine endangering the fawn.
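The phase-based direction finding can be illustrated with the textbook two-element relation below (a simplified sketch, not the eight-channel system's calibrated algorithm; the frequency and element spacing are assumed values in the European UHF RFID band):

import numpy as np

def doa_from_phase(phase_diff_rad, spacing_m, freq_hz, c=3.0e8):
    """Angle of arrival (degrees from broadside) from the phase difference between two elements."""
    wavelength = c / freq_hz
    s = phase_diff_rad * wavelength / (2.0 * np.pi * spacing_m)
    return np.degrees(np.arcsin(np.clip(s, -1.0, 1.0)))

print(doa_from_phase(phase_diff_rad=1.1, spacing_m=0.17, freq_hz=868e6))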
A flexible ultrasound transducer array with micro-machined bulk PZT.
Wang, Zhe; Xue, Qing-Tang; Chen, Yuan-Quan; Shu, Yi; Tian, He; Yang, Yi; Xie, Dan; Luo, Jian-Wen; Ren, Tian-Ling
2015-01-23
This paper proposes a novel flexible piezoelectric micro-machined ultrasound transducer based on PZT and a polyimide substrate. The transducer is made on the polyimide substrate and packaged with medical polydimethylsiloxane. Instead of etching the PZT ceramic, this paper proposes a method of placing diced PZT blocks into holes that are pre-etched in the polyimide. The device works in d31 mode and its electromechanical coupling factor is 22.25%. Its flexibility, good conformal contact with skin surfaces, and suitable resonant frequency make the device appropriate for heart imaging. The flexibly packaged ultrasound transducer also showed good waterproof performance after hundreds of ultrasonic electrical tests in water. It is a promising ultrasound transducer and can be an effective supplementary ultrasound imaging method in practical applications.
A study of the utilization of EREP data from the Wabash River basin
NASA Technical Reports Server (NTRS)
Silva, L. F. (Principal Investigator)
1976-01-01
The author has identified the following significant results. The study of the multispectral data sets indicates that better land use delineation using machine processing techniques can be obtained with data from multispectral scanners than with digitized S190A photographic sensor data. Results from the multiemulsion photographic data set were slightly better than from the multiband photographic data set. Comparison of the interim and filtered S191 data indicates that filtering somewhat improved the data for machine processing techniques. Results from the S191 X-5 detector array studied over a wintertime scene indicate that a good quality far-infrared channel can be useful. The S191 spectroradiometer study results indicate that the data from the S191 were usable, and it was possible to estimate the path radiance.
A high performance parallel algorithm for 1-D FFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, R.C.; Gustavson, F.G.; Zubair, M.
1994-12-31
In this paper the authors propose a parallel high performance FFT algorithm based on a multi-dimensional formulation. They use this to solve a commonly encountered FFT based kernel on a distributed memory parallel machine, the IBM scalable parallel system, SP1. The kernel requires a forward FFT computation of an input sequence, multiplication of the transformed data by a coefficient array, and finally an inverse FFT computation of the resultant data. They show that the multi-dimensional formulation helps in reducing the communication costs and also improves the single node performance by effectively utilizing the memory system of the node. They implemented this kernel on the IBM SP1 and observed a performance of 1.25 GFLOPS on a 64-node machine.
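A serial sketch of the multi-dimensional (four-step) formulation of a 1-D FFT is given below using NumPy; the real kernel distributes the two short-FFT phases and the transpose across SP1 nodes, which this toy version does not attempt:

import numpy as np

def fft_1d_via_2d(x, n1, n2):
    """Length n1*n2 FFT computed as column FFTs, twiddle multiply, row FFTs, transpose."""
    n = n1 * n2
    a = np.fft.fft(x.reshape(n1, n2), axis=0)                              # short FFTs along one axis
    a *= np.exp(-2j * np.pi * np.outer(np.arange(n1), np.arange(n2)) / n)  # twiddle factors
    a = np.fft.fft(a, axis=1)                                              # short FFTs along the other axis
    return a.T.reshape(n)                                                  # transpose yields the 1-D result

x = np.random.rand(1024)
assert np.allclose(fft_1d_via_2d(x, 32, 32), np.fft.fft(x))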
Stability Assessment of a System Comprising a Single Machine and Inverter with Scalable Ratings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Lin, Yashen; Gevorgian, Vahan
Synchronous machines have traditionally acted as the foundation of large-scale electrical infrastructures and their physical properties have formed the cornerstone of system operations. However, with the increased integration of distributed renewable resources and energy-storage technologies, there is a need to systematically acknowledge the dynamics of power-electronics inverters - the primary energy-conversion interface in such systems - in all aspects of modeling, analysis, and control of the bulk power network. In this paper, we assess the properties of coupled machine-inverter systems by studying an elementary system comprised of a synchronous generator, three-phase inverter, and a load. The inverter model is formulated such that its power rating can be scaled continuously across power levels while preserving its closed-loop response. Accordingly, the properties of the machine-inverter system can be assessed for varying ratios of machine-to-inverter power ratings. After linearizing the model and assessing its eigenvalues, we show that system stability is highly dependent on the inverter current controller and machine exciter, thus uncovering a key concern with mixed machine-inverter systems and motivating the need for next-generation grid-stabilizing inverter controls.
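The eigenvalue test used in such small-signal assessments can be sketched generically as follows; the two-state matrix is a made-up toy system, not the paper's machine-inverter model:

import numpy as np

def is_small_signal_stable(A):
    """x' = A x is asymptotically stable iff every eigenvalue has a negative real part."""
    eigenvalues = np.linalg.eigvals(A)
    return bool(np.all(eigenvalues.real < 0.0)), eigenvalues

A = np.array([[-0.5, 10.0],
              [-8.0, -2.0]])          # toy linearized dynamics
stable, eigs = is_small_signal_stable(A)
print(stable, eigs)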
NASA Astrophysics Data System (ADS)
Chang, Yao-Chung; Lai, Chin-Feng; Chuang, Chi-Cheng; Hou, Cheng-Yu
2018-04-01
With the progress of science and technology, more and more machines are adopted to make human life better and more convenient. When machines have been used for a long period of time and their components age, power consumption increases and the machines can easily overheat. This also wastes invisible resources. If Internet of Everything (IoE) technologies can be applied to enterprise information systems to monitor machine use time, energy can not only be used effectively but a safer living environment can also be created. To address this problem, a correlation prediction model is established: power consumption data are collected and converted into power eigenvalues. This study takes the power eigenvalue as the independent variable and use time as the dependent variable in order to establish a decline curve. Scoring and estimation modules are then employed to select the best power eigenvalue as the independent variable. To predict use time, the correlation between use time and the decline curve is discussed in order to improve the overall behavioural analysis and facilitate recognition of the use time of machines.
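A minimal regression sketch of the idea, fitting a linear ageing curve between a power eigenvalue and accumulated use time and then inverting it, is shown below; all numbers are hypothetical, and the paper's actual scoring and estimation modules are not reproduced:

import numpy as np

use_time_h = np.array([500.0, 1500.0, 3000.0, 5000.0, 8000.0, 12000.0])  # hypothetical records
power_eig = np.array([1.02, 1.05, 1.11, 1.18, 1.30, 1.47])               # matching power eigenvalues

slope, intercept = np.polyfit(use_time_h, power_eig, 1)   # eigenvalue ~ intercept + slope * use_time

def predict_use_time(eigenvalue):
    """Invert the fitted ageing curve to estimate accumulated use time from a measurement."""
    return (eigenvalue - intercept) / slope

print(predict_use_time(1.25))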
The application of machine learning techniques in the clinical drug therapy.
Meng, Huan-Yu; Jin, Wan-Lin; Yan, Cheng-Kai; Yang, Huan
2018-05-25
The development of a novel drug is an extremely complicated process that includes target identification, design and manufacture, and proper therapy with the novel drug, as well as drug dose selection, drug efficacy evaluation, and adverse drug reaction control. Because of limited resources, high costs, long durations, and low hit-to-lead ratios, and with the development of pharmacogenetics and computer technology, machine learning techniques have come to assist novel drug development and have gradually received more attention from researchers. According to current research, machine learning techniques are widely applied in the discovery of new drugs and novel drug targets, decisions surrounding proper therapy and drug dose, and the prediction of drug efficacy and adverse drug reactions. In this article, we discuss the history, workflow, and advantages and disadvantages of machine learning techniques in the processes mentioned above. Although the advantages of machine learning techniques are fairly obvious, their application is currently limited. With further research, the application of machine learning techniques in drug development could become much more widespread and could potentially be one of the major methods used in drug development. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Genetic programming applied to RFI mitigation in radio astronomy
NASA Astrophysics Data System (ADS)
Staats, K.
2016-12-01
Genetic Programming is a type of machine learning that employs a stochastic search of a solution space, genetic operators, a fitness function, and multiple generations of evolved programs to resolve a user-defined task, such as the classification of data. At the time of this research, the application of machine learning to radio astronomy was relatively new, with a limited number of publications on the subject. Genetic Programming had never been applied, and as such, was a novel approach to this challenging arena. Foundational to this body of research, the application Karoo GP was developed in the programming language Python, following the fundamentals of tree-based Genetic Programming described in "A Field Guide to Genetic Programming" by Poli, et al. Karoo GP was tasked with the classification of data points as signal or radio frequency interference (RFI) generated by instruments and machinery, which makes it challenging for astronomers to discern the desired targets. The training data were derived from the output of an observation run of the KAT-7 radio telescope array built by the South African Square Kilometre Array (SKA-SA). Karoo GP, kNN, and SVM were comparatively employed, the outcome of which provided noteworthy correlations between input parameters, the complexity of the evolved hypotheses, and the performance of raw data versus engineered features. This dissertation includes descriptions of novel approaches to GP, such as upper and lower limits to the size of syntax trees, an auto-scaling multiclass classifier, and a Numpy array element manager. In addition to the research conducted at the SKA-SA, it is described how Karoo GP was applied to fine-tuning parameters of a weather prediction model at the South African Astronomical Observatory (SAAO), to glitch classification at the Laser Interferometer Gravitational-wave Observatory (LIGO), and to astro-particle physics at The Ohio State University.
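To make the tree-based GP idea concrete, a toy expression-tree evaluator and a signal/RFI classification fitness function are sketched below (purely illustrative; this is not Karoo GP's code, and the features and labels are synthetic):

import numpy as np

def evaluate(tree, x):
    """Recursively evaluate an expression tree on a feature matrix x (rows are samples)."""
    if isinstance(tree, tuple):
        op, left, right = tree
        a, b = evaluate(left, x), evaluate(right, x)
        return {"add": a + b, "sub": a - b, "mul": a * b}[op]
    if isinstance(tree, str):                       # a feature terminal such as "f0"
        return x[:, int(tree[1:])]
    return np.full(x.shape[0], float(tree))         # a numeric constant terminal

def fitness(tree, x, labels):
    """Fraction of samples where the sign of the evolved expression matches the +1/-1 label."""
    predictions = np.where(evaluate(tree, x) > 0.0, 1, -1)
    return float(np.mean(predictions == labels))

x = np.random.rand(100, 3)
labels = np.where(x[:, 0] - x[:, 2] > 0.0, 1, -1)   # synthetic "signal vs RFI" labels
print(fitness(("sub", "f0", "f2"), x, labels))      # a hand-picked individual scores 1.0 here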
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, S. L.; Cinson, A. D.; Diaz, A. A.
2015-11-23
In the summer of 2009, Pacific Northwest National Laboratory (PNNL) staff traveled to the Electric Power Research Institute (EPRI) NDE Center in Charlotte, North Carolina, to conduct phased-array ultrasonic testing on a large bore, reactor coolant pump nozzle-to-safe-end mockup. This mockup was fabricated by FlawTech, Inc. and the configuration originated from the Port St. Lucie nuclear power plant. These plants are Combustion Engineering-designed reactors. This mockup consists of a carbon steel elbow with stainless steel cladding joined to a cast austenitic stainless steel (CASS) safe-end with a dissimilar metal weld and is owned by Florida Power & Light. The objective of this study, and the data acquisition exercise held at the EPRI NDE Center, were focused on evaluating the capabilities of advanced, low-frequency phased-array ultrasonic testing (PA-UT) examination techniques for detection and characterization of implanted circumferential flaws and machined reflectors in a thick-section CASS dissimilar metal weld component. This work was limited to PA-UT assessments using 500 kHz and 800 kHz probes on circumferential flaws only, and evaluated detection and characterization of these flaws and machined reflectors from the CASS safe-end side only. All data were obtained using spatially encoded, manual scanning techniques. The effects of such factors as line-scan versus raster-scan examination approaches were evaluated, and PA-UT detection and characterization performance as a function of inspection frequency/wavelength, were also assessed. A comparative assessment of the data is provided, using length-sizing root-mean-square-error and position/localization results (flaw start/stop information) as the key criteria for flaw characterization performance. In addition, flaw signal-to-noise ratio was identified as the key criterion for detection performance.
NASA Astrophysics Data System (ADS)
Kirkpatrick, B. A.; Currier, R. D.; Simoniello, C.
2016-02-01
The tagging and tracking of aquatic animals using acoustic telemetry hardware has traditionally been the purview of individual researchers who specialize in single species. Concerns over data privacy and unauthorized use of receiver arrays have prevented the construction of large-scale, multi-species, multi-institution, multi-researcher collaborative acoustic arrays. We have developed a toolset to build the new portal using the Flask microframework, the Python language, and Twitter Bootstrap. Initial feedback has been overwhelmingly positive. The privacy policy has been praised for its granularity: principal investigators can choose between three levels of privacy for all data and hardware: completely private (viewable only by the PI), visible to iTAG members, or visible to the general public. At the time of this writing iTAG is still in the beta stage, but the feedback received to date indicates that with the proper design and security features, and an iterative cycle of feedback from potential members, constructing a collaborative acoustic tracking network system is possible. Initial usage will be limited to the entry of and searching for 'orphan/mystery' tags, with the integration of historical array deployments and data following shortly thereafter. We have also been working with staff from the Ocean Tracking Network to allow for integration of the two systems. The database schema of iTAG is based on the marine metadata convention for acoustic telemetry. This should permit machine-to-machine data exchange between iTAG and OTN. The integration of animal telemetry data into the GCOOS portal will allow researchers to easily access physicochemical oceanography data, thus allowing for a more in-depth understanding of animal response and usage patterns.
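As a rough illustration of the kind of Flask endpoint such a portal might expose for orphan/mystery tag lookups (the route, fields, and example tag ID below are hypothetical and not iTAG's actual API):

from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory registry of tag IDs reported by receiver operators.
ORPHAN_TAGS = {"A69-1601-12345": {"species": "unknown", "detections": 4}}

@app.route("/tags/<tag_id>")
def lookup_tag(tag_id):
    """Return registered metadata for an acoustic tag ID, or report it as unknown."""
    record = ORPHAN_TAGS.get(tag_id)
    if record is None:
        return jsonify({"tag_id": tag_id, "status": "not registered"}), 404
    return jsonify({"tag_id": tag_id, "status": "orphan/mystery", **record})

if __name__ == "__main__":
    app.run()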
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennion, Kevin; Moreno, Gilberto
2015-09-29
Thermal management for electric machines (motors/generators) is important as the automotive industry continues to transition to more electrically dominant vehicle propulsion systems. Cooling of the electric machine(s) in some electric vehicle traction drive applications is accomplished by impinging automatic transmission fluid (ATF) jets onto the machine's copper windings. In this study, we provide the results of experiments characterizing the thermal performance of ATF jets on surfaces representative of windings, using Ford's Mercon LV ATF. Experiments were carried out at various ATF temperatures and jet velocities to quantify the influence of these parameters on heat transfer coefficients. Fluid temperatures were varied from 50 degrees C to 90 degrees C to encompass potential operating temperatures within an automotive transaxle environment. The jet nozzle velocities were varied from 0.5 to 10 m/s. The experimental ATF heat transfer coefficient results provided in this report are a useful resource for understanding factors that influence the performance of ATF-based cooling systems for electric machines.
NASA Technical Reports Server (NTRS)
Albyn, K.; Finckenor, M.
2006-01-01
The International Space Station (ISS) solar arrays utilize MD-944 diode tape with silicone pressure-sensitive adhesive to protect the underlying diodes and also provide a high-emittance surface. On-orbit, the silicone adhesive will be exposed and ultimately convert to a glass-like silicate due to atomic oxygen (AO). The current operational plan is to retract ISS solar array P6 and leave it stored under load for a long duration (6 mo or more). The exposed silicone adhesive must not cause the solar array to stick to itself or cause the solar array to fail during redeployment. The Environmental Effects Branch at Marshall Space Flight Center, under direction from the ISS Program Office Environments Team, performed simulated space environment exposures with 5-eV AO, near ultraviolet radiation and ionizing radiation. The exposed diode tape samples were put under preload and then the resulting blocking force was measured using a tensile test machine. Test results indicate that high-energy AO, ultraviolet radiation, and electron ionizing radiation exposure all reduce the blocking force for a silicone-to-silicone bond. AO exposure produces the most significant reduction in blocking force
Flexible Neural Electrode Array Based-on Porous Graphene for Cortical Microstimulation and Sensing
NASA Astrophysics Data System (ADS)
Lu, Yichen; Lyu, Hongming; Richardson, Andrew G.; Lucas, Timothy H.; Kuzum, Duygu
2016-09-01
Neural sensing and stimulation have been the backbone of neuroscience research, brain-machine interfaces and clinical neuromodulation therapies for decades. To-date, most of the neural stimulation systems have relied on sharp metal microelectrodes with poor electrochemical properties that induce extensive damage to the tissue and significantly degrade the long-term stability of implantable systems. Here, we demonstrate a flexible cortical microelectrode array based on porous graphene, which is capable of efficient electrophysiological sensing and stimulation from the brain surface, without penetrating into the tissue. Porous graphene electrodes show superior impedance and charge injection characteristics making them ideal for high efficiency cortical sensing and stimulation. They exhibit no physical delamination or degradation even after 1 million biphasic stimulation cycles, confirming high endurance. In in vivo experiments with rodents, same array is used to sense brain activity patterns with high spatio-temporal resolution and to control leg muscles with high-precision electrical stimulation from the cortical surface. Flexible porous graphene array offers a minimally invasive but high efficiency neuromodulation scheme with potential applications in cortical mapping, brain-computer interfaces, treatment of neurological disorders, where high resolution and simultaneous recording and stimulation of neural activity are crucial.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jayakumar, R.; Martovetsky, N.N.; Perfect, S.A.
A glass-polyimide insulation system has been proposed by the US team for use in the Central Solenoid (CS) coil of the International Thermonuclear Experimental Reactor (ITER) machine, and it is planned to use this system in the CS model coil inner module. The turn insulation will consist of 2 layers of combined prepreg and Kapton. Each layer is 50% overlapped with a butt wrap of prepreg and an overwrap of S glass. The coil layers will be separated by a glass-resin composite and impregnated in a VPI process. Small scale tests on the various components of the insulation are complete. It is planned to fabricate and test the insulation in a 4 x 4 insulated CS conductor array which will include the layer insulation and be vacuum impregnated. The conductor array will be subjected to 20 thermal cycles and 100000 mechanical load cycles in a liquid nitrogen environment. These loads are similar to those seen in the CS coil design. The insulation will be electrically tested at several stages during mechanical testing. This paper will describe the array configuration, fabrication process, instrumentation, testing configuration, and supporting analyses used in selecting the array and test configurations.
Chen, Yu-Liang; Jiang, Hong-Ren
2017-06-23
This article provides a simple method to prepare partially or fully coated metallic particles and to perform the rapid fabrication of electrode arrays, which can facilitate electrical experiments in microfluidic devices. Janus particles are asymmetric particles that have two different surface properties on their two sides. To prepare Janus particles, a monolayer of silica particles is prepared by a drying process. Gold (Au) is deposited on one side of each particle using a sputtering device. The fully coated metallic particles are completed after a second coating process. To analyze the electrical surface properties of Janus particles, alternating current (AC) electrokinetic measurements, such as dielectrophoresis (DEP) and electrorotation (EROT), which require specifically designed electrode arrays in the experimental device, are performed. However, traditional methods to fabricate electrode arrays, such as the photolithographic technique, require a series of complicated procedures. Here, we introduce a flexible method to fabricate a designed electrode array. An indium tin oxide (ITO) glass is patterned by a fiber laser marking machine (1,064 nm, 20 W, 90 to 120 ns pulse width, and 20 to 80 kHz pulse repetition frequency) to create a four-phase electrode array. To generate the four-phase electric field, the electrodes are connected to a 2-channel function generator and to two inverters. The phase shift between adjacent electrodes is set at either 90° (for EROT) or 180° (for DEP). Representative results of AC electrokinetic measurements with a four-phase ITO electrode array are presented.
High-density arrays of x-ray microcalorimeters for Constellation-X
NASA Astrophysics Data System (ADS)
Kilbourne, C. A.; Bandler, S. R.; Chervenak, J. A.; Figueroa-Feliciano, E.; Finkbeiner, F. M.; Iyomoto, N.; Kelley, R. L.; Porter, F. S.; Saab, T.; Sadleir, J.
2005-12-01
We have been developing x-ray microcalorimeters for the Constellation-X mission. Devices based on superconducting transition edge sensors (TES) have demonstrated the potential to meet the Constellation-X requirements for spectral resolution, speed, and array scale (> 1000 pixels) in a close-packed geometry. In our part of the GSFC/NIST collaboration on this technology development, we have been concentrating on the fabrication of arrays of pixels suitable for the Constellation-X reference configuration. We have fabricated 8x8 arrays with 0.25-mm pixels arranged with 92% fill factor. The pixels are based on Mo/Au TES and Bi/Cu absorbers. We have achieved a resolution of 4.9 eV FWHM at 6 keV in such devices. Studies of the thermal transport in our Bi/Cu absorbers have shown that, while there is room for improvement, for 0.25 mm pixels our existing absorber design is adequate to avoid line-broadening from position dependence caused by thermal diffusion. In order to push closer to the 4-eV requirement and 2-eV goal at 6 keV, we are refining the design of the TES and the interface to the absorber. For the 32x32 arrays ultimately needed for Constellation-X, signal lead routing and heatsinking will drive the design. We have had early successes with experiments in electroplating electrical vias and thermal busses into micro-machined features in silicon substrates. The next steps will be fabricating arrays that have all of the essential features of the required flight design, testing, and then engineering a prototype array for optimum performance.
USDA-ARS?s Scientific Manuscript database
High-throughput genotyping arrays provide a standardized resource for crop research communities that are useful for a breadth of applications including high-density genetic mapping, genome-wide association studies (GWAS), genomic selection (GS), candidate marker and quantitative trait loci (QTL) ide...
USDA-ARS?s Scientific Manuscript database
Carrot is one of the most economically important vegetables worldwide, however, genetic and genomic resources supporting carrot breeding remain limited. We developed a Diversity Arrays Technology (DArT) platform for wild and cultivated carrot and used it to investigate genetic diversity and to devel...
Radar Resource Management in a Dense Target Environment
2014-03-01
problem faced by networked MFRs. While relaxing our assumptions concerning information gain presents numerous challenges worth exploring, future research...linear programming MFR multifunction phased array radar MILP mixed integer linear programming NATO North Atlantic Treaty Organization PDF probability...1: INTRODUCTION Multifunction phased array radars (MFRs) are capable of performing various tasks in rapid succession. The performance of target search
NASA Astrophysics Data System (ADS)
Yokoyama, Yoshiaki; Kim, Minseok; Arai, Hiroyuki
At present, when using space-time processing techniques with multiple antennas for mobile radio communication, real-time weight adaptation is necessary. Due to the progress of integrated circuit technology, dedicated processor implementation with an ASIC or FPGA can be employed to realize various wireless applications. This paper presents a resource and performance evaluation of a QRD-RLS systolic array processor based on a fixed-point CORDIC algorithm implemented with an FPGA. In this paper, to save hardware resources, we propose a shared architecture for the complex CORDIC processor. The required precision of the internal calculations, the circuit area as a function of the number of antenna elements and wordlength, and the processing speed are evaluated. The resource estimation shows a feasible processor configuration with a current FPGA on the market. Computer simulations assuming a fading channel show fast convergence with a finite number of training symbols. The proposed architecture has also been implemented, and its operation was verified by beamforming evaluation through a radio propagation experiment.
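The Givens rotations that the CORDIC stages realize in hardware can be sketched in floating point as a real-valued QRD-RLS update (a didactic sketch, not the fixed-point systolic implementation evaluated in the paper; the antenna count and forgetting factor are arbitrary):

import numpy as np

def qrd_rls_update(R, p, x, d, lam=0.99):
    """Rotate one training snapshot (x, desired d) into the triangular array R and vector p."""
    R, p, x, d = np.sqrt(lam) * R, np.sqrt(lam) * p, x.astype(float).copy(), float(d)
    for i in range(len(x)):
        r = np.hypot(R[i, i], x[i])
        if r == 0.0:
            continue
        c, s = R[i, i] / r, x[i] / r               # the rotation each CORDIC stage approximates
        R_row, x_old = R[i, :].copy(), x.copy()
        R[i, :] = c * R_row + s * x_old
        x = -s * R_row + c * x_old
        p[i], d = c * p[i] + s * d, -s * p[i] + c * d
    return R, p

rng = np.random.default_rng(1)
n = 4                                              # antenna elements
R, p, w_true = np.zeros((n, n)), np.zeros(n), rng.normal(size=n)
for _ in range(200):
    snapshot = rng.normal(size=n)
    R, p = qrd_rls_update(R, p, snapshot, snapshot @ w_true)
weights = np.linalg.solve(R, p)                    # back-substitution recovers the beamforming weights
print(np.allclose(weights, w_true, atol=1e-6))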
Obligatory encoding of task-irrelevant features depletes working memory resources.
Marshall, Louise; Bays, Paul M
2013-02-18
Selective attention is often considered the "gateway" to visual working memory (VWM). However, the extent to which we can voluntarily control which of an object's features enter memory remains subject to debate. Recent research has converged on the concept of VWM as a limited commodity distributed between elements of a visual scene. Consequently, as memory load increases, the fidelity with which each visual feature is stored decreases. Here we used changes in recall precision to probe whether task-irrelevant features were encoded into VWM when individuals were asked to store specific feature dimensions. Recall precision for both color and orientation was significantly enhanced when task-irrelevant features were removed, but knowledge of which features would be probed provided no advantage over having to memorize both features of all items. Next, we assessed the effect an interpolated orientation-or color-matching task had on the resolution with which orientations in a memory array were stored. We found that the presence of orientation information in the second array disrupted memory of the first array. The cost to recall precision was identical whether the interfering features had to be remembered, attended to, or could be ignored. Therefore, it appears that storing, or merely attending to, one feature of an object is sufficient to promote automatic encoding of all its features, depleting VWM resources. However, the precision cost was abolished when the match task preceded the memory array. So, while encoding is automatic, maintenance is voluntary, allowing resources to be reallocated to store new visual information.
Development of a low cost high precision three-layer 3D artificial compound eye.
Zhang, Hao; Li, Lei; McCray, David L; Scheiding, Sebastian; Naples, Neil J; Gebhardt, Andreas; Risse, Stefan; Eberhardt, Ramona; Tünnermann, Andreas; Yi, Allen Y
2013-09-23
Artificial compound eyes are typically designed on planar substrates due to the limits of current imaging devices and available manufacturing processes. In this study, a high precision, low cost, three-layer 3D artificial compound eye consisting of a 3D microlens array, a freeform lens array, and a field lens array was constructed to mimic an apposition compound eye on a curved substrate. The freeform microlens array was manufactured on a curved substrate to alter incident light beams and steer their respective images onto a flat image plane. The optical design was performed using ZEMAX. The optical simulation shows that the artificial compound eye can form multiple images with aberrations below 11 μm; adequate for many imaging applications. Both the freeform lens array and the field lens array were manufactured using microinjection molding process to reduce cost. Aluminum mold inserts were diamond machined by the slow tool servo method. The performance of the compound eye was tested using a home-built optical setup. The images captured demonstrate that the proposed structures can successfully steer images from a curved surface onto a planar photoreceptor. Experimental results show that the compound eye in this research has a field of view of 87°. In addition, images formed by multiple channels were found to be evenly distributed on the flat photoreceptor. Additionally, overlapping views of the adjacent channels allow higher resolution images to be re-constructed from multiple 3D images taken simultaneously.
Could Crop Height Impact the Wind Resource at Agriculturally Productive Wind Farm Sites?
NASA Astrophysics Data System (ADS)
Vanderwende, B. J.; Lundquist, J. K.
2013-12-01
The agriculture-intensive United States Midwest and Great Plains regions feature some of the best wind resources in the nation. Collocation of cropland and wind turbines introduces complex meteorological interactions that could affect both agriculture and wind power production. Crop management practices may modify the wind resource through alterations of land-surface properties. In this study, we used the Weather Research and Forecasting (WRF) model to estimate the impact of crop height variations on the wind resource in the presence of a large turbine array. We parameterized a hypothetical array of 121 1.8 MW turbines at the site of the 2011 Crop/Wind-energy Experiment field campaign using the WRF wind farm parameterization. We estimated the impact of crop choices on power production by altering the aerodynamic roughness length in a region approximately 65 times larger than that occupied by the turbine array. Roughness lengths of 10 cm and 25 cm represent a mature soy crop and a mature corn crop respectively. Results suggest that the presence of the mature corn crop reduces hub-height wind speeds and increases rotor-layer wind shear, even in the presence of a large wind farm which itself modifies the flow. During the night, the influence of the surface was dependent on the boundary layer stability, with strong stability inhibiting the surface drag from modifying the wind resource aloft. Further investigation is required to determine the optimal size, shape, and crop height of the roughness modification to maximize the economic benefit and minimize the cost of such crop management practices.
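The leading-order surface effect can be illustrated with the neutral logarithmic wind profile (a back-of-the-envelope sketch only; it ignores stability and the wind-farm wake interactions captured by the WRF simulations, and the aloft reference speed is an assumed value):

import numpy as np

def log_wind_speed(u_ref, z_ref, z, z0):
    """Neutral log-law wind speed at height z, scaled from a reference level z_ref."""
    return u_ref * np.log(z / z0) / np.log(z_ref / z0)

u200 = 9.0                                              # assumed wind speed at 200 m (m/s)
for crop, z0 in [("soy", 0.10), ("corn", 0.25)]:        # roughness lengths used in the study
    hub = log_wind_speed(u200, 200.0, 80.0, z0)         # 80 m hub height
    shear = log_wind_speed(u200, 200.0, 120.0, z0) / log_wind_speed(u200, 200.0, 40.0, z0)
    print(crop, round(hub, 2), round(shear, 3))         # corn: lower hub speed, more rotor-layer shear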
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Syam; Sitha
2015-06-15
Purpose: To determine the source dwell positions of HDR brachytherapy using a 2D 729 ion chamber array. Methods: A Nucletron microSelectron HDR unit and a PTW 2D array were used for the study. Different dwell positions were assigned in the HDR machine. Rigid interstitial needles and a vaginal applicator were positioned on the 2D array. The 2D array was exposed for these programmed dwell positions. The positional accuracy of the source was analyzed after the irradiation of the 2D array. This was repeated for different dwell positions. Different test plans were transferred from the Oncentra planning system and irradiated with the same applicator position on the 2D array. The results were analyzed using an in-house developed Excel program. Results: Assigned dwell positions versus the corresponding detector responses were analyzed. The results show very good agreement with film measurements. No significant variation was found between the planned and measured dwell positions. The average dose response with the 2D array between the planned and nearby dwell positions was found to be 0.0804 Gy for the vaginal cylinder applicator and 0.1234 Gy for the interstitial rigid needles. The standard deviation of the doses for all measured dwell positions of the interstitial rigid needles was found to be 0.33 for 1 cm spaced dwell positions and 0.37 for 2 cm spaced dwell positions. For the intracavitary vaginal applicator, it was 0.21 for 1 cm spaced dwell positions and 0.06 for 2 cm spaced dwell positions. Intracavitary test plans reproduced on the 2D array with the same applicator positions showed dose distributions consistent with the TPS plans. Conclusion: The 2D array is a good tool for determining the dwell positions of HDR brachytherapy. With the in-house developed Excel program, the analysis is easy and accurate. The traditional film-based analysis can be replaced by this method, as film is more costly.
AstroML: Python-powered Machine Learning for Astronomy
NASA Astrophysics Data System (ADS)
Vander Plas, Jake; Connolly, A. J.; Ivezic, Z.
2014-01-01
As astronomical data sets grow in size and complexity, automated machine learning and data mining methods are becoming an increasingly fundamental component of research in the field. The astroML project (http://astroML.org) provides a common repository for practical examples of the data mining and machine learning tools used and developed by astronomical researchers, written in Python. The astroML module contains a host of general-purpose data analysis and machine learning routines, loaders for openly-available astronomical datasets, and fast implementations of specific computational methods often used in astronomy and astrophysics. The associated website features hundreds of examples of these routines being used for analysis of real astronomical datasets, while the associated textbook provides a curriculum resource for graduate-level courses focusing on practical statistics, machine learning, and data mining approaches within Astronomical research. This poster will highlight several of the more powerful and unique examples of analysis performed with astroML, all of which can be reproduced in their entirety on any computer with the proper packages installed.
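As a loose illustration of the kind of general-purpose classification routine documented in a resource like astroML, the sketch below uses only scikit-learn and NumPy on synthetic two-color data; the actual astroML dataset loaders and worked examples on the project website differ, and none of the names here come from that package.

```python
# Hedged sketch: a typical machine-learning workflow on synthetic
# "color-color" data, using scikit-learn only (no astroML imports).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Two synthetic object classes separated in a 2D color space (illustrative only).
stars = rng.normal(loc=[0.6, 0.2], scale=0.15, size=(500, 2))
quasars = rng.normal(loc=[0.2, 0.5], scale=0.15, size=(500, 2))
X = np.vstack([stars, quasars])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GaussianNB().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```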
Thruput Analysis of AFLC CYBER 73 Computers.
1981-12-01
Ref 2:14). This decision permitted a fast conversion effort with minimum programmer/analyst experience (Ref 34). Recently, as the conversion effort... converted (Ref 1:2). Moreover, many of the large data-file and machine-time-consuming systems were not included in the earlier... by LMT personnel revealed that during certain periods, i.e., 0000-0800, the machine is normally reserved for the large resource-consuming programs
NASA Astrophysics Data System (ADS)
Eilbert, Richard F.; Krug, Kristoph D.
1993-04-01
The Vivid Rapid Explosives Detection System is a true dual energy x-ray machine employing precision x-ray data acquisition in combination with unique algorithms and massive computation capability. Data from the system's 960 detectors is digitally stored and processed by powerful supermicro-computers organized as an expandable array of parallel processors. The algorithms operate on the dual energy attenuation image data to recognize and define objects in the milieu of the baggage contents. Each object is then systematically examined for a match to a specific effective atomic number, density, and mass threshold. Material properties are determined by comparing the relative attenuations of the 75 kVp and 150 kVp beams and electronically separating the object from its local background. Other heuristic algorithms search for specific configurations and provide additional information. The machine automatically detects explosive materials and identifies bomb components in luggage with high specificity and throughput. X-ray dose is comparable to that of current airport x-ray machines. The machine is also configured to find heroin, cocaine, and US currency by selecting appropriate settings on-site. Since January 1992, production units have been operationally deployed at U.S. and European airports for improved screening of checked baggage.
Fiber alignment apparatus and method
Kravitz, Stanley H.; Warren, Mial Evans; Snipes, Jr., Morris Burton; Armendariz, Marcelino Guadalupe; Word, V., James Cole
1997-01-01
A fiber alignment apparatus includes a micro-machined nickel spring that captures and locks arrays of single mode fibers into position. The design consists of a movable nickel leaf shaped spring and a fixed pocket where fibers are held. The fiber is slid between the spring and a fixed block, which tensions the spring. When the fiber reaches the pocket, it automatically falls into the pocket and is held by the pressure of the leaf spring.
Fiber alignment apparatus and method
Kravitz, S.H.; Warren, M.E.; Snipes, M.B. Jr.; Armendariz, M.G.; Word, J.C. V
1997-08-19
A fiber alignment apparatus includes a micro-machined nickel spring that captures and locks arrays of single mode fibers into position. The design consists of a movable nickel leaf shaped spring and a fixed pocket where fibers are held. The fiber is slid between the spring and a fixed block, which tensions the spring. When the fiber reaches the pocket, it automatically falls into the pocket and is held by the pressure of the leaf spring. 8 figs.
2015-01-01
generously offering the use of the printed circuit board (PCB) milling machine at the Royal Military College of Canada (RMC) as well as other lab... military research for over a century. However, continual technological advances in wireless communications along with widespread proliferation of...several decades, motivated initially by military applications. Over the past 10–15 years however, this topic has received widespread interest due in
SLURM: Simple Linux Utility for Resource Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M; Dunlap, C; Garlick, J
2002-07-08
Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, scheduling and stream copy modules. The design also includes a scalable, general-purpose communication infrastructure. This paper presents an overview of the SLURM architecture and functionality.
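For orientation, here is a minimal sketch of how a user typically drives a SLURM cluster from Python by shelling out to the standard command-line tools (sinfo, sbatch, squeue). It assumes a working SLURM installation; the job script contents and file name are invented, not taken from the paper.

```python
# Hedged sketch: interacting with SLURM's standard CLI tools from Python.
# Assumes SLURM is installed and the user may submit jobs.
import getpass
import subprocess

def run(cmd):
    """Run a command and return its stdout as text."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

print(run(["sinfo"]))                              # machine / partition status

job_script = """#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --ntasks=1
hostname
"""
with open("demo.sbatch", "w") as f:               # invented script name
    f.write(job_script)

print(run(["sbatch", "demo.sbatch"]))              # job submission
print(run(["squeue", "-u", getpass.getuser()]))    # job management / queue view
```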
NASA Astrophysics Data System (ADS)
Cunningham, Sally Jo
The current crop of digital libraries for the computing community is strongly grounded in the conventional library paradigm: these libraries provide indexes to support searching of collections of research papers. As such, they are relatively impoverished; the present computing digital libraries omit many of the documents and resources that are currently available to computing researchers, and offer few browsing structures. These computing digital libraries were built 'top down': the resources and collection contents are forced to fit an existing digital library architecture. A 'bottom up' approach to digital library development would begin with an investigation of a community's information needs and available documents, and then design a library to organize those documents in such a way as to fulfill the community's needs. The 'home grown', informal information resources developed by and for the machine learning community are examined as a case study, to determine the types of information and document organizations 'native' to this group of researchers. The insights gained in this type of case study can be used to inform construction of a digital library tailored to this community.
Teplitzky, Benjamin A; Zitella, Laura M; Xiao, YiZi; Johnson, Matthew D
2016-01-01
Deep brain stimulation (DBS) leads with radially distributed electrodes have potential to improve clinical outcomes through more selective targeting of pathways and networks within the brain. However, increasing the number of electrodes on clinical DBS leads by replacing conventional cylindrical shell electrodes with radially distributed electrodes raises practical design and stimulation programming challenges. We used computational modeling to investigate: (1) how the number of radial electrodes impacts the ability to steer, shift, and sculpt a region of neural activation (RoA), and (2) which RoA features are best used in combination with machine learning classifiers to predict programming settings to target a particular area near the lead. Stimulation configurations were modeled using 27 lead designs with one to nine radially distributed electrodes. The computational modeling framework consisted of a three-dimensional finite element tissue conductance model in combination with a multi-compartment biophysical axon model. For each lead design, two-dimensional threshold-dependent RoAs were calculated from the computational modeling results. The models showed more radial electrodes enabled finer resolution RoA steering; however, stimulation amplitude, and therefore spatial extent of the RoA, was limited by charge injection and charge storage capacity constraints due to the small electrode surface area for leads with more than four radially distributed electrodes. RoA shifting resolution was improved by the addition of radial electrodes when using uniform multi-cathode stimulation, but non-uniform multi-cathode stimulation produced equivalent or better resolution shifting without increasing the number of radial electrodes. Robust machine learning classification of 15 monopolar stimulation configurations was achieved using as few as three geometric features describing a RoA. The results of this study indicate that, for a clinical-scale DBS lead, more than four radial electrodes minimally improved the ability to steer, shift, and sculpt axonal activation around a DBS lead, and a simple feature set consisting of the RoA center of mass and orientation enabled robust machine learning classification. These results provide important design constraints for future development of high-density DBS arrays.
Teplitzky, Benjamin A.; Zitella, Laura M.; Xiao, YiZi; Johnson, Matthew D.
2016-01-01
Deep brain stimulation (DBS) leads with radially distributed electrodes have potential to improve clinical outcomes through more selective targeting of pathways and networks within the brain. However, increasing the number of electrodes on clinical DBS leads by replacing conventional cylindrical shell electrodes with radially distributed electrodes raises practical design and stimulation programming challenges. We used computational modeling to investigate: (1) how the number of radial electrodes impacts the ability to steer, shift, and sculpt a region of neural activation (RoA), and (2) which RoA features are best used in combination with machine learning classifiers to predict programming settings to target a particular area near the lead. Stimulation configurations were modeled using 27 lead designs with one to nine radially distributed electrodes. The computational modeling framework consisted of a three-dimensional finite element tissue conductance model in combination with a multi-compartment biophysical axon model. For each lead design, two-dimensional threshold-dependent RoAs were calculated from the computational modeling results. The models showed more radial electrodes enabled finer resolution RoA steering; however, stimulation amplitude, and therefore spatial extent of the RoA, was limited by charge injection and charge storage capacity constraints due to the small electrode surface area for leads with more than four radially distributed electrodes. RoA shifting resolution was improved by the addition of radial electrodes when using uniform multi-cathode stimulation, but non-uniform multi-cathode stimulation produced equivalent or better resolution shifting without increasing the number of radial electrodes. Robust machine learning classification of 15 monopolar stimulation configurations was achieved using as few as three geometric features describing a RoA. The results of this study indicate that, for a clinical-scale DBS lead, more than four radial electrodes minimally improved the ability to steer, shift, and sculpt axonal activation around a DBS lead, and a simple feature set consisting of the RoA center of mass and orientation enabled robust machine learning classification. These results provide important design constraints for future development of high-density DBS arrays. PMID:27375470
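As a hedged sketch of the classification step described above (three geometric RoA features predicting which monopolar configuration produced an activation region), the example below uses synthetic features and a generic scikit-learn classifier rather than the authors' finite element and axon models; the feature distributions are invented.

```python
# Hedged sketch: classify stimulation configurations from three RoA features
# (center-of-mass x, center-of-mass y, orientation). All data are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_configs, n_samples = 15, 40           # 15 monopolar configurations, as in the study
X, y = [], []
for cfg in range(n_configs):
    angle = 2 * np.pi * cfg / n_configs
    center = np.array([np.cos(angle), np.sin(angle)])   # idealized RoA center of mass
    feats = np.column_stack([
        rng.normal(center[0], 0.1, n_samples),           # CoM x
        rng.normal(center[1], 0.1, n_samples),           # CoM y
        rng.normal(angle, 0.2, n_samples),               # RoA orientation
    ])
    X.append(feats)
    y.append(np.full(n_samples, cfg))
X, y = np.vstack(X), np.concatenate(y)

scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```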
A Parallel Vector Machine for the PM Programming Language
NASA Astrophysics Data System (ADS)
Bellerby, Tim
2016-04-01
PM is a new programming language which aims to make the writing of computational geoscience models on parallel hardware accessible to scientists who are not themselves expert parallel programmers. It is based around the concept of communicating operators: language constructs that enable variables local to a single invocation of a parallelised loop to be viewed as if they were arrays spanning the entire loop domain. This mechanism enables different loop invocations (which may or may not be executing on different processors) to exchange information in a manner that extends the successful Communicating Sequential Processes idiom from single messages to collective communication. Communicating operators avoid the additional synchronisation mechanisms, such as atomic variables, required when programming using the Partitioned Global Address Space (PGAS) paradigm. Using a single loop invocation as the fundamental unit of concurrency enables PM to uniformly represent different levels of parallelism from vector operations through shared memory systems to distributed grids. This paper describes an implementation of PM based on a vectorised virtual machine. On a single processor node, concurrent operations are implemented using masked vector operations. Virtual machine instructions operate on vectors of values and may be unmasked, masked using a Boolean field, or masked using an array of active vector cell locations. Conditional structures (such as if-then-else or while statement implementations) calculate and apply masks to the operations they control. A shift in mask representation from Boolean to location-list occurs when active locations become sufficiently sparse. Parallel loops unfold data structures (or vectors of data structures for nested loops) into vectors of values that may additionally be distributed over multiple computational nodes and then split into micro-threads compatible with the size of the local cache. Inter-node communication is accomplished using standard OpenMP and MPI. Performance analyses of the PM vector machine, demonstrating its scaling properties with respect to domain size and the number of processor nodes will be presented for a range of hardware configurations. The PM software and language definition are being made available under unrestrictive MIT and Creative Commons Attribution licenses respectively: www.pm-lang.org.
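The masking scheme described above can be pictured with a small NumPy sketch. It is illustrative only: it mirrors the idea of Boolean masks versus lists of active vector cell locations, not PM's actual virtual-machine instruction set.

```python
# Hedged sketch: conditional execution over a vector of loop invocations,
# first with a Boolean mask, then with a list of active cell locations.
import numpy as np

values = np.arange(16, dtype=float)

# if-then-else style control: compute a mask and apply the operation only there
mask = values % 3 == 0                 # Boolean mask (dense representation)
values[mask] = np.sqrt(values[mask])   # masked vector operation

# when active cells become sparse, switch to a location-list representation
active = np.flatnonzero(values > 10)   # array of active vector cell indices
values[active] *= 0.5                  # same operation, index-list form

print(values)
```

The switch from the dense Boolean form to the index-list form corresponds to the representation shift the abstract describes when active locations become sufficiently sparse.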
A Study of Electrochemical Machining of Ti-6Al-4V in NaNO3 solution
NASA Astrophysics Data System (ADS)
Li, Hansong; Gao, Chuanping; Wang, Guoqian; Qu, Ningsong; Zhu, Di
2016-10-01
The titanium alloy Ti-6Al-4V is used in many industries including aviation, automobile manufacturing, and medical equipment, because of its low density, extraordinary corrosion resistance and high specific strength. Electrochemical machining (ECM) is a non-traditional machining method that can be applied to all kinds of metallic materials regardless of their mechanical properties. It is widely applied to the machining of Ti-6Al-4V components, which usually takes place in a multicomponent electrolyte solution. In this study, a 10% NaNO3 solution was used to make multiple holes in Ti-6Al-4V sheets by through-mask electrochemical machining (TMECM). The polarization curve and current efficiency curve of this alloy were measured to understand the electrochemical characteristics of Ti-6Al-4V in a 10% NaNO3 solution. The measurements show that in a 10% NaNO3 solution, when the current density was above 6.56 A·cm-2, the current efficiency exceeded 100%. According to the polarization curve and current efficiency curve, an orthogonal TMECM experiment was conducted on Ti-6Al-4V. The experimental results suggest that with appropriate process parameters, high-quality holes can be obtained in a 10% NaNO3 solution. Using the optimized process parameters, an array of micro-holes with apertures of 2.52 mm to 2.57 mm and a maximum roundness of 9 μm was produced using TMECM.
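For readers unfamiliar with how current efficiency is obtained, here is a hedged sketch of the underlying Faraday's-law calculation. The dissolution valence n = 4 for titanium and all of the measured quantities are assumptions chosen for illustration; only the formula itself is standard. Efficiencies above 100%, as reported in the abstract, indicate removal beyond pure anodic dissolution.

```python
# Hedged sketch: current efficiency from Faraday's law for Ti dissolution.
# Example numbers are assumed; only the formula is standard.
F = 96485.0          # Faraday constant, C/mol
M_TI = 47.87         # molar mass of titanium, g/mol
n = 4                # assumed dissolution valence of Ti

I = 2.0              # current, A (assumed)
t = 60.0             # machining time, s (assumed)
measured_mass_loss = 0.0160   # g (assumed measurement)

theoretical_mass = M_TI * I * t / (n * F)      # Faraday prediction, g
efficiency = 100.0 * measured_mass_loss / theoretical_mass
print(f"theoretical removal: {theoretical_mass * 1000:.2f} mg, "
      f"current efficiency: {efficiency:.0f}%")
```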
Fast 3D NIR systems for facial measurement and lip-reading
NASA Astrophysics Data System (ADS)
Brahm, Anika; Ramm, Roland; Heist, Stefan; Rulff, Christian; Kühmstedt, Peter; Notni, Gunther
2017-05-01
Structured-light projection is a well-established optical method for the non-destructive contactless three-dimensional (3D) measurement of object surfaces. In particular, there is a great demand for accurate and fast 3D scans of human faces or facial regions of interest in medicine, safety, face modeling, games, virtual life, or entertainment. New developments of facial expression detection and machine lip-reading can be used for communication tasks, future machine control, or human-machine interactions. In such cases, 3D information may offer more detailed information than 2D images which can help to increase the power of current facial analysis algorithms. In this contribution, we present new 3D sensor technologies based on three different methods of near-infrared projection technologies in combination with a stereo vision setup of two cameras. We explain the optical principles of an NIR GOBO projector, an array projector and a modified multi-aperture projection method and compare their performance parameters to each other. Further, we show some experimental measurement results of applications where we realized fast, accurate, and irritation-free measurements of human faces.
[Artificial intelligence to assist clinical diagnosis in medicine].
Lugo-Reyes, Saúl Oswaldo; Maldonado-Colín, Guadalupe; Murata, Chiharu
2014-01-01
Medicine is one of the fields of knowledge that would most benefit from a closer interaction with Computer studies and Mathematics by optimizing complex, imperfect processes such as differential diagnosis; this is the domain of Machine Learning, a branch of Artificial Intelligence that builds and studies systems capable of learning from a set of training data, in order to optimize classification and prediction processes. In Mexico during the last few years, progress has been made on the implementation of electronic clinical records, so that the National Institutes of Health have already accumulated a wealth of stored data. For those data to become knowledge, they need to be processed and analyzed through complex statistical methods, as is already being done in other countries, employing: case-based reasoning, artificial neural networks, Bayesian classifiers, multivariate logistic regression, or support vector machines, among other methodologies; to assist the clinical diagnosis of acute appendicitis, breast cancer and chronic liver disease, among a wide array of maladies. In this review we sift through concepts, antecedents, current examples and methodologies of machine learning-assisted clinical diagnosis.
Space satellite power system. [conversion of solar energy by photovoltaic solar cell arrays
NASA Technical Reports Server (NTRS)
Glaser, P. E.
1974-01-01
The concept of a satellite solar power station was studied. It is shown that it offers the potential to meet a significant portion of future energy needs, is pollution free, and is sparing of irreplaceable earth resources. Solar energy is converted by photovoltaic solar cell arrays to dc energy which in turn is converted into microwave energy in a large active phased array. The microwave energy is beamed to earth with little attenuation and is converted back to dc energy on the earth. Economic factors are considered.
NASA Astrophysics Data System (ADS)
Traversa, Fabio L.; Di Ventra, Massimiliano
2017-02-01
We introduce a class of digital machines, which we name Digital Memcomputing Machines (DMMs), able to solve a wide range of problems including Non-deterministic Polynomial (NP) ones with polynomial resources (in time, space, and energy). An abstract DMM with this power must satisfy a set of compatible mathematical constraints underlying its practical realization. We prove this by making a connection with dynamical systems theory. This leads us to a set of physical constraints for poly-resource resolvability. Once the mathematical requirements have been assessed, we propose a practical scheme to solve the above class of problems based on the novel concept of self-organizing logic gates and circuits (SOLCs). These are logic gates and circuits able to accept input signals from any terminal, without distinction between conventional input and output terminals. They can solve Boolean problems by self-organizing into their solution. They can be fabricated with circuit elements with memory (such as memristors) and/or with standard MOS technology. Using tools of functional analysis, we prove mathematically the following constraints for poly-resource resolvability: (i) SOLCs possess a global attractor; (ii) their only equilibrium points are the solutions of the problems to solve; (iii) the system converges exponentially fast to the solutions; (iv) the equilibrium convergence rate scales at most polynomially with input size. We finally provide arguments that periodic orbits and strange attractors cannot coexist with equilibria. As examples, we show how to solve prime factorization and the search version of the NP-complete subset-sum problem. Since DMMs map integers into integers, they are robust against noise and hence scalable. We finally discuss the implications of the DMM realization through SOLCs for the NP = P question related to the constraints of poly-resource resolvability.
A wide array of effective water quality management and protection tools have been developed for urban environments, but implementation is hindered by a shortage of technology transfer opportunities. This National Conference on Tools for Urban Water Resource Management and Protec...
Maps | Geospatial Data Science | NREL
NREL develops an array of maps to support renewable energy development and generation: resource maps of the United States by county; geothermal maps of geothermal power plants, resources for enhanced geothermal systems, and hydrothermal sites in the United States; and hydrogen maps of hydrogen production
Lebow, Mahria
2014-04-01
The Arctic Health web site is a portal to Arctic-specific, health related content. The site provides expertly organized and annotated resources pertinent to northern peoples and places, including health information, research publications and environmental information. This site also features the Arctic Health Publications Database, which indexes an array of Arctic-related resources.
The Language Grid: supporting intercultural collaboration
NASA Astrophysics Data System (ADS)
Ishida, T.
2018-03-01
A variety of language resources already exist online. Unfortunately, since many language resources have usage restrictions, it is virtually impossible for each user to negotiate with every language resource provider when combining several resources to achieve the intended purpose. To increase the accessibility and usability of language resources (dictionaries, parallel texts, part-of-speech taggers, machine translators, etc.), we proposed the Language Grid [1]; it wraps existing language resources as atomic services and enables users to create new services by combining the atomic services, and reduces the negotiation costs related to intellectual property rights [4]. Our slogan is “language services from language resources.” We believe that modularization with recombination is the key to creating a full range of customized language environments for various user communities.
The Integration of CloudStack and OCCI/OpenNebula with DIRAC
NASA Astrophysics Data System (ADS)
Méndez Muñoz, Víctor; Fernández Albor, Víctor; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás; Merino Arévalo, Gonzalo; José Saborido Silva, Juan
2012-12-01
The increasing availability of Cloud resources is arising as a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing is an easy way to efficiently access resources from both systems. This paper explains the integration of DIRAC with two open-source Cloud Managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools to manage the complexity and heterogeneity of distributed data center infrastructures, allowing the creation of virtual clusters on demand, including public, private and hybrid clouds. This approach has required the development of an extension to the previous DIRAC Virtual Machine engine, originally developed for Amazon EC2, to allow the connection with these new cloud managers. In the OpenNebula case, the development has been based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack, the infrastructure has been kept more general, which permits other Virtual Machine sources and operating systems to be used. In both cases, CernVM File System has been used to facilitate software distribution to the computing nodes. With the resulting infrastructure, the cloud resources are transparent to the users through a friendly interface, like the DIRAC Web Portal. The main purpose of this integration is to get a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC to a new conceptual denomination as interware, integrating different middleware. Users from different communities do not need to care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach with the existing Grid solution. License Notice: Published under licence in Journal of Physics: Conference Series by IOP Publishing Ltd.
Divett, T; Vennell, R; Stevens, C
2013-02-28
At tidal energy sites, large arrays of hundreds of turbines will be required to generate economically significant amounts of energy. Owing to wake effects within the array, the placement of turbines within it will be vital to capturing the maximum energy from the resource. This study presents preliminary results using Gerris, an adaptive mesh flow solver, to investigate the flow through four different arrays of 15 turbines each. The goal is to optimize the position of turbines within an array in an idealized channel. The turbines are represented as areas of increased bottom friction in an adaptive mesh model so that the flow and power capture in tidally reversing flow through large arrays can be studied. The effect of oscillating tides is studied, with interesting dynamics generated as the tidal current reverses direction, forcing turbulent flow through the array. The energy removed from the flow by each of the four arrays is compared over a tidal cycle. A staggered array is found to extract 54 per cent more energy than a non-staggered array. Furthermore, an array positioned to one side of the channel is found to remove a similar amount of energy compared with an array in the centre of the channel.
Mineral resources of the Cabinet Mountains Wilderness, Lincoln and Sanders Counties, Montana
Lindsey, David A.; Wells, J.D.; Van Loenen, R. E.; Banister, D.P.; Welded, R.D.; Zilka, N.T.; Schmauch, S.W.
1978-01-01
This report describes the differential array of seismometers recently installed at the Hollister, California, Municipal Airport. Such an array of relatively closely spaced seismometers has already been installed in El Centro and provided useful information for both engineering and seismological applications from the 1979 Imperial Valley earthquake. Differential ground motions, principally due to horizontally propagating surface waves, are important in determining the stresses in such extended structures as large mat foundations for nuclear power stations, dams, bridges and pipelines. Further, analyses of the records of the 1979 Imperial Valley earthquake from the differential array have demonstrated the utility of short-baseline array data in tracking the progress of the rupture wave front of an earthquake.
Tissue matrix arrays for high throughput screening and systems analysis of cell function
Beachley, Vince Z.; Wolf, Matthew T.; Sadtler, Kaitlyn; Manda, Srikanth S.; Jacobs, Heather; Blatchley, Michael; Bader, Joel S.; Pandey, Akhilesh; Pardoll, Drew; Elisseeff, Jennifer H.
2015-01-01
Cell and protein arrays have demonstrated remarkable utility in the high-throughput evaluation of biological responses; however, they lack the complexity of native tissue and organs. Here, we describe tissue extracellular matrix (ECM) arrays for screening biological outputs and systems analysis. We spotted processed tissue ECM particles as two-dimensional arrays or incorporated them with cells to generate three-dimensional cell-matrix microtissue arrays. We then investigated the response of human stem, cancer, and immune cells to tissue ECM arrays originating from 11 different tissues, and validated the 2D and 3D arrays as representative of the in vivo microenvironment through quantitative analysis of tissue-specific cellular responses, including matrix production, adhesion and proliferation, and morphological changes following culture. The biological outputs correlated with tissue proteomics, and network analysis identified several proteins linked to cell function. Our methodology enables broad screening of ECMs to connect tissue-specific composition with biological activity, providing a new resource for biomaterials research and translation. PMID:26480475
Resource Sharing in a Network of Personal Computers.
1982-12-01
magnetic card, or a more secure identifier such as a machine-read fingerprint or voiceprint. ...operations are invoked via messages, a program and its terminal can easily be located on separate machines. In Spice, an interface process called Canvas ...request of a process. In Canvas, a process can only subdivide windows that it already has. On the other hand, the window manager treats the screen as a
2001-05-01
gathering and curing of Spanish moss was also a major industry in Point Coupee and Iberville... ton gins in the twentieth century (Maygarden 1994:68-69). ...tinted whiteware; d) Clear yellow glass plate or bowl; e) Clear glass vessel, probably Vicks Cough Syrup; f) Depression glass; g) Clear yellow Owens... Hind's Honey and Almond Cream, A. S. Hinds Co, Bloomfield, NJ); f-h) Machine-made glass bottles. ...was also collected. Owens and valve machines were
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, J.R.; Netrologic, Inc., San Diego, CA)
1988-01-01
Topics presented include integrating neural networks and expert systems, neural networks and signal processing, machine learning, cognition and avionics applications, artificial intelligence and man-machine interface issues, real time expert systems, artificial intelligence, and engineering applications. Also considered are advanced problem solving techniques, combinatorial optimization for scheduling and resource control, data fusion/sensor fusion, back propagation with momentum, shared weights and recurrency, automatic target recognition, cybernetics, and optical neural networks.
NASA Astrophysics Data System (ADS)
Prokhorov, Sergey
2017-10-01
The building industry is currently going through hard times. The cost of operating machines and mechanisms in construction and installation work accounts for a substantial part of total building construction expenses. There is a need to develop a highly efficient method that not only increases production but also reduces the direct costs of operating the machine fleet and increases its energy efficiency. To achieve this goal we plan to use modern methods of work organization, high-tech and energy-saving machine tools and technologies, and optimal mechanization sets. The optimization criteria are the operating prime cost and the efficiency of the machine set. In solving this task we conclude that analyzing mechanization work and energy audits against production output, prime costs and energy resource costs makes it possible to plan the supply of a complete machine fleet, improve environmental performance and increase construction and installation work quality.
A Machine Learns to Predict the Stability of Tightly Packed Planetary Systems
NASA Astrophysics Data System (ADS)
Tamayo, Daniel; Silburt, Ari; Valencia, Diana; Menou, Kristen; Ali-Dib, Mohamad; Petrovich, Cristobal; Huang, Chelsea X.; Rein, Hanno; van Laerhoven, Christa; Paradise, Adiv; Obertas, Alysa; Murray, Norman
2016-12-01
The requirement that planetary systems be dynamically stable is often used to vet new discoveries or set limits on unconstrained masses or orbital elements. This is typically carried out via computationally expensive N-body simulations. We show that characterizing the complicated and multi-dimensional stability boundary of tightly packed systems is amenable to machine-learning methods. We find that training an XGBoost machine-learning algorithm on physically motivated features yields an accurate classifier of stability in packed systems. On the stability timescale investigated (10^7 orbits), it is three orders of magnitude faster than direct N-body simulations. Optimized machine-learning classifiers for dynamical stability may thus prove useful across the discipline, e.g., to characterize the exoplanet sample discovered by the upcoming Transiting Exoplanet Survey Satellite. This proof of concept motivates investing computational resources to train algorithms capable of predicting stability over longer timescales and over broader regions of phase space.
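A hedged sketch of the training step described above, using the public XGBoost interface on synthetic stand-ins for the paper's physically motivated features. The real features and labels come from N-body integrations, which are not reproduced here; the toy stability rule below is invented, and the xgboost package is assumed to be installed.

```python
# Hedged sketch: train a gradient-boosted classifier on synthetic "features"
# standing in for physically motivated quantities (e.g. orbital separations
# in Hill radii, eccentricities). Labels come from a toy rule, not N-body runs.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
n = 5000
separation = rng.uniform(2.0, 12.0, n)      # mutual separation in Hill radii (toy)
eccentricity = rng.uniform(0.0, 0.1, n)     # initial eccentricity (toy)
X = np.column_stack([separation, eccentricity])
# Toy stability rule with noise, standing in for expensive N-body labels.
stable = (separation - 40.0 * eccentricity + rng.normal(0, 0.5, n)) > 7.0

X_tr, X_te, y_tr, y_te = train_test_split(X, stable.astype(int), random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)).round(3))
```

Once trained, a classifier like this replaces each expensive stability integration with a single prediction call, which is where the quoted three-orders-of-magnitude speedup comes from.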
Mann, Georgianna; Kraak, Vivica; Serrano, Elena
2015-09-17
The study objective was to examine the nutritional quality of competitive foods and beverages (foods and beverages from vending machines and à la carte foods) available to rural middle school students, before implementation of the US Department of Agriculture's Smart Snacks in School standards in July 2014. In spring 2014, we audited vending machines and à la carte cafeteria foods and beverages in 8 rural Appalachian middle schools in Virginia. Few schools had vending machines. Few à la carte and vending machine foods met Smart Snacks in School standards (36.5%); however, most beverages did (78.2%). The major challenges to meeting standards were fat and sodium content of foods. Most competitive foods (62.2%) did not meet new standards, and rural schools with limited resources will likely require assistance to fully comply.
Development of a 3-D X-ray system
NASA Astrophysics Data System (ADS)
Evans, James Paul Owain
The interpretation of standard two-dimensional x-ray images by humans is often very difficult. This is due to the lack of visual cues to depth in an image which has been produced by transmitted radiation. The solution put forward in this research is to introduce binocular parallax, a powerful physiological depth cue, into the resultant shadowgraph x-ray image. This has been achieved by developing a binocular stereoscopic x-ray imaging technique, which can be used for both visual inspection by human observers and also for the extraction of three-dimensional co-ordinate information. The technique is implemented in the design and development of two experimental x-ray systems and also the development of measurement algorithms. The first experimental machine is based on standard linear x-ray detector arrays and was designed as an optimum configuration for visual inspection by human observers. However, it was felt that a combination of the 3-D visual inspection capability together with a measurement facility would enhance the usefulness of the technique. Therefore, both a theoretical and an empirical analysis of the co-ordinate measurement capability of the machine has been carried out. The measurement is based on close-range photogrammetric techniques. The accuracy of the measurement has been found to be of the order of 4mm in x, 3mm in y and 6mm in z. A second experimental machine was developed and based on the same technique as that used for the first machine. However, a major departure has been the introduction of a dual energy linear x-ray detector array which will allow, in general, the discrimination between organic and inorganic substances. The second design is a compromise between ease of visual inspection for human observers and optimum three-dimensional co-ordinate measurement capability. The system is part of an on going research programme into the possibility of introducing psychological depth cues into the resultant x-ray images. The research presented in this thesis was initiated to enhance the visual interpretation of complex x-ray images, specifically in response to problems encountered in the routine screening of freight by HM. Customs and Excise. This phase of the work culminated in the development of the first experimental machine. During this work the security industry was starting to adopt a new type of x-ray detector, namely the dual energy x-ray sensor. The Department of Transport made available funding to the Police Scientific Development Branch (P.S.D.B.), part of The Home Office Science and Technology Group, to investigate the possibility of utilising the dual energy sensor in a 3-D x-ray screening system. This phase of the work culminated in the development of the second experimental machine.
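As a rough aid to the measurement principle, the sketch below evaluates the standard depth-from-disparity relation for an idealized parallel stereo geometry. The thesis uses x-ray sources, linear detector arrays, and close-range photogrammetric calibration, so its exact geometry differs; the focal length, baseline, and disparity values here are assumptions.

```python
# Hedged sketch: depth from binocular parallax in an idealized parallel
# stereo geometry, plus the sensitivity of depth to a disparity error.
f = 0.50        # effective focal length, m (assumed)
B = 0.20        # baseline between the two views, m (assumed)
d = 0.05        # measured disparity in the image plane, m (assumed)

Z = f * B / d                   # depth of the point
dZ_per_dd = Z**2 / (f * B)      # |dZ/dd|: depth error per unit disparity error
print(f"depth Z = {Z:.2f} m")
print(f"a 0.1 mm disparity error maps to ~{dZ_per_dd * 1e-4 * 1000:.1f} mm in depth")
```

The quadratic growth of the depth error with distance is one reason the measured z accuracy (about 6 mm) is coarser than the in-plane x and y accuracies.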
NASA Astrophysics Data System (ADS)
Wei, ZHANG; Tongyu, WU; Bowen, ZHENG; Shiping, LI; Yipo, ZHANG; Zejie, YIN
2018-04-01
A new neutron-gamma discriminator based on the support vector machine (SVM) method is proposed to improve the performance of the time-of-flight neutron spectrometer. The neutron detector is an EJ-299-33 plastic scintillator with pulse-shape discrimination (PSD) property. The SVM algorithm is implemented in field programmable gate array (FPGA) to carry out the real-time sifting of neutrons in neutron-gamma mixed radiation fields. This study compares the ability of the pulse gradient analysis method and the SVM method. The results show that this SVM discriminator can provide a better discrimination accuracy of 99.1%. The accuracy and performance of the SVM discriminator based on FPGA have been evaluated in the experiments. It can get a figure of merit of 1.30.
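As a hedged, simplified illustration of SVM-based pulse-shape discrimination, the sketch below trains scikit-learn's SVC on two classic PSD features computed from synthetic pulses. The real system uses measured EJ-299-33 pulses and an FPGA implementation, neither of which is modeled here; the feature distributions are invented.

```python
# Hedged sketch: neutron/gamma discrimination with an SVM on two common
# PSD features: total integrated charge and tail-to-total charge ratio.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n = 2000
# Gammas: faster decay -> smaller tail fraction; neutrons: larger tail fraction.
gamma_tail = rng.normal(0.12, 0.02, n)
neutron_tail = rng.normal(0.22, 0.03, n)
total = rng.uniform(0.2, 1.0, 2 * n)        # normalized total charge
X = np.column_stack([total, np.concatenate([gamma_tail, neutron_tail])])
y = np.array([0] * n + [1] * n)             # 0 = gamma, 1 = neutron

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("discrimination accuracy:", accuracy_score(y_te, clf.predict(X_te)).round(3))
```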
VML 3.0 Reactive Sequencing Objects and Matrix Math Operations for Attitude Profiling
NASA Technical Reports Server (NTRS)
Grasso, Christopher A.; Riedel, Joseph E.
2012-01-01
VML (Virtual Machine Language) has been used as the sequencing flight software on over a dozen JPL deep-space missions, most recently flying on GRAIL and JUNO. In conjunction with the NASA SBIR entitled "Reactive Rendezvous and Docking Sequencer", VML version 3.0 has been enhanced to include object-oriented element organization, built-in queuing operations, and sophisticated matrix/vector operations. These improvements allow VML scripts to easily perform much of the work that formerly would have required a great deal of expensive flight software development to realize. Autonomous turning and tracking makes considerable use of new VML features. Profiles generated by flight software are managed using object-oriented VML data constructs executed in discrete time by the VML flight software. VML vector and matrix operations provide the ability to calculate and supply quaternions to the attitude controller flight software which produces torque requests. Using VML-based attitude planning components eliminates flight software development effort, and reduces corresponding costs. In addition, the direct management of the quaternions allows turning and tracking to be tied in with sophisticated high-level VML state machines. These state machines provide autonomous management of spacecraft operations during critical tasks like a hypothetical Mars sample return rendezvous and docking. State machines created for autonomous science observations can also use this sort of attitude planning system, allowing heightened autonomy levels to reduce operations costs. VML state machines cannot be considered merely sequences - they are reactive logic constructs capable of autonomous decision making within a well-defined domain. The state machine approach enabled by VML 3.0 is progressing toward flight capability with a wide array of applicable mission activities.
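For readers unfamiliar with the attitude math involved, here is a small Python sketch of the quaternion operations an attitude-profiling layer performs each step. This is generic quaternion algebra, not VML 3.0 syntax, and the rotation increments are invented.

```python
# Hedged sketch: composing attitude quaternions, the kind of vector/matrix
# arithmetic an attitude-profiling layer performs at each control step.
import math

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qnormalize(q):
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def small_rotation(axis, angle_rad):
    """Quaternion for a rotation by angle_rad about a unit axis."""
    s = math.sin(angle_rad / 2.0)
    return (math.cos(angle_rad / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

attitude = (1.0, 0.0, 0.0, 0.0)                            # identity attitude
step = small_rotation((0.0, 0.0, 1.0), math.radians(0.1))  # 0.1 deg per step about z
for _ in range(100):                                       # 100 steps -> 10 deg slew
    attitude = qnormalize(qmul(step, attitude))
print(attitude)
```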
NASA Astrophysics Data System (ADS)
Sahu, Anshuman Kumar; Chatterjee, Suman; Nayak, Praveen Kumar; Sankar Mahapatra, Siba
2018-03-01
Electrical discharge machining (EDM) is a non-traditional machining process which is widely used in the machining of difficult-to-machine materials. The EDM process can produce complex and intricately shaped components made of difficult-to-machine materials, and is largely applied in the aerospace, biomedical, and die and mold making industries. To meet the required applications, the EDMed components need to possess high accuracy and excellent surface finish. In this work, the EDM process is performed using Nitinol as the work piece material and AlSiMg prepared by selective laser sintering (SLS) as the tool electrode, along with conventional copper and graphite electrodes. SLS is a rapid prototyping (RP) method to produce complex metallic parts by an additive manufacturing (AM) process. Experiments have been carried out varying different process parameters such as open circuit voltage (V), discharge current (Ip), duty cycle (τ), pulse-on time (Ton) and tool material. Surface roughness parameters such as average roughness (Ra), maximum height of the profile (Rt) and average height of the profile (Rz) are measured using a surface roughness measuring instrument (Talysurf). To reduce the number of experiments, a design of experiments (DOE) approach, Taguchi's L27 orthogonal array, has been chosen. The surface properties of the EDMed specimens are optimized by the desirability function approach and the best parametric setting is reported for the EDM process. The type of tool happens to be the most significant parameter, followed by the interaction of tool type and duty cycle, duty cycle, discharge current and voltage. A better surface finish of the EDMed specimen can be obtained with low values of voltage (V), discharge current (Ip), duty cycle (τ) and pulse-on time (Ton), along with the use of the AlSiMg RP electrode.
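A hedged sketch of the composite desirability calculation used to rank parameter settings follows. The roughness values, bounds, and weights are invented; only the smaller-the-better transform and geometric-mean aggregation reflect the standard desirability function approach named in the abstract.

```python
# Hedged sketch: smaller-the-better desirability for Ra, Rt, Rz and the
# composite desirability used to rank parameter settings. Values are invented.
def desirability_smaller_is_better(y, y_min, y_max, weight=1.0):
    """d = ((y_max - y) / (y_max - y_min)) ** weight, clipped to [0, 1]."""
    if y <= y_min:
        return 1.0
    if y >= y_max:
        return 0.0
    return ((y_max - y) / (y_max - y_min)) ** weight

# Illustrative responses for two candidate settings: (Ra, Rt, Rz) in micrometres
settings = {"run A": (2.1, 14.0, 11.0), "run B": (3.4, 18.5, 14.2)}
bounds = {"Ra": (1.5, 4.0), "Rt": (10.0, 22.0), "Rz": (9.0, 16.0)}

for name, (ra, rt, rz) in settings.items():
    d = [desirability_smaller_is_better(v, *bounds[k])
         for k, v in zip(("Ra", "Rt", "Rz"), (ra, rt, rz))]
    composite = (d[0] * d[1] * d[2]) ** (1.0 / 3.0)
    print(f"{name}: individual d = {[round(x, 3) for x in d]}, composite D = {composite:.3f}")
```

The setting with the highest composite desirability D is the "best parametric setting" reported by such an analysis.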
NASA Astrophysics Data System (ADS)
Wang, Li-Chih; Chen, Yin-Yann; Chen, Tzu-Li; Cheng, Chen-Yang; Chang, Chin-Wei
2014-10-01
This paper studies a solar cell industry scheduling problem, which is similar to traditional hybrid flowshop scheduling (HFS). In a typical HFS problem, the allocation of machine resources for each order should be scheduled in advance. However, the challenge in solar cell manufacturing is that the number of machines can be adjusted dynamically to complete the jobs. An optimal production scheduling model is developed to explore these issues, considering practical characteristics such as the hybrid flowshop, a parallel machine system, dedicated machines, sequence-independent job setup times and sequence-dependent job setup times. The objective of this model is to minimise the makespan and to decide the processing sequence of the orders/lots in each stage, the lot-splitting decisions for the orders and the number of machines used to satisfy the demands in each stage. From the experimental results, lot-splitting has a significant effect on shortening the makespan, and the improvement is influenced by the processing time and the setup time of the orders. Therefore, the threshold point to improve the makespan can be identified. In addition, the model also indicates that allowing more lot-splitting, that is, greater flexibility in allocating orders/lots to machines, results in better scheduling performance.
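A toy illustration of why lot-splitting shortens the makespan at a stage with parallel machines is sketched below. The numbers are invented, and the sketch ignores setup times, dedicated machines and the other practical features handled by the authors' model.

```python
# Hedged sketch: makespan at a single stage with identical parallel machines,
# with and without splitting an order into sub-lots. All numbers are invented.
def stage_makespan(lots, n_machines, unit_time=1.0):
    """Greedy longest-lot-first assignment of sub-lots to machines."""
    loads = [0.0] * n_machines
    for lot in sorted(lots, reverse=True):
        loads[loads.index(min(loads))] += lot * unit_time
    return max(loads)

order = 120                       # units in one order
print("no splitting :", stage_makespan([order], n_machines=3))          # 120.0
print("3 sub-lots   :", stage_makespan([40, 40, 40], n_machines=3))     # 40.0
print("uneven split :", stage_makespan([60, 40, 20], n_machines=3))     # 60.0
```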
Virtual collaborative environments: programming and controlling robotic devices remotely
NASA Astrophysics Data System (ADS)
Davies, Brady R.; McDonald, Michael J., Jr.; Harrigan, Raymond W.
1995-12-01
This paper describes a technology for remote sharing of intelligent electro-mechanical devices. An architecture and actual system have been developed and tested, based on the proposed National Information Infrastructure (NII) or Information Highway, to facilitate programming and control of intelligent programmable machines (like robots, machine tools, etc.). Using appropriate geometric models, integrated sensors, video systems, and computing hardware; computer controlled resources owned and operated by different (in a geographic sense as well as legal sense) entities can be individually or simultaneously programmed and controlled from one or more remote locations. Remote programming and control of intelligent machines will create significant opportunities for sharing of expensive capital equipment. Using the technology described in this paper, university researchers, manufacturing entities, automation consultants, design entities, and others can directly access robotic and machining facilities located across the country. Disparate electro-mechanical resources will be shared in a manner similar to the way supercomputers are accessed by multiple users. Using this technology, it will be possible for researchers developing new robot control algorithms to validate models and algorithms right from their university labs without ever owning a robot. Manufacturers will be able to model, simulate, and measure the performance of prospective robots before selecting robot hardware optimally suited for their intended application. Designers will be able to access CNC machining centers across the country to fabricate prototypic parts during product design validation. An existing prototype architecture and system has been developed and proven. Programming and control of a large gantry robot located at Sandia National Laboratories in Albuquerque, New Mexico, was demonstrated from such remote locations as Washington D.C., Washington State, and Southern California.
NASA Technical Reports Server (NTRS)
1973-01-01
Topics discussed include the management and processing of earth resources information, special-purpose processors for the machine processing of remotely sensed data, digital image registration by a mathematical programming technique, the use of remote-sensor data in land classification (in particular, the use of ERTS-1 multispectral scanning data), the use of remote-sensor data in geometrical transformations and mapping, earth resource measurement with the aid of ERTS-1 multispectral scanning data, the use of remote-sensor data in the classification of turbidity levels in coastal zones and in the identification of ecological anomalies, the problem of feature selection and the classification of objects in multispectral images, the estimation of proportions of certain categories of objects, and a number of special systems and techniques. Individual items are announced in this issue.
Learning to Predict Demand in a Transport-Resource Sharing Task
2015-09-01
exhaustive manner. We experimented with the scikit-learn machine-learning library for Python and a range of R packages before settling on R. We... NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA; Master's thesis, approved for public release, distribution is unlimited.
2014-09-01
peak shaving, conducting power factor correction, matching critical load to most efficient distributed resource, and islanding a system during... photovoltaic arrays during islanding, and power factor correction, the implementation of the ESS by itself is likely to prove cost prohibitive. The DOD...
NASA Astrophysics Data System (ADS)
Evans, J. D.; Hao, W.; Chettri, S.
2013-12-01
The cloud is proving to be a uniquely promising platform for scientific computing. Our experience with processing satellite data using Amazon Web Services highlights several opportunities for enhanced performance, flexibility, and cost effectiveness in the cloud relative to traditional computing, for example:
- Direct readout from a polar-orbiting satellite such as the Suomi National Polar-Orbiting Partnership (S-NPP) requires bursts of processing a few times a day, separated by quiet periods when the satellite is out of receiving range. In the cloud, by starting and stopping virtual machines in minutes, we can marshal significant computing resources quickly when needed, but not pay for them when not needed. To take advantage of this capability, we are automating a data-driven approach to the management of cloud computing resources, in which new data availability triggers the creation of new virtual machines (of variable size and processing power) which last only until the processing workflow is complete.
- 'Spot instances' are virtual machines that run as long as one's asking price is higher than the provider's variable spot price. Spot instances can greatly reduce the cost of computing for software systems that are engineered to withstand unpredictable interruptions in service (as occurs when a spot price exceeds the asking price). We are implementing an approach to workflow management that allows data processing workflows to resume with minimal delays after temporary spot price spikes. This will allow systems to take full advantage of variably-priced 'utility computing.'
- Thanks to virtual machine images, we can easily launch multiple, identical machines differentiated only by 'user data' containing individualized instructions (e.g., to fetch particular datasets or to perform certain workflows or algorithms). This is particularly useful when (as is the case with S-NPP data) we need to launch many very similar machines to process an unpredictable number of data files concurrently. Our experience shows the viability and flexibility of this approach to workflow management for scientific data processing.
- Finally, cloud computing is a promising platform for distributed volunteer ('interstitial') computing, via mechanisms such as the Berkeley Open Infrastructure for Network Computing (BOINC) popularized with the SETI@Home project and others such as ClimatePrediction.net and NASA's Climate@Home. Interstitial computing faces significant challenges as commodity computing shifts from (always on) desktop computers towards smartphones and tablets (untethered and running on scarce battery power); but cloud computing offers significant slack capacity. This capacity includes virtual machines with unused RAM or underused CPUs; virtual storage volumes allocated (and paid for) but not full; and virtual machines that are paid up for the current hour but whose work is complete. We are devising ways to facilitate the reuse of these resources (i.e., cloud-based interstitial computing) for satellite data processing and related analyses.
We will present our findings and research directions on these and related topics.
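A hedged sketch of the data-driven trigger described in the first bullet follows, with a deliberately hypothetical launch_vm() helper standing in for the cloud provider's API call. The polling interval, file names and user-data payload are assumptions, not details from the abstract.

```python
# Hedged sketch: launch a short-lived worker VM whenever new data arrive.
# launch_vm() is hypothetical; in practice it would wrap the provider's API
# (for example an EC2 run-instances call) with an image ID and instance type.
import time

def new_data_files(seen):
    """Hypothetical stand-in for polling a receiving station or object store."""
    incoming = {"SNPP_pass_0630.h5", "SNPP_pass_1812.h5"}   # placeholder names
    return sorted(incoming - seen)

def launch_vm(user_data):
    """Hypothetical: create a VM whose 'user data' names the files to process."""
    print(f"launching worker VM with user data: {user_data}")

seen = set()
for _ in range(3):                       # bounded loop instead of a daemon
    for path in new_data_files(seen):
        launch_vm(user_data={"process": path, "terminate_when_done": True})
        seen.add(path)
    time.sleep(1)                        # assumed polling interval
```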
Geographic Data Display Implementation
1977-06-01
display to be either multiplied or divided by the magnification factor (normally 1.5). The result is a change of extent around the cursor as seen in... Products printer and a 200-card-per-minute card reader with the Interdata 4 (I-4). The I-4 with its 64K of core is the applications machine connected... storing these values in the CURSTA array. [Figure residue: zoom in/out function keys ZMINTP and ZMOUTP; zoom magnification factor setting ZOMTOP]
Speech reconstruction using a deep partially supervised neural network.
McLoughlin, Ian; Li, Jingjie; Song, Yan; Sharifzadeh, Hamid R
2017-08-01
Statistical speech reconstruction for larynx-related dysphonia has achieved good performance using Gaussian mixture models and, more recently, restricted Boltzmann machine arrays; however, deep neural network (DNN)-based systems have been hampered by the limited amount of training data available from individual voice-loss patients. The authors propose a novel DNN structure that allows a partially supervised training approach on spectral features from smaller data sets, yielding very good results compared with the current state-of-the-art.
BEAM DYNAMICS STUDIES OF A HIGH-REPETITION RATE LINAC-DRIVER FOR A 4TH GENERATION LIGHT SOURCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ventturini, M.; Corlett, J.; Emma, P.
2012-05-18
We present recent progress toward the design of a super-conducting linac driver for a high-repetition rate FEL-based soft x-ray light source. The machine is designed to accept beams generated by the APEX photo-cathode gun operating with MHz-range repetition rate and deliver them to an array of SASE and seeded FEL beamlines. We review the current baseline design and report results of beam dynamics studies.
A general high-speed laser drilling method for nonmetal thin material
NASA Astrophysics Data System (ADS)
Cai, Zhijian; Xu, Guangsheng; Xu, Zhou; Xu, Zhiqiang
2013-05-01
Many nonmetal film products, such as herbal plasters, medical adhesive tape and agricultural plastic film, require the drilling of dense small holes to enhance permeability without affecting the appearance. For many medium and small enterprises, a low-cost, high-speed laser drilling machine with the ability to process different kinds of nonmetal material is in high demand. In this paper, we propose a general-purpose high-speed laser drilling method for micro-hole production on thin nonmetal film. The system utilizes a rotating polygonal mirror to perform the high-speed laser scan, which is simpler and more efficient than oscillating-mirror scanning. In this system, an array of close-packed paraboloid mirrors is mounted along the laser scan track to focus the high-power laser onto the material sheet, which can produce up to twenty holes in a single scan. The design of the laser scanning and focusing optics is optimized to obtain the best hole quality, and the mirrors can be flexibly adjusted to obtain different drilling parameters. The use of rotating polygonal mirror scanning and close-packed mirror array focusing greatly improves the drilling productivity, enabling the machine to produce thirty thousand holes per minute. With proper design, the hole uniformity can also be improved. In this paper, the detailed optical and mechanical design is illustrated, the high-speed laser drilling principle is introduced and preliminary experimental results are presented.
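The quoted throughput can be checked with simple arithmetic, sketched below. Only the figures of up to twenty holes per scan and thirty thousand holes per minute come from the text; the polygon facet counts are illustrative assumptions.

```python
# Hedged sketch: scan-rate arithmetic behind the quoted drilling throughput.
holes_per_minute = 30_000          # from the text
holes_per_scan = 20                # up to twenty holes per scan (from the text)

scans_per_second = holes_per_minute / holes_per_scan / 60.0
print(f"required scan rate: {scans_per_second:.0f} scans/s")   # 25 scans/s

# With an N-facet polygonal mirror, each revolution yields N scans, so the
# required rotation speed is scans_per_second / N (facet counts are assumed).
for facets in (6, 8, 12):
    rpm = scans_per_second / facets * 60.0
    print(f"{facets}-facet polygon: ~{rpm:.0f} rpm")
```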
Performance highlights of the ALMA correlators
NASA Astrophysics Data System (ADS)
Baudry, Alain; Lacasse, Richard; Escoffier, Ray; Webber, John; Greenberg, Joseph; Platt, Laurence; Treacy, Robert; Saez, Alejandro F.; Cais, Philippe; Comoretto, Giovanni; Quertier, Benjamin; Okumura, Sachiko K.; Kamazaki, Takeshi; Chikada, Yoshihiro; Watanabe, Manabu; Okuda, Takeshi; Kurono, Yasutake; Iguchi, Satoru
2012-09-01
Two large correlators have been constructed to combine the signals captured by the ALMA antennas deployed on the Atacama Desert in Chile at an elevation of 5050 meters. The Baseline correlator was fabricated by a NRAO/European team to process up to 64 antennas for 16 GHz bandwidth in two polarizations and another correlator, the Atacama Compact Array (ACA) correlator, was fabricated by a Japanese team to process up to 16 antennas. Both correlators meet the same specifications except for the number of processed antennas. The main architectural differences between these two large machines will be underlined. Selected features of the Baseline and ACA correlators as well as the main technical challenges met by the designers will be briefly discussed. The Baseline correlator is the largest correlator ever built for radio astronomy. Its digital hybrid architecture provides a wide variety of observing modes including the ability to divide each input baseband into 32 frequency-mobile sub-bands for high spectral resolution and to be operated as a conventional 'lag' correlator for high time resolution. The various observing modes offered by the ALMA correlators to the science community for 'Early Science' are presented, as well as future observing modes. Coherently phasing the array to provide VLBI maps of extremely compact sources is another feature of the ALMA correlators. Finally, the status and availability of these large machines will be presented.
Automated negotiation in environmental resource management: Review and assessment.
Eshragh, Faezeh; Pooyandeh, Majeed; Marceau, Danielle J
2015-10-01
Negotiation is an integral part of our daily life and plays an important role in resolving conflicts and facilitating human interactions. Automated negotiation, which aims at capturing the human negotiation process using artificial intelligence and machine learning techniques, is well established in e-commerce, but its application in environmental resource management remains limited. This is due to the inherent uncertainties and complexity of environmental issues, along with the diversity of stakeholders' perspectives when dealing with these issues. The objective of this paper is to describe the main components of automated negotiation, review and compare machine learning techniques in automated negotiation, and provide a guideline for the selection of suitable methods in the particular context of stakeholders' negotiation over environmental resource issues. We advocate that automated negotiation can facilitate the involvement of stakeholders in the exploration of a plurality of solutions in order to reach a mutually satisfying agreement and contribute to informed decisions in environmental management, while noting the need for further studies to consolidate the potential of this modeling approach. Copyright © 2015 Elsevier Ltd. All rights reserved.
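To make "automated negotiation" concrete, the sketch below shows one standard building block often used in such systems: a time-dependent concession strategy inside an alternating-offers loop. The utility model, deadline, and parameters are illustrative assumptions for exposition only; they are not taken from the review or from any specific environmental-management case study.

```python
# Minimal sketch of a time-dependent concession strategy, one common component
# of automated negotiation agents. All parameters are hypothetical.

def concession_target(t, deadline, best=1.0, worst=0.4, beta=0.8):
    """Utility level the agent demands at round t. beta < 1 concedes slowly
    (Boulware-style); beta > 1 concedes quickly (Conceder-style)."""
    frac = min(t / deadline, 1.0) ** (1.0 / beta)
    return best - (best - worst) * frac

def negotiate(opponent_offer_utility, deadline=20):
    """Accept the opponent's standing offer once it meets our current target."""
    for t in range(deadline + 1):
        target = concession_target(t, deadline)
        if opponent_offer_utility(t) >= target:
            return t, target          # agreement round and accepted utility level
    return None                       # no agreement before the deadline

# Hypothetical opponent whose offers improve slightly each round.
result = negotiate(lambda t: 0.5 + 0.02 * t)
print(result)  # e.g. (11, 0.716...): agreement once the concession curves cross
```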
Modeling the Virtual Machine Launching Overhead under Fermicloud
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele; Wu, Hao; Ren, Shangping
FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module of FermiCloud enables the system, when more computational resources are needed, to automatically launch virtual machines on available resources such as public clouds. One of the main challenges in developing the cloud bursting module is deciding when and where to launch a VM so that all resources are utilized most effectively and efficiently and the system performance is optimized. However, based on FermiCloud's operational data, the VM launching overhead is not constant; it varies with physical resource (CPU, memory, I/O device) utilization at the time the VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops a VM launch overhead reference model based on operational data obtained on FermiCloud and uses the reference model to guide the cloud bursting process.
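Since the abstract states that launch overhead depends on CPU, memory, and I/O utilization at launch time, a reference model of the kind described could, in its simplest form, be a regression fitted to operational records. The sketch below fits such a model to synthetic data; the coefficients, baseline, and data are invented for illustration and are not FermiCloud's actual model.

```python
import numpy as np

# Synthetic stand-in for operational records:
# columns = [cpu_util, mem_util, io_util] at launch time; target = observed launch overhead (s).
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(200, 3))
true_coef = np.array([30.0, 12.0, 45.0])             # hypothetical sensitivities (s per unit utilization)
y = 20.0 + X @ true_coef + rng.normal(0, 2.0, 200)   # 20 s assumed baseline plus noise

# Fit a linear reference model: overhead ~ baseline + weighted utilizations.
A = np.hstack([np.ones((200, 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predicted_overhead(cpu, mem, io):
    """Predicted VM launch overhead (seconds) given a host's current utilization."""
    return coef @ np.array([1.0, cpu, mem, io])

# A cloud-bursting policy could launch on the candidate host with the smallest predicted overhead.
print(round(predicted_overhead(0.2, 0.5, 0.1), 1))
```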
The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud
Karimi, Kamran; Vize, Peter D.
2014-01-01
As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org PMID:25380782
Efficient classical simulation of the Deutsch-Jozsa and Simon's algorithms
NASA Astrophysics Data System (ADS)
Johansson, Niklas; Larsson, Jan-Åke
2017-09-01
A long-standing aim of quantum information research is to understand what gives quantum computers their advantage. This requires separating problems that need genuinely quantum resources from those for which classical resources are enough. Two examples of quantum speed-up are the Deutsch-Jozsa and Simon's problems, both efficiently solvable on a quantum Turing machine, and both believed to lack efficient classical solutions. Here we present a framework that can simulate both quantum algorithms efficiently, solving the Deutsch-Jozsa problem with probability 1 using only one oracle query, and Simon's problem using linearly many oracle queries, just as expected of an ideal quantum computer. The presented simulation framework is in turn efficiently simulatable on a classical probabilistic Turing machine. This shows that the Deutsch-Jozsa and Simon's problems do not require any genuinely quantum resources, and that the quantum algorithms show no speed-up when compared with their corresponding classical simulation. Finally, this gives insight into what properties are needed in the two algorithms and calls for further study of oracle separation between quantum and classical computation.
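For context, the sketch below only sets up the Deutsch-Jozsa promise problem and the naive classical strategy, whose worst case needs 2^(n-1)+1 oracle queries; it is emphatically not the single-query simulation framework presented in the paper, just a baseline that shows what the quantum (and simulated) single-query result is being compared against.

```python
from itertools import product

# Deutsch-Jozsa promise problem: an n-bit Boolean oracle f is promised to be
# either constant or balanced; decide which. Naive classical strategy shown here
# needs up to 2**(n-1) + 1 queries; this is NOT the paper's simulation framework.

def is_constant_naive(f, n):
    seen = set()
    for i, x in enumerate(product([0, 1], repeat=n)):
        seen.add(f(x))
        if len(seen) > 1:
            return False            # two different outputs observed: balanced
        if i + 1 > 2 ** (n - 1):
            return True             # more than half the inputs agree: must be constant
    return True

n = 4
constant_f = lambda x: 1            # constant oracle
balanced_f = lambda x: x[0]         # balanced oracle: half the inputs map to 0, half to 1
print(is_constant_naive(constant_f, n), is_constant_naive(balanced_f, n))  # True False
```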
ERIC Educational Resources Information Center
Abeywardena, Ishan Sudeera; Tham, Choy Yoong; Raviraja, S.
2012-01-01
Open educational resources (OER) are a global phenomenon that is fast gaining credibility in many academic circles as a possible solution for bridging the knowledge divide. With increased funding and advocacy from governmental and nongovernmental organisations paired with generous philanthropy, many OER repositories, which host a vast array of…
Thomas P. Holmes
2003-01-01
In addition to commodities such as timber, forest ecosystems provide an array of goods and services that are not priced in markets but maintain, to a large degree, the characteristics of public goods (non-rivalry and non-excludability). Markets do not recognize scarcity of non-market resources and cannot be relied upon to allocate these resources to their highest and...
A multi-group and preemptable scheduling of cloud resource based on HTCondor
NASA Astrophysics Data System (ADS)
Jiang, Xiaowei; Zou, Jiaheng; Cheng, Yaodong; Shi, Jingyan
2017-10-01
Due to the flexibility, easy control and varied system environments offered by virtual machines, more and more fields, including high energy physics, use virtualization technology to construct distributed systems from virtual resources. This paper introduces a method used in high energy physics that supports multiple resource groups and preemptable cloud resource scheduling by combining virtual machines with HTCondor (a batch system). It makes resource control more flexible and efficient and makes resource scheduling independent of job scheduling. Firstly, resources belong to different experiment groups, and user groups map to resource groups (the same as experiment groups) one-to-one or many-to-one. To keep this grouping manageable, we designed a permission controlling component to ensure that each resource group receives suitable jobs. Secondly, to elastically allocate resources to the appropriate resource group, resources must be scheduled much like jobs, so this paper designs a cloud resource scheduler that maintains a resource queue and allocates an appropriate amount of virtual resources to the requesting resource group. Thirdly, when resources have been occupied for a long time, they may need to be preempted; this paper therefore adds a preemption function to the resource scheduler that implements resource preemption based on group priority. The preemption is soft: when virtual resources are preempted, jobs are not killed but held and rematched later. This is implemented with the help of HTCondor by storing the held job information in the scheduler, releasing the job to idle status and performing a second match. At IHEP (Institute of High Energy Physics), we have built a batch system based on HTCondor with a virtual resource pool based on OpenStack, and this paper presents cases from the JUNO and LHAASO experiments. The results indicate that multi-group and preemptable resource scheduling efficiently supports multiple groups and soft preemption. In addition, the permission controlling component has been used in the local computing cluster, supporting the JUNO, CMS and LHAASO experiments, and its scale will be expanded to more experiments in the first half of the year, including DYW and BES. This is evidence that the permission controlling is effective.
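The group-priority, soft-preemption decision described above can be illustrated with a short sketch. The class, group names, and priority values below are hypothetical; in the real system the held/idle/rematch transitions are handled by HTCondor and the VM lifecycle by OpenStack, not by Python code like this.

```python
from dataclasses import dataclass

# Illustrative sketch of priority-based soft preemption for grouped virtual resources.
# Group names and priorities are assumptions, not the IHEP configuration.

@dataclass
class VirtualResource:
    vm_id: str
    group: str                  # resource group (maps to an experiment group, e.g. "JUNO")
    running_job: str | None = None

GROUP_PRIORITY = {"JUNO": 3, "LHAASO": 2, "CMS": 1}   # hypothetical ordering; higher wins

def soft_preempt(pool, requesting_group, held_jobs):
    """Free one VM for `requesting_group` by preempting the lowest-priority busy group.
    The displaced job is held (not killed) so the batch system can rematch it later."""
    candidates = [vm for vm in pool
                  if vm.running_job and GROUP_PRIORITY[vm.group] < GROUP_PRIORITY[requesting_group]]
    if not candidates:
        return None
    victim = min(candidates, key=lambda vm: GROUP_PRIORITY[vm.group])
    held_jobs.append(victim.running_job)      # job goes to the held queue for a second match
    victim.running_job = None
    victim.group = requesting_group           # VM is reassigned to the requesting group
    return victim

pool = [VirtualResource("vm-1", "CMS", "cms_job_42"),
        VirtualResource("vm-2", "LHAASO", "lhaaso_job_7")]
held = []
print(soft_preempt(pool, "JUNO", held).vm_id, held)   # vm-1 ['cms_job_42']
```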