Naval Open Architecture Machinery Control Systems for Next Generation Integrated Power Systems
2012-05-01
[Diagram text from the source: a layered machinery control software stack running on computer hardware and an OS/RTOS (with TTY, device, and UDP/TCP/IP/raw network interfaces), an OS/RTOS adaptation middleware layer for portability, a machinery controller framework, and machinery control system, power control system, and ship system services, together with a power management controller; supported platforms include operating systems such as DOS, Windows, Linux, OS/2, QNX, and SCO Unix on ISA-compatible motherboards, workstations, and portables (Compaq, Dell ...).]
Animation of finite element models and results
NASA Technical Reports Server (NTRS)
Lipman, Robert R.
1992-01-01
This is not intended as a complete review of computer hardware and software that can be used for animation of finite element models and results, but is instead a demonstration of the benefits of visualization using selected hardware and software. The role of raw computational power, graphics speed, and the use of videotape are discussed.
1992-03-16
"A Hidden U.S. Export: Higher Education." The Washington Post, 16 February 1992, H1 and H4. Brandin, David H., and Michael A. Harrison. The...frequent significant technological change now occurs within the individual person's working lifespan, life-long education is a necessity to remain...INDUSTRIAL REVOLUTION The phenomenal increase in speed and in raw power of computer processors, the shrinking size and cost of basic computing systems, the
A study of workstation computational performance for real-time flight simulation
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey M.; Cleveland, Jeff I., II
1995-01-01
With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.
A power-efficient communication system between brain-implantable devices and external computers.
Yao, Ning; Lee, Heung-No; Chang, Cheng-Chun; Sclabassi, Robert J; Sun, Mingui
2007-01-01
In this paper, we propose a power-efficient communication system for linking a brain-implantable device to an external system. For battery-powered implantable devices, the processor and the transmitter power should be reduced in order to both conserve battery power and reduce the health risks associated with transmission. To accomplish this, a joint source-channel coding/decoding system is devised. Low-density generator matrix (LDGM) codes are used in our system due to their low encoding complexity. The power cost for signal processing within the implantable device is greatly reduced by avoiding explicit source encoding. Raw data, which is highly correlated, is transmitted. At the receiver, a Markov chain source correlation model is utilized to approximate and capture the correlation of the raw data. A turbo iterative receiver algorithm is designed which connects the Markov chain source model to the LDGM decoder in a turbo-iterative way. Simulation results show that the proposed system can save 1 to 2.5 dB of transmission power.
A Low-Power Wearable Stand-Alone Tongue Drive System for People With Severe Disabilities.
Jafari, Ali; Buswell, Nathanael; Ghovanloo, Maysam; Mohsenin, Tinoosh
2018-02-01
This paper presents a low-power stand-alone tongue drive system (sTDS) used for individuals with severe disabilities to potentially control their environment such as computer, smartphone, and wheelchair using their voluntary tongue movements. A low-power local processor is proposed, which can perform signal processing to convert raw magnetic sensor signals to user-defined commands on the sTDS wearable headset, rather than sending all raw data out to a PC or smartphone. The proposed sTDS significantly reduces the transmitter power consumption and subsequently increases the battery life. Assuming the sTDS user issues one command every 20 ms, the proposed local processor reduces the data volume that needs to be wirelessly transmitted by a factor of 64, from 9.6 to 0.15 kb/s. The proposed processor consists of three main blocks: a serial peripheral interface bus for receiving raw data from magnetic sensors, an external magnetic interference attenuation block to remove the external magnetic field from the raw magnetic signal, and a machine learning classifier for command detection. A proof-of-concept prototype sTDS has been implemented with a low-power IGLOO-nano field programmable gate array (FPGA), Bluetooth Low Energy, battery, and magnetic sensors on a headset, and tested. At a clock frequency of 20 MHz, the processor takes 6.6 μs and consumes 27 nJ for detecting a command with a detection accuracy of 96.9%. To further reduce power consumption, an application-specific integrated circuit processor for the sTDS is implemented at the postlayout level in 65-nm CMOS technology with a 1-V power supply; it consumes 0.43 mW, which is 10× lower than the FPGA power consumption, and occupies an area of only 0.016 mm².
Compression of CCD raw images for digital still cameras
NASA Astrophysics Data System (ADS)
Sriram, Parthasarathy; Sudharsanan, Subramania
2005-03-01
Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.
The disappearing third dimension.
Rowe, Timothy; Frank, Lawrence R
2011-02-11
Three-dimensional computing is driving what many would call a revolution in scientific visualization. However, its power and advancement are held back by the absence of sustainable archives for raw data and derivative visualizations. Funding agencies, professional societies, and publishers each have unfulfilled roles in archive design and data management policy.
Impact of computational structure-based methods on drug discovery.
Reynolds, Charles H
2014-01-01
Structure-based drug design has become an indispensable tool in drug discovery. The emergence of structure-based design is due to gains in structural biology that have provided exponential growth in the number of protein crystal structures, new computational algorithms and approaches for modeling protein-ligand interactions, and the tremendous growth of raw computer power in the last 30 years. Computer modeling and simulation have made major contributions to the discovery of many groundbreaking drugs in recent years. Examples are presented that highlight the evolution of computational structure-based design methodology, and the impact of that methodology on drug discovery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zitney, S.E.
Emerging fossil energy power generation systems must operate with unprecedented efficiency and near-zero emissions, while optimizing profitably amid cost fluctuations for raw materials, finished products, and energy. To help address these challenges, the fossil energy industry will have to rely increasingly on the use of advanced computational tools for modeling and simulating complex process systems. In this paper, we present the computational research challenges and opportunities for the optimization of fossil energy power generation systems across the plant lifecycle from process synthesis and design to plant operations. We also look beyond the plant gates to discuss research challenges and opportunities for enterprise-wide optimization, including planning, scheduling, and supply chain technologies.
Computing technology in the 1980's. [computers
NASA Technical Reports Server (NTRS)
Stone, H. S.
1978-01-01
Advances in computing technology have been led by consistently improving semiconductor technology. The semiconductor industry has turned out ever faster, smaller, and less expensive devices since transistorized computers were first introduced 20 years ago. For the next decade, there appear to be new advances possible, with the rate of introduction of improved devices at least equal to the historic trends. The implication of these projections is that computers will enter new markets and will truly be pervasive in business, home, and factory as their cost diminishes and their computational power expands to new levels. The computer industry as we know it today will be greatly altered in the next decade, primarily because the raw computer system will give way to computer-based turn-key information and control systems.
Development of gait segmentation methods for wearable foot pressure sensors.
Crea, S; De Rossi, S M M; Donati, M; Reberšek, P; Novak, D; Vitiello, N; Lenzi, T; Podobnik, J; Munih, M; Carrozza, M C
2012-01-01
We present an automated segmentation method based on the analysis of plantar pressure signals recorded from two synchronized wireless foot insoles. Given the strict limits on computational power and power consumption typical of wearable electronic components, our aim is to investigate the capability of a Hidden Markov Model machine-learning method to detect gait phases with different levels of complexity in the processing of the wearable pressure sensor signals. Three different datasets are therefore developed: raw voltage values, calibrated sensor signals, and a calibrated estimation of total ground reaction force and position of the plantar center of pressure. The method is tested on a pool of 5 healthy subjects through a leave-one-out cross validation. The results show high classification performance achieved using the estimated biomechanical variables, on average 96%. Calibrated signals and raw voltage values show higher delays and dispersions in phase transition detection, suggesting a lower reliability for online applications.
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-08
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensive data intensive and computing intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing intensive and data intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration.
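As an illustration of the map/reduce accumulation pattern this abstract describes, here is a minimal Python sketch. The point-target echo model, the parameter values, and the function names are assumptions for demonstration only, not the authors' Hadoop implementation.

```python
# Minimal sketch of map/reduce accumulation for SAR raw-data simulation.
# The point-target echo model and all parameter values are illustrative
# assumptions, not the authors' implementation or job configuration.
from functools import reduce
import numpy as np

C = 3e8            # speed of light, m/s
FC = 5.3e9         # assumed carrier frequency, Hz

def map_target(target, azimuth_positions, fast_time):
    """Map step: compute one target's echo contribution over the raw-data grid."""
    x, y, sigma = target                      # position and reflectivity (assumed)
    echoes = np.zeros((len(azimuth_positions), len(fast_time)), dtype=complex)
    for i, xa in enumerate(azimuth_positions):
        r = np.hypot(x - xa, y)                # slant range to the target
        tau = 2.0 * r / C                      # two-way delay
        echoes[i] = sigma * np.exp(-4j * np.pi * FC * r / C) * np.sinc(
            (fast_time - tau) * 1e7)           # crude range envelope
    return echoes

def reduce_echoes(a, b):
    """Reduce step: accumulate partial raw-data matrices."""
    return a + b

# Usage: simulate raw data for a handful of point targets.
az = np.linspace(-200.0, 200.0, 64)            # platform azimuth positions, m
t = np.linspace(6.0e-6, 8.0e-6, 512)           # fast-time samples, s
targets = [(0.0, 1000.0, 1.0), (50.0, 1100.0, 0.5)]
raw = reduce(reduce_echoes, (map_target(tg, az, t) for tg in targets))
print(raw.shape)                               # (64, 512) raw-data matrix
```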
Model falsifiability and climate slow modes
NASA Astrophysics Data System (ADS)
Essex, Christopher; Tsonis, Anastasios A.
2018-07-01
The most advanced climate models are actually modified meteorological models attempting to capture climate in meteorological terms. This seems a straightforward matter of raw computing power applied to large enough sources of current data. Some believe that models have succeeded in capturing climate in this manner. But have they? This paper outlines difficulties with this picture that derive from the finite representation of our computers and from the fundamental unavailability of future data. It suggests that alternative windows onto the multi-decadal timescales are necessary in order to overcome the issues raised for practical problems of prediction.
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-01
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensive data intensive and computing intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing intensive and data intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration. PMID:28075343
Power strain imaging based on vibro-elastography techniques
NASA Astrophysics Data System (ADS)
Wen, Xu; Salcudean, S. E.
2007-03-01
This paper describes a new ultrasound elastography technique, power strain imaging, based on vibro-elastography (VE) techniques. With this method, tissue is compressed by a vibrating actuator driven by low-pass or band-pass filtered white noise, typically in the 0-20 Hz range. Tissue displacements at different spatial locations are estimated by correlation-based approaches on the raw ultrasound radio frequency signals and recorded in time sequences. The power spectra of these time sequences are computed by Fourier spectral analysis techniques. As the average of the power spectrum is proportional to the squared amplitude of the tissue motion, the square root of the average power over the range of excitation frequencies is used as a measure of the tissue displacement. Tissue strain is then determined by least squares estimation of the gradient of the displacement field. The computation of the power spectra of the time sequences can be implemented efficiently by using Welch's periodogram method with moving windows or with accumulative windows with a forgetting factor. Compared to the transfer function estimation originally used in VE, the computation of cross spectral densities is not needed, which saves both memory and computation time. Phantom experiments demonstrate that the proposed method produces stable and operator-independent strain images with high signal-to-noise ratio in real time. This approach has also been tested on a few patient datasets of the prostate region, and the results are encouraging.
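The processing chain in this abstract (Welch periodograms of per-location displacement sequences, the square root of band-averaged power as an amplitude measure, then a least-squares gradient for strain) can be sketched in a few lines. Signal sizes, band limits, and window lengths below are assumptions, not the authors' values.

```python
# Hedged sketch of the power-strain estimation chain described above.
import numpy as np
from scipy.signal import welch

fs = 100.0                                   # assumed sampling rate of the time sequences, Hz
band = (2.0, 20.0)                           # assumed excitation band, Hz
disp = np.random.randn(128, 2000)            # placeholder: displacement vs. (depth, time)

# Power spectrum of each time sequence (Welch's method with overlapping windows).
f, psd = welch(disp, fs=fs, nperseg=256, noverlap=128, axis=-1)

# Amplitude measure: square root of the average power over the excitation band.
in_band = (f >= band[0]) & (f <= band[1])
amplitude = np.sqrt(psd[:, in_band].mean(axis=-1))   # one value per depth

# Strain: least-squares slope of amplitude over a sliding window of depths.
def ls_gradient(a, win=9, dz=0.1):           # dz = assumed sample spacing, mm
    z = np.arange(win) * dz
    strain = np.empty(len(a) - win + 1)
    for i in range(len(strain)):
        strain[i] = np.polyfit(z, a[i:i + win], 1)[0]
    return strain

strain = ls_gradient(amplitude)
print(strain.shape)
```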
Measurement of the noise power spectrum in digital x-ray detectors
NASA Astrophysics Data System (ADS)
Aufrichtig, Richard; Su, Yu; Cheng, Yu; Granfors, Paul R.
2001-06-01
The noise power spectrum, NPS, is a key imaging property of a detector and one of the principal quantities needed to compute the detective quantum efficiency. The NPS is measured by computing the Fourier transform of flat field images. Different measurement methods are investigated and evaluated with images obtained from an amorphous silicon flat panel x-ray imaging detector. First, the influence of fixed pattern structures is minimized by appropriate background corrections. For a given data set the effect of using different types of windowing functions is studied. Also, different window sizes and amounts of overlap between windows are evaluated and compared to theoretical predictions. Results indicate that measurement error is minimized when applying overlapping Hanning windows on the raw data. Finally, it is shown that radial averaging is a useful method of reducing the two-dimensional noise power spectrum to one dimension.
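The ingredients named in this abstract (background-corrected flat fields, overlapping Hanning windows, averaged squared Fourier magnitudes, radial averaging) can be sketched as follows. ROI size, overlap, pixel pitch, and the normalization are illustrative assumptions; exact NPS normalization conventions vary.

```python
# Illustrative NPS estimate along the lines described above.
import numpy as np

def nps_2d(flat, roi=128, step=64, pixel_pitch=0.2):
    """2-D noise power spectrum from one background-corrected flat-field image."""
    win = np.outer(np.hanning(roi), np.hanning(roi))
    norm = (win ** 2).sum()
    acc, n = np.zeros((roi, roi)), 0
    for y in range(0, flat.shape[0] - roi + 1, step):       # overlapping ROIs
        for x in range(0, flat.shape[1] - roi + 1, step):
            patch = flat[y:y + roi, x:x + roi]
            patch = patch - patch.mean()                     # remove local offset
            acc += np.abs(np.fft.fft2(patch * win)) ** 2
            n += 1
    # Scaled estimate; the exact normalization convention is an assumption here.
    return (acc / n) * pixel_pitch ** 2 / norm

def radial_average(nps):
    """Reduce the 2-D NPS to a 1-D profile by averaging over radius."""
    n = nps.shape[0]
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    r = np.hypot(fx, fy)
    bins = np.linspace(0, r.max(), n // 2)
    idx = np.digitize(r.ravel(), bins)
    out = []
    for i in range(1, len(bins)):
        vals = nps.ravel()[idx == i]
        out.append(vals.mean() if vals.size else 0.0)
    return np.array(out)

flat = np.random.poisson(1000, (512, 512)).astype(float)     # placeholder flat field
print(radial_average(nps_2d(flat)).shape)
```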
Compressive sensing scalp EEG signals: implementations and practical performance.
Abdulghani, Amir M; Casson, Alexander J; Rodriguez-Villegas, Esther
2012-11-01
Highly miniaturised, wearable computing and communication systems allow unobtrusive, convenient and long term monitoring of a range of physiological parameters. For long term operation from the physically smallest batteries, the average power consumption of a wearable device must be very low. It is well known that the overall power consumption of these devices can be reduced by the inclusion of low power consumption, real-time compression of the raw physiological data in the wearable device itself. Compressive sensing is a new paradigm for providing data compression: it has shown significant promise in fields such as MRI; and is potentially suitable for use in wearable computing systems as the compression process required in the wearable device has a low computational complexity. However, the practical performance very much depends on the characteristics of the signal being sensed. As such the utility of the technique cannot be extrapolated from one application to another. Long term electroencephalography (EEG) is a fundamental tool for the investigation of neurological disorders and is increasingly used in many non-medical applications, such as brain-computer interfaces. This article investigates in detail the practical performance of different implementations of the compressive sensing theory when applied to scalp EEG signals.
NASA Astrophysics Data System (ADS)
Schaaf, Kjeld; Overeem, Ruud
2004-06-01
Moore's law is best exploited by using consumer market hardware. In particular, the gaming industry pushes the limit of processor performance, thus reducing the cost per raw flop even faster than Moore's law predicts. Next to the cost benefits of Commercial-Off-The-Shelf (COTS) processing resources, there is a rapidly growing experience pool in cluster based processing. The typical Beowulf cluster-of-PCs supercomputers are well known. Multiple examples exist of specialised cluster computers based on more advanced server nodes or even gaming stations. All these cluster machines build upon the same knowledge about cluster software management, scheduling, middleware libraries and mathematical libraries. In this study, we have integrated COTS processing resources and cluster nodes into a very high performance processing platform suitable for streaming data applications, in particular to implement a correlator. The required processing power for the correlator in modern radio telescopes is in the range of the larger supercomputers, which motivates the usage of supercomputer technology. Raw processing power is provided by graphical processors and is combined with an Infiniband host bus adapter with integrated data stream handling logic. With this processing platform a scalable correlator can be built with continuously growing processing power at consumer market prices.
ORPC RivGen controller performance raw data - Igiugig 2015
McEntee, Jarlath
2015-12-18
Contains raw data for operations of Ocean Renewable Power Company (ORPC) RivGen Power System in Igiugig 2015 in Matlab data file format. Two data files capture the data and timestamps for data, including power in, voltage, rotation rate, and velocity.
Software for Testing Electroactive Structural Components
NASA Technical Reports Server (NTRS)
Moses, Robert W.; Fox, Robert L.; Dimery, Archie D.; Bryant, Robert G.; Shams, Qamar
2003-01-01
A computer program generates a graphical user interface that, in combination with its other features, facilitates the acquisition and preprocessing of experimental data on the strain response, hysteresis, and power consumption of a multilayer composite-material structural component containing one or more built-in sensor(s) and/or actuator(s) based on piezoelectric materials. This program runs in conjunction with LabVIEW software in a computer-controlled instrumentation system. For a test, a specimen is instrumented with applied-voltage and current sensors and with strain gauges. Once the computational connection to the test setup has been made via the LabVIEW software, this program causes the test instrumentation to step through specified configurations. If the user is satisfied with the test results as displayed by the software, the user activates an icon on a front-panel display, causing the raw current, voltage, and strain data to be digitized and saved. The data are also put into a spreadsheet and can be plotted on a graph. Graphical displays are saved in an image file for future reference. The program also computes and displays the power and the phase angle between voltage and current.
Conversion of raw carbonaceous fuels
Cooper, John F [Oakland, CA
2007-08-07
Three configurations for an electrochemical cell are utilized to generate electric power from the reaction of oxygen or air with porous plates or particulates of carbon, arranged such that waste heat from the electrochemical cells is allowed to flow upwards through a storage chamber or port containing raw carbonaceous fuel. These configurations allow combining the separate processes of devolatilization, pyrolysis and electrochemical conversion of carbon to electric power into a single unit process, fed with raw fuel and exhausting high BTU gases, electric power, and substantially pure CO2 during operation.
Visual Analytics for Power Grid Contingency Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Pak C.; Huang, Zhenyu; Chen, Yousu
2014-01-20
Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure to do so may bring significant financial, economic, and security impacts to all parties involved and the society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios to a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computation requirements of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.
High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations
NASA Technical Reports Server (NTRS)
Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.
2003-01-01
Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.
An arch-shaped intraoral tongue drive system with built-in tongue-computer interfacing SoC.
Park, Hangue; Ghovanloo, Maysam
2014-11-14
We present a new arch-shaped intraoral Tongue Drive System (iTDS) designed to occupy the buccal shelf in the user's mouth. The new arch-shaped iTDS, which will be referred to as the iTDS-2, incorporates a system-on-a-chip (SoC) that amplifies and digitizes the raw magnetic sensor data and sends it wirelessly to an external TDS universal interface (TDS-UI) via an inductive coil or a planar inverted-F antenna. A built-in transmitter (Tx) employs a dual-band radio that operates at either 27 MHz or 432 MHz band, according to the wireless link quality. A built-in super-regenerative receiver (SR-Rx) monitors the wireless link quality and switches the band if the link quality is below a predetermined threshold. An accompanying ultra-low power FPGA generates data packets for the Tx and handles digital control functions. The custom-designed TDS-UI receives raw magnetic sensor data from the iTDS-2, recognizes the intended user commands by the sensor signal processing (SSP) algorithm running in a smartphone, and delivers the classified commands to the target devices, such as a personal computer or a powered wheelchair. We evaluated the iTDS-2 prototype using center-out and maze navigation tasks on two human subjects, which proved its functionality. The subjects' performance with the iTDS-2 was improved by 22% over its predecessor, reported in our earlier publication.
Feature generation using genetic programming with application to fault classification.
Guo, Hong; Jack, Lindsay B; Nandi, Asoke K
2005-02-01
One of the major challenges in pattern recognition problems is the feature extraction process, which derives new features from existing features, or directly from raw data, in order to reduce the cost of computation during the classification process while improving classifier efficiency. Most current feature extraction techniques transform the original pattern vector into a new vector with increased discrimination capability but lower dimensionality. This is conducted within a predefined feature space, and thus has limited searching power. Genetic programming (GP) can generate new features from the original dataset without prior knowledge of the probabilistic distribution. In this paper, a GP-based approach is developed for feature extraction from raw vibration data recorded from a rotating machine with six different conditions. The created features are then used as the inputs to a neural classifier for the identification of six bearing conditions. Experimental results demonstrate the ability of GP to discover automatically the different bearing conditions using features expressed in the form of nonlinear functions. Furthermore, four sets of results--using GP extracted features with artificial neural networks (ANN) and support vector machines (SVM), as well as traditional features with ANN and SVM--have been obtained. This GP-based approach is used for bearing fault classification for the first time and exhibits superior searching power over other techniques. Additionally, it significantly reduces the time for computation compared with a genetic algorithm (GA), and therefore makes for a more practical realization of the solution.
Tchebichef moment transform on image dithering for mobile applications
NASA Astrophysics Data System (ADS)
Ernawan, Ferda; Abu, Nur Azman; Rahmalan, Hidayah
2012-04-01
Currently, mobile image applications spend a large amount of computing effort to display images. A true color raw image contains billions of colors and consumes high computational power in most mobile image applications. At the same time, mobile devices are only expected to be equipped with modest computing power and minimal storage space. Image dithering is a popular technique to reduce the number of bits per pixel at the expense of lower quality image displays. This paper proposes a novel approach to image dithering using the 2x2 Tchebichef moment transform (TMT). TMT integrates a simple mathematical framework technique using matrices. TMT coefficients consist of real rational numbers. An image dithering based on TMT has the potential to provide better efficiency and simplicity. The preliminary experiment shows a promising result in terms of error reconstructions and image visual textures.
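A small sketch of a 2x2 block transform of the kind described here follows, using the orthonormal two-point Tchebichef kernel (which reduces to a simple sum/difference matrix). The coarse coefficient quantization standing in for dithering is a simplification, not the authors' exact scheme.

```python
# 2x2 Tchebichef moment transform on image blocks (illustrative sketch).
import numpy as np

T = np.array([[1.0, 1.0],
              [-1.0, 1.0]]) / np.sqrt(2.0)     # 2-point orthonormal Tchebichef kernel

def tmt_forward(block):
    """Forward 2x2 moment transform of one image block."""
    return T @ block @ T.T

def tmt_inverse(coeffs):
    """Inverse transform (T is orthonormal, so its transpose inverts it)."""
    return T.T @ coeffs @ T

def dither_block(block, levels=4):
    """Transform, coarsely quantize the coefficients, and reconstruct."""
    c = tmt_forward(block.astype(float))
    step = 255.0 / levels
    c_q = np.round(c / step) * step             # crude coefficient quantization
    return np.clip(tmt_inverse(c_q), 0, 255)

# Usage on one 2x2 block of an 8-bit grayscale image.
block = np.array([[200, 180], [60, 90]], dtype=np.uint8)
print(dither_block(block))
```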
NASA Astrophysics Data System (ADS)
Lakdawalla, E. S.
2008-11-01
Many recent planetary science missions, including the Mars Exploration Rovers, Cassini-Huygens, and New Horizons, have instituted a policy of the rapid release of "raw" images to the Internet within days or even hours of their acquisition. The availability of these data, along with the increasing power of home computers and availability of high-bandwidth Internet connections, has stimulated the development of a worldwide community of armchair planetary scientists, who are able to participate in the everyday drama of exploratory missions' encounters with new worlds and new landscapes. Far from passive onlookers, many of these enthusiasts have taught themselves image processing techniques and have even written software to perform automated processing and mosaicking of these raw data sets. They rapidly produce stunning visualizations and then post them to their own blogs or online forums, where they also engage in discussing scientific observations and inferences about the data sets, broadening missions' public outreach efforts beyond their direct reach. These amateur space scientists feel a deep sense of involvement in and connection to space missions, which makes them enthusiastic (and occasionally demanding) supporters of space exploration.
Sleep apps: what role do they play in clinical medicine?
Lorenz, Christopher P; Williams, Adrian J
2017-11-01
Today's smartphones boast more computing power than the Apollo Guidance Computer. Given the ubiquity and popularity of smartphones, are we already carrying around miniaturized sleep labs in our pockets? There is still a lack of validation studies for consumer sleep technologies in general and apps for monitoring sleep in particular. To overcome this gap, multidisciplinary teams are needed that focus on feasibility work at the intersection of software engineering, data science and clinical sleep medicine. To date, no smartphone app for monitoring sleep through movement sensors has been successfully validated against polysomnography, despite the role and validity of actigraphy in sleep medicine having been well established. Missing separation of concerns, not methodology, is the key limiting factor: the two essential steps in the monitoring process, data collection and scoring, are chained together inside a black box due to the closed nature of consumer devices. This leaves researchers with little room for influence, nor can they access the raw data. Multidisciplinary teams that wield complete power over the sleep monitoring process are sorely needed.
2006-06-14
Robert Graybill. A Raw board for the use of this project was provided by the Computer Architecture Group at the Massachusetts Institute of Technology...simulator is presented by MIT as being an accurate model of the Raw chip, we have found that it does not accurately model the board. Our comparison...G4 processor, model 7410, with a 32 kbyte level-1 cache on-chip and a 2 Mbyte L2 cache connected through a 250 MHz bus [12]. Each node has 256 Mbyte
NASA Astrophysics Data System (ADS)
Acernese, Fausto; Barone, Fabrizio; De Rosa, Rosario; Eleuteri, Antonio; Milano, Leopoldo; Pardi, Silvio; Ricciardi, Iolanda; Russo, Guido
2004-09-01
One of the main requirements of a digital system for the control of interferometric detectors of gravitational waves is the computing power, which is a direct consequence of the increasing complexity of the digital algorithms necessary for control signal generation. For this specific task many specialized, non-standard real-time architectures have been developed, often very expensive and difficult to upgrade. On the other hand, such computing power is generally fully available for off-line applications on standard PC-based systems. Therefore, a possible and obvious solution may be provided by the integration of both the real-time and off-line architectures, resulting in a hybrid control system architecture based on standard, available components, trying to gain both the advantages of the perfect data synchronization provided by real-time systems and of the large computing power available on PC-based systems. Such integration may be provided by implementing the link between the two different architectures through the standard Ethernet network, whose data transfer speed has been increasing rapidly in recent years, using the TCP/IP, UDP and raw Ethernet protocols. In this paper we describe the architecture of a hybrid Ethernet-based real-time control system prototype we implemented in Napoli, discussing its characteristics and performances. Finally we discuss a possible application to the real-time control of a suspended mass of the mode cleaner of the 3m prototype optical interferometer for gravitational wave detection (IDGW-3P) operational in Napoli.
High-frequency ac power distribution in Space Station
NASA Technical Reports Server (NTRS)
Tsai, Fu-Sheng; Lee, Fred C. Y.
1990-01-01
A utility-type 20-kHz ac power distribution system for the Space Station, employing resonant power-conversion techniques, is presented. The system converts raw dc voltage from photovoltaic cells or three-phase LF ac voltage from a solar dynamic generator into a regulated 20-kHz ac voltage for distribution among various loads. The results of EASY5 computer simulations of the local and global performance show that the system has fast response and good transient behavior. The ac bus voltage is effectively regulated using the phase-control scheme, which is demonstrated with both line and load variations. The feasibility of paralleling the driver-module outputs is illustrated with the driver modules synchronized and sharing a common feedback loop. An HF sinusoidal ac voltage is generated in the three-phase ac input case, when the driver modules are phased 120 deg away from one another and their outputs are connected in series.
Van der Lubbe, Rob H J; Szumska, Izabela; Fajkowska, Małgorzata
2016-01-01
New analysis techniques of the electroencephalogram (EEG) such as wavelet analysis open the possibility to address questions that may largely improve our understanding of the EEG and clarify its relation with event-related potentials (ERPs). Three issues were addressed. 1) To what extent can early ERP components be described as transient evoked oscillations in specific frequency bands? 2) Total EEG power (TP) after a stimulus consists of pre-stimulus baseline power (BP), evoked power (EP), and induced power (IP), but what are their respective contributions? 3) The Phase Reset model proposes that BP predicts EP, while the evoked model holds that BP is unrelated to EP; which model is the most valid one? EEG results on NoGo trials for 123 individuals who took part in an experiment with emotional facial expressions were examined by computing ERPs and by performing wavelet analyses on the raw EEG and on ERPs. After performing several multiple regression analyses, we obtained the following answers. First, the P1, N1, and P2 components can by and large be described as transient oscillations in the α and θ bands. Secondly, it appears possible to estimate the separate contributions of EP, BP, and IP to TP, and importantly, the contribution of IP is mostly larger than that of EP. Finally, no strong support was obtained for either the Phase Reset or the Evoked model. Recent models are discussed that may better explain the relation between raw EEG and ERPs.
Van der Lubbe, Rob H. J.; Szumska, Izabela; Fajkowska, Małgorzata
2016-01-01
New analysis techniques of the electroencephalogram (EEG) such as wavelet analysis open the possibility to address questions that may largely improve our understanding of the EEG and clarify its relation with event-related potentials (ERPs). Three issues were addressed. 1) To what extent can early ERP components be described as transient evoked oscillations in specific frequency bands? 2) Total EEG power (TP) after a stimulus consists of pre-stimulus baseline power (BP), evoked power (EP), and induced power (IP), but what are their respective contributions? 3) The Phase Reset model proposes that BP predicts EP, while the evoked model holds that BP is unrelated to EP; which model is the most valid one? EEG results on NoGo trials for 123 individuals who took part in an experiment with emotional facial expressions were examined by computing ERPs and by performing wavelet analyses on the raw EEG and on ERPs. After performing several multiple regression analyses, we obtained the following answers. First, the P1, N1, and P2 components can by and large be described as transient oscillations in the α and θ bands. Secondly, it appears possible to estimate the separate contributions of EP, BP, and IP to TP, and importantly, the contribution of IP is mostly larger than that of EP. Finally, no strong support was obtained for either the Phase Reset or the Evoked model. Recent models are discussed that may better explain the relation between raw EEG and ERPs. PMID:28154612
3D data processing with advanced computer graphics tools
NASA Astrophysics Data System (ADS)
Zhang, Song; Ekstrand, Laura; Grieve, Taylor; Eisenmann, David J.; Chumbley, L. Scott
2012-09-01
Often, the 3-D raw data coming from an optical profilometer contain spiky noise and lie on an irregular grid, which makes the data difficult to analyze and difficult to store because of the enormously large size. This paper addresses these two issues by substantially reducing the spiky noise of the 3-D raw data and by rapidly re-sampling the raw data into regular grids at any pixel size and any orientation with advanced computer graphics tools. Experimental results are presented to demonstrate the effectiveness of the proposed approach.
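The two processing steps described here, spike suppression on the raw data followed by resampling onto a regular grid, can be sketched as follows. The thresholds, grid pitch, and the assumption that samples arrive in scan order (so neighbours in acquisition order are spatially adjacent) are illustrative, not the authors' method.

```python
# Hedged sketch: median-based spike suppression, then resampling onto a regular grid.
import numpy as np
from scipy.ndimage import median_filter
from scipy.interpolate import griddata

def despike(z, size=5, k=3.0):
    """Replace samples far from the local median (likely spikes) by that median.
    Assumes the samples are ordered along the profilometer scan path."""
    med = median_filter(z, size=size)
    resid = z - med
    spikes = np.abs(resid) > k * np.nanstd(resid)
    out = z.copy()
    out[spikes] = med[spikes]
    return out

def resample(x, y, z, pitch=1.0):
    """Resample scattered (x, y, z) points onto a regular grid with given pitch."""
    xi = np.arange(x.min(), x.max(), pitch)
    yi = np.arange(y.min(), y.max(), pitch)
    gx, gy = np.meshgrid(xi, yi)
    return griddata((x, y), z, (gx, gy), method="linear")

# Usage with placeholder profilometer data on an irregular grid.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 5000), rng.uniform(0, 100, 5000)
z = np.sin(x / 10) + 0.01 * rng.standard_normal(5000)
z[::500] += 50.0                                   # inject spikes
grid = resample(x, y, despike(z), pitch=2.0)
print(grid.shape)
```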
Shipboard Aggregate Power Monitoring
2009-06-01
low pressure air serves to operate various valves and provide pneumatic power for certain plant equipment. The compressor is an Ingersoll-Rand NAXI...(filters, valves, etc.) of a given system. Figure 1-1: Raw AC voltage and current measurements recorded during a motor start-up.
Deep learning with convolutional neural networks for EEG decoding and visualization
Springenberg, Jost Tobias; Fiederer, Lukas Dominique Josef; Glasstetter, Martin; Eggensperger, Katharina; Tangermann, Michael; Hutter, Frank; Burgard, Wolfram; Ball, Tonio
2017-01-01
Abstract Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end‐to‐end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end‐to‐end EEG analysis, but a better understanding of how to design and train ConvNets for end‐to‐end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task‐related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG‐based brain mapping. Hum Brain Mapp 38:5391–5420, 2017. © 2017 Wiley Periodicals, Inc. PMID:28782865
Deep learning with convolutional neural networks for EEG decoding and visualization.
Schirrmeister, Robin Tibor; Springenberg, Jost Tobias; Fiederer, Lukas Dominique Josef; Glasstetter, Martin; Eggensperger, Katharina; Tangermann, Michael; Hutter, Frank; Burgard, Wolfram; Ball, Tonio
2017-11-01
Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
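Since this abstract describes ConvNets that learn directly from raw multichannel EEG and names batch normalization and exponential linear units, a minimal PyTorch sketch is given below. It is a generic miniature for illustration, not the authors' published architecture; all layer sizes, the channel count, and the class count are assumptions.

```python
# Tiny ConvNet over raw EEG with batch norm and ELU (illustrative, not the paper's model).
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    def __init__(self, n_channels=22, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            # temporal convolution over raw samples, per channel
            nn.Conv2d(1, 8, kernel_size=(1, 25), padding=(0, 12)),
            # spatial convolution across EEG channels
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 75), stride=(1, 15)),
        )
        self.classifier = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_classes))

    def forward(self, x):            # x: (batch, 1, channels, time)
        return self.classifier(self.features(x))

# Usage on a fake batch of 1-second, 250 Hz trials.
net = TinyEEGNet()
x = torch.randn(4, 1, 22, 250)
print(net(x).shape)                  # torch.Size([4, 4])
```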
6. Photocopied August 1978. LINEUP OF HORRY ROTARY FURNACES ON ...
6. Photocopied August 1978. LINE-UP OF HORRY ROTARY FURNACES ON THE SECOND FLOOR OF THE MICHIGAN LAKE SUPERIOR POWER COMPANY POWER HOUSE. THE HOPPERS WHICH FED THE RAW MATERIALS INTO THE FURNACES ARE SHOWN ABOVE THE FURNACES. AS THE 'SPOOL' OF THE FURNACE ROTATED PAST THE ELECTRODES PLATES WERE ADDED TO HOLD THE FINISHED PRODUCT AND THE DESCENDING RAW MATERIALS IN PLACE. THE DIRECTION OF ROTATION OF THE FURNACES SHOWN IN THIS PHOTO IS CLOCKWISE, (M). - Michigan Lake Superior Power Company, Portage Street, Sault Ste. Marie, Chippewa County, MI
Petabyte Class Storage at Jefferson Lab (CEBAF)
NASA Technical Reports Server (NTRS)
Chambers, Rita; Davis, Mark
1996-01-01
By 1997, the Thomas Jefferson National Accelerator Facility will collect over one Terabyte of raw information per day of Accelerator operation from three concurrently operating Experimental Halls. When post-processing is included, roughly 250 TB of raw and formatted experimental data will be generated each year. By the year 2000, a total of one Petabyte will be stored on-line. Critical to the experimental program at Jefferson Lab (JLab) is the networking and computational capability to collect, store, retrieve, and reconstruct data on this scale. The design criteria include support of a raw data stream of 10-12 MB/second from Experimental Hall B, which will operate the CEBAF (Continuous Electron Beam Accelerator Facility) Large Acceptance Spectrometer (CLAS). Keeping up with this data stream implies design strategies that provide storage guarantees during accelerator operation, minimize the number of times data is buffered, allow seamless access to specific data sets for the researcher, synchronize data retrievals with the scheduling of postprocessing calculations on the data reconstruction CPU farms, as well as support the site capability to perform data reconstruction and reduction at the same overall rate at which new data is being collected. The current implementation employs state-of-the-art StorageTek Redwood tape drives and robotics library integrated with the Open Storage Manager (OSM) Hierarchical Storage Management software (Computer Associates, International), the use of Fibre Channel RAID disks dual-ported between Sun Microsystems SMP servers, and a network-based interface to a 10,000 SPECint92 data processing CPU farm. Issues of efficiency, scalability, and manageability will become critical to meet the year 2000 requirements for a Petabyte of near-line storage interfaced to over 30,000 SPECint92 of data processing power.
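As a quick consistency check (not taken from the source), the quoted Hall B stream of 10-12 MB/s sustained over a day of operation lands in the "over one Terabyte per day" range stated above:

```python
# Back-of-the-envelope check of the quoted data rates (illustrative calculation).
rate_mb_s = 12                      # upper end of the Hall B raw data stream, MB/s
seconds_per_day = 24 * 3600
tb_per_day = rate_mb_s * seconds_per_day / 1e6   # MB -> TB (decimal units)
print(f"{tb_per_day:.2f} TB/day")   # ~1.04 TB/day
```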
Chen, Mingyang; Stott, Amanda C; Li, Shenggang; Dixon, David A
2012-04-01
A robust metadata database called the Collaborative Chemistry Database Tool (CCDBT) for massive amounts of computational chemistry raw data has been designed and implemented. It performs data synchronization and simultaneously extracts the metadata. Computational chemistry data in various formats from different computing sources, software packages, and users can be parsed into uniform metadata for storage in a MySQL database. Parsing is performed by a parsing pyramid, including parsers written for different levels of data types and sets created by the parser loader after loading parser engines and configurations. Copyright © 2011 Elsevier Inc. All rights reserved.
Best bang for your buck: GPU nodes for GROMACS biomolecular simulations
Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L.; Grubmüller, Helmut
2015-01-01
The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well‐exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)‐based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off‐loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance‐to‐price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer‐class GPUs this improvement equally reflects in the performance‐to‐price ratio. Although memory issues in consumer‐class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost‐efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well‐balanced ratio of CPU and consumer‐class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26238484
Best bang for your buck: GPU nodes for GROMACS biomolecular simulations.
Kutzner, Carsten; Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L; Grubmüller, Helmut
2015-10-05
The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well-exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)-based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off-loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance-to-price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer-class GPUs this improvement equally reflects in the performance-to-price ratio. Although memory issues in consumer-class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost-efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well-balanced ratio of CPU and consumer-class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
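The point above, that lifetime power and cooling costs can exceed the hardware price, can be illustrated with a back-of-the-envelope calculation. All numbers below are hypothetical assumptions, not figures from the paper.

```python
# Hypothetical lifetime energy cost vs. hardware cost for one compute node.
node_price_eur = 3500.0          # assumed purchase price of one CPU+GPU node
node_power_kw = 0.5              # assumed average draw under load, kW
energy_price = 0.25              # assumed electricity + cooling cost, EUR per kWh
lifetime_years = 4

hours = lifetime_years * 365 * 24
energy_cost = node_power_kw * hours * energy_price
print(f"hardware: {node_price_eur:.0f} EUR, energy over lifetime: {energy_cost:.0f} EUR")
# With these assumptions the energy bill (~4380 EUR) exceeds the hardware cost.
```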
Abstract Machines for Polymorphous Computing
2007-12-01
...models and LLCs have been developed for Raw, MONARCH [18][19], TRIPS [20][21], and Smart Memories [22][23]. These research projects were conducted...used here. In our approach on Raw, two key concepts are used to fully leverage the Raw architecture [34]. First, the tile grid is viewed as a
NASA Technical Reports Server (NTRS)
Simoneau, Robert J.; Strazisar, Anthony J.; Sockol, Peter M.; Reid, Lonnie; Adamczyk, John J.
1987-01-01
The discipline research in turbomachinery, which is directed toward building the tools needed to understand such a complex flow phenomenon, is based on the fact that flow in turbomachinery is fundamentally unsteady or time dependent. Success in building a reliable inventory of analytic and experimental tools will depend on how the time and time-averages are treated, as well as on how the space and space-averages are treated. The raw tools at our disposal (both experimental and computational) are truly powerful, and their numbers are growing at a staggering pace. As a result of this power, a case can be made that a situation exists where information is outstripping understanding. The challenge is to develop a set of computational and experimental tools which genuinely increase understanding of the fluid flow and heat transfer in a turbomachine. Viewgraphs outline a philosophy based on working on a stairstep hierarchy of mathematical and experimental complexity to build a system of tools which enable one to aggressively design the turbomachinery of the next century. Examples of the types of computational and experimental tools under current development at Lewis, with progress to date, are examined. The examples include work in both the time-resolved and time-averaged domains. Finally, an attempt is made to identify the proper place for Lewis in this continuum of research.
Robust shot-noise measurement for continuous-variable quantum key distribution
NASA Astrophysics Data System (ADS)
Kunz-Jacques, Sébastien; Jouguet, Paul
2015-02-01
We study a practical method to measure the shot noise in real time in continuous-variable quantum key distribution systems. The amount of secret key that can be extracted from the raw statistics depends strongly on this quantity since it affects in particular the computation of the excess noise (i.e., noise in excess of the shot noise) added by an eavesdropper on the quantum channel. Some powerful quantum hacking attacks relying on faking the estimated value of the shot noise to hide an intercept and resend strategy were proposed. Here, we provide experimental evidence that our method can defeat the saturation attack and the wavelength attack.
A Study about Kalman Filters Applied to Embedded Sensors
Valade, Aurélien; Acco, Pascal; Grabolosa, Pierre; Fourniols, Jean-Yves
2017-01-01
Over the last decade, smart sensors have grown in complexity and can now handle multiple measurement sources. This work establishes a methodology to achieve better estimates of physical values by processing raw measurements within a sensor using multi-physical models and Kalman filters for data fusion. With production cost and power consumption as driving constraints, this methodology focuses on algorithmic complexity while meeting real-time constraints and improving both precision and reliability despite the limitations of low-power processors. Consequently, processing time available for other tasks is maximized. The known problem of estimating a 2D orientation using an inertial measurement unit with automatic gyroscope bias compensation will be used to illustrate the proposed methodology applied to a low power STM32L053 microcontroller. This application shows promising results with a processing time of 1.18 ms at 32 MHz with a 3.8% CPU usage due to the computation at a 26 Hz measurement and estimation rate. PMID:29206187
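The problem named in this abstract, estimating an orientation angle from a gyroscope while compensating its bias with an accelerometer-derived angle, is the classic two-state Kalman filter. A minimal sketch follows; the noise covariances are illustrative assumptions, not the paper's tuned values, and only the 26 Hz rate is taken from the abstract.

```python
# Two-state Kalman filter: state = [angle, gyro_bias] (illustrative sketch).
import numpy as np

dt = 1.0 / 26.0                          # 26 Hz estimation rate, as in the abstract
F = np.array([[1.0, -dt], [0.0, 1.0]])   # state transition for [angle, gyro_bias]
B = np.array([dt, 0.0])                  # control input: gyro rate integrates into angle
H = np.array([[1.0, 0.0]])               # accelerometer observes the angle only
Q = np.diag([1e-4, 1e-6])                # assumed process noise
R = np.array([[1e-2]])                   # assumed accelerometer angle noise

x = np.zeros(2)                          # [angle (rad), gyro bias (rad/s)]
P = np.eye(2)

def kalman_step(x, P, gyro_rate, accel_angle):
    # Predict using the gyroscope measurement as control input.
    x = F @ x + B * gyro_rate
    P = F @ P @ F.T + Q
    # Update with the accelerometer-derived angle.
    y = np.array([accel_angle]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Usage: one step with a fake gyro rate and accelerometer angle.
x, P = kalman_step(x, P, gyro_rate=0.05, accel_angle=0.01)
print(x)                                  # estimated angle and gyro bias
```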
Addressing the challenges of standalone multi-core simulations in molecular dynamics
NASA Astrophysics Data System (ADS)
Ocaya, R. O.; Terblans, J. J.
2017-07-01
Computational modelling in material science involves mathematical abstractions of force fields between particles with the aim to postulate, develop and understand materials by simulation. The aggregated pairwise interactions of the material's particles lead to a deduction of its macroscopic behaviours. For practically meaningful macroscopic scales, a large amount of data are generated, leading to vast execution times. Simulation times of hours, days or weeks for moderately sized problems are not uncommon. The reduction of simulation times, improved result accuracy and the associated software and hardware engineering challenges are the main motivations for much of the ongoing research in the computational sciences. This contribution is concerned mainly with simulations that can be done on a "standalone" computer using Message Passing Interface (MPI) based parallel code running on hardware platforms with wide specifications, such as single/multi-processor, multi-core machines with minimal reconfiguration for upward scaling of computational power. The widely available, documented and standardized MPI library provides this functionality through the MPI_Comm_size(), MPI_Comm_rank() and MPI_Reduce() functions. A survey of the literature shows that relatively little is written with respect to the efficient extraction of the inherent computational power in a cluster. In this work, we discuss the main avenues available to tap into this extra power without compromising computational accuracy. We also present methods to overcome the high inertia encountered in single-node-based computational molecular dynamics. We begin by surveying the current state of the art and discuss what it takes to achieve parallelism, efficiency and enhanced computational accuracy through program threads and message passing interfaces. Several code illustrations are given. The pros and cons of writing raw code as opposed to using heuristic, third-party code are also discussed. The growing trend towards graphical processor units and virtual computing clouds for high-performance computing is also discussed. Finally, we present the comparative results of vacancy formation energy calculations using our own parallelized standalone code called Verlet-Stormer velocity (VSV) operating on 30,000 copper atoms. The code is based on the Sutton-Chen implementation of the Finnis-Sinclair pairwise embedded atom potential. A link to the code is also given.
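The size/rank/reduce pattern the abstract refers to (MPI_Comm_size, MPI_Comm_rank, MPI_Reduce) can be sketched in Python with mpi4py: each rank sums pairwise energies for its share of atoms and the partial sums are reduced on rank 0. The toy potential, atom count, and script name are illustrative assumptions, not the authors' VSV code.

```python
# mpi4py sketch of the MPI size/rank/reduce pattern for a pairwise-energy sum.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()               # MPI_Comm_size
rank = comm.Get_rank()               # MPI_Comm_rank

rng = np.random.default_rng(42)      # same seed on every rank -> same coordinates
pos = rng.uniform(0.0, 10.0, (1000, 3))

def pair_energy(r):
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)   # toy Lennard-Jones form

# Static decomposition: rank k takes every size-th atom i and all j > i.
local = 0.0
for i in range(rank, len(pos), size):
    d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
    local += pair_energy(d[d > 1e-9]).sum()

total = comm.reduce(local, op=MPI.SUM, root=0)        # MPI_Reduce
if rank == 0:
    print(f"total pairwise energy: {total:.3f}")
# Run with e.g.: mpiexec -n 4 python pairwise_energy.py  (hypothetical file name)
```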
The development of a specialized processor for a space-based multispectral earth imager
NASA Astrophysics Data System (ADS)
Khedr, Mostafa E.
2008-10-01
This work was done in the Department of Computer Engineering, Lvov Polytechnic National University, Lvov, Ukraine, as a thesis entitled "Space Imager Computer System for Raw Video Data Processing" [1]. This work describes the synthesis and practical implementation of a specialized computer system for raw data control and processing onboard a satellite multispectral earth imager. The computer system is intended for satellites with resolution in the range of one meter and 12-bit precision. The design is based mostly on general off-the-shelf components such as field-programmable gate arrays (FPGAs), plus custom-designed software for interfacing with a PC and test equipment. The designed system was successfully manufactured and is now fully functioning in orbit.
Harnessing the Power of Light to See and Treat Breast Cancer
2015-12-01
complexity: the raw data, the most basic form of data, represents the raw numeric readout obtained from the acquisition hardware. The raw data ... has the added advantage of full-field illumination and non-descanned detection, thus lowering the complexity compared to confocal scanning systems ... complexity of images that have varying levels of contrast and non-uniform background heterogeneity. In 2004 Matas described a technique for detecting
Parallel Computing for the Computed-Tomography Imaging Spectrometer
NASA Technical Reports Server (NTRS)
Lee, Seungwon
2008-01-01
This software computes the tomographic reconstruction of spatial-spectral data from raw detector images of the Computed-Tomography Imaging Spectrometer (CTIS), which enables transient-level, multi-spectral imaging by capturing spatial and spectral information in a single snapshot.
Attitude determination for small satellites using GPS signal-to-noise ratio
NASA Astrophysics Data System (ADS)
Peters, Daniel
An embedded system for GPS-based attitude determination (AD) using signal-to-noise (SNR) measurements was developed for CubeSat applications. The design serves as an evaluation testbed for conducting ground based experiments using various computational methods and antenna types to determine the optimum AD accuracy. Raw GPS data is also stored to non-volatile memory for downloading and post analysis. Two low-power microcontrollers are used for processing and to display information on a graphic screen for real-time performance evaluations. A new parallel inter-processor communication protocol was developed that is faster and uses less power than existing standard protocols. A shorted annular patch (SAP) antenna was fabricated for the initial ground-based AD experiments with the testbed. Static AD estimations with RMS errors in the range of 2.5° to 4.8° were achieved over a range of off-zenith attitudes.
Analysis of HD 73045 light curve data
NASA Astrophysics Data System (ADS)
Das, Mrinal Kanti; Bhatraju, Naveen Kumar; Joshi, Santosh
2018-04-01
In this work we analyzed the Kepler light curve data of HD 73045. The raw data were smoothed using standard filters. The power spectrum was obtained using a fast Fourier transform routine and shows the presence of more than one period. To account for any non-stationary behavior, we carried out a wavelet analysis to obtain the wavelet power spectrum. In addition, to identify scale-invariant structure, the data were analyzed using multifractal detrended fluctuation analysis. Further, to characterize the diversity of embedded patterns in the HD 73045 flux time series, we computed various entropy-based complexity measures, e.g. sample entropy, spectral entropy and permutation entropy. The presence of periodic structure in the time series was further analyzed using the visibility network and horizontal visibility network models of the time series. The degree distributions of the two network models confirm such structures.
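Two of the analyses named above, the FFT power spectrum and the permutation entropy, can be sketched in a few lines of Python. The filter settings, the wavelet analysis and the MFDFA step are not reproduced here, and the embedding order and delay defaults are illustrative choices rather than the values used in the study.

import numpy as np
from itertools import permutations

def power_spectrum(flux, dt):
    """One-sided FFT power spectrum of an evenly sampled light curve."""
    flux = np.asarray(flux, dtype=float) - np.mean(flux)
    power = np.abs(np.fft.rfft(flux)) ** 2
    freq = np.fft.rfftfreq(len(flux), d=dt)
    return freq, power

def permutation_entropy(x, order=3, delay=1):
    """Normalized Bandt-Pompe permutation entropy (order and delay are illustrative defaults)."""
    patterns = list(permutations(range(order)))
    counts = np.zeros(len(patterns))
    for i in range(len(x) - delay * (order - 1)):
        window = x[i:i + delay * order:delay]
        counts[patterns.index(tuple(int(v) for v in np.argsort(window)))] += 1
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(len(patterns)))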
Nephele: a cloud platform for simplified, standardized and reproducible microbiome data analysis.
Weber, Nick; Liou, David; Dommer, Jennifer; MacMenamin, Philip; Quiñones, Mariam; Misner, Ian; Oler, Andrew J; Wan, Joe; Kim, Lewis; Coakley McCarthy, Meghan; Ezeji, Samuel; Noble, Karlynn; Hurt, Darrell E
2018-04-15
Widespread interest in the study of the microbiome has resulted in data proliferation and the development of powerful computational tools. However, many scientific researchers lack the time, training, or infrastructure to work with large datasets or to install and use command line tools. The National Institute of Allergy and Infectious Diseases (NIAID) has created Nephele, a cloud-based microbiome data analysis platform with standardized pipelines and a simple web interface for transforming raw data into biological insights. Nephele integrates common microbiome analysis tools as well as valuable reference datasets like the healthy human subjects cohort of the Human Microbiome Project (HMP). Nephele is built on the Amazon Web Services cloud, which provides centralized and automated storage and compute capacity, thereby reducing the burden on researchers and their institutions. https://nephele.niaid.nih.gov and https://github.com/niaid/Nephele. darrell.hurt@nih.gov.
AMRITA -- A computational facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shepherd, J.E.; Quirk, J.J.
1998-02-23
Amrita is a software system for automating numerical investigations. The system is driven using its own powerful scripting language, Amrita, which facilitates both the composition and archiving of complete numerical investigations, as distinct from isolated computations. Once archived, an Amrita investigation can later be reproduced by any interested party, and not just the original investigator, for no cost other than the raw CPU time needed to parse the archived script. In fact, this entire lecture can be reconstructed in such a fashion. To do this, the script: constructs a number of shock-capturing schemes; runs a series of test problems; generates the plots shown; outputs the LaTeX to typeset the notes; and performs a myriad of behind-the-scenes tasks to glue everything together. Thus Amrita has all the characteristics of an operating system and should not be mistaken for a common-or-garden code.
Nephele: a cloud platform for simplified, standardized and reproducible microbiome data analysis
Weber, Nick; Liou, David; Dommer, Jennifer; MacMenamin, Philip; Quiñones, Mariam; Misner, Ian; Oler, Andrew J; Wan, Joe; Kim, Lewis; Coakley McCarthy, Meghan; Ezeji, Samuel; Noble, Karlynn; Hurt, Darrell E
2018-01-01
Motivation: Widespread interest in the study of the microbiome has resulted in data proliferation and the development of powerful computational tools. However, many scientific researchers lack the time, training, or infrastructure to work with large datasets or to install and use command line tools. Results: The National Institute of Allergy and Infectious Diseases (NIAID) has created Nephele, a cloud-based microbiome data analysis platform with standardized pipelines and a simple web interface for transforming raw data into biological insights. Nephele integrates common microbiome analysis tools as well as valuable reference datasets like the healthy human subjects cohort of the Human Microbiome Project (HMP). Nephele is built on the Amazon Web Services cloud, which provides centralized and automated storage and compute capacity, thereby reducing the burden on researchers and their institutions. Availability and implementation: https://nephele.niaid.nih.gov and https://github.com/niaid/Nephele. Contact: darrell.hurt@nih.gov. PMID:29028892
CT radiation profile width measurement using CR imaging plate raw data
Yang, Chang‐Ying Joseph
2015-01-01
This technical note demonstrates computed tomography (CT) radiation profile measurement using computed radiography (CR) imaging plate raw data, showing that it is possible to perform the CT collimation width measurement with a single scan without saturating the imaging plate. Previously described methods require careful adjustments to the CR reader settings in order to avoid signal clipping in the CR processed image. CT radiation profile measurements were taken as part of routine quality control on 14 CT scanners from four vendors. CR cassettes were placed on the CT scanner bed, raised to isocenter, and leveled. Axial scans were taken at all available collimations, advancing the cassette for each scan. The CR plates were processed and raw CR data were analyzed using MATLAB scripts to measure collimation widths. The raw data approach was compared with previously established methodology. The quality control analysis scripts are released as open source using Creative Commons licensing. A log‐linear relationship was found between raw pixel value and air kerma, and raw data collimation width measurements were in agreement with CR‐processed, bit‐reduced data, using previously described methodology. The raw data approach, with intrinsically wider dynamic range, allows improved measurement flexibility and precision. As a result, we demonstrate a methodology for CT collimation width measurements using a single CT scan and without the need for CR scanning parameter adjustments, which is more convenient for routine quality control work. PACS numbers: 87.57.Q‐, 87.59.bd, 87.57.uq PMID:26699559
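A hedged sketch of the two computational steps described is given below. The authors' open-source scripts are in MATLAB; this Python version, the calibration coefficient names and the FWHM definition of the profile width are illustrative assumptions rather than the published implementation.

import numpy as np

def kerma_from_raw(raw, slope, intercept):
    """Invert an assumed log-linear calibration: log10(kerma) = slope * raw + intercept."""
    return 10.0 ** (slope * np.asarray(raw, dtype=float) + intercept)

def collimation_width(dose_profile, pixel_pitch_mm):
    """Full width at half maximum of a dose profile sampled along the CR plate."""
    profile = np.asarray(dose_profile, dtype=float)
    above = np.where(profile >= 0.5 * profile.max())[0]
    return (above[-1] - above[0] + 1) * pixel_pitch_mm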
NASA Astrophysics Data System (ADS)
Zack, J. W.
2015-12-01
Predictions from Numerical Weather Prediction (NWP) models are the foundation for wind power forecasts for day-ahead and longer forecast horizons. The NWP models directly produce three-dimensional wind forecasts on their respective computational grids. These can be interpolated to the location and time of interest. However, these direct predictions typically contain significant systematic errors ("biases"). This is due to a variety of factors including the limited space-time resolution of the NWP models and shortcomings in the model's representation of physical processes. It has become common practice to attempt to improve the raw NWP forecasts by statistically adjusting them through a procedure that is widely known as Model Output Statistics (MOS). The challenge is to identify complex patterns of systematic errors and then use this knowledge to adjust the NWP predictions. The MOS-based improvements are the basis for much of the value added by commercial wind power forecast providers. There are an enormous number of statistical approaches that can be used to generate the MOS adjustments to the raw NWP forecasts. In order to obtain insight into the potential value of some of the newer and more sophisticated statistical techniques, often referred to as "machine learning methods", a MOS-method comparison experiment has been performed for wind power generation facilities in 6 wind resource areas of California. The underlying NWP models that provided the raw forecasts were the two primary operational models of the US National Weather Service: the GFS and NAM models. The focus was on 1- and 2-day-ahead forecasts of the hourly wind-based generation. The statistical methods evaluated included: (1) screening multiple linear regression, which served as a baseline method, (2) artificial neural networks, (3) a decision-tree approach called random forests, (4) gradient boosted regression based upon a decision-tree algorithm, (5) support vector regression and (6) analog ensemble, which is a case-matching scheme. The presentation will provide (1) an overview of each method and the experimental design, (2) performance comparisons based on standard metrics such as bias, MAE and RMSE, (3) a summary of the performance characteristics of each approach and (4) a preview of further experiments to be conducted.
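The sketch below illustrates the kind of MOS-method comparison described, fitting several scikit-learn regressors that stand in for methods (1) to (5) and comparing bias and MAE on held-out data. The data here are synthetic placeholders rather than the California generation records, the analog ensemble is omitted, and all hyperparameters are illustrative.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# X stands in for raw NWP predictors (e.g. forecast wind speed components);
# y stands in for observed hourly generation. Both are synthetic here.
rng = np.random.default_rng(1)
X = rng.random((2000, 4))
y = 0.8 * X[:, 0] + 0.1 * np.sin(6.0 * X[:, 1]) + 0.05 * rng.standard_normal(2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
models = {
    "multiple linear regression (baseline)": LinearRegression(),
    "artificial neural network": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gradient boosted regression": GradientBoostingRegressor(random_state=0),
    "support vector regression": SVR(C=10.0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name:40s} bias={np.mean(pred - y_te):+.4f}  MAE={mean_absolute_error(y_te, pred):.4f}")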
Automated Production of Movies on a Cluster of Computers
NASA Technical Reports Server (NTRS)
Nail, Jasper; Le, Duong; Nail, William L.; Nail, William
2008-01-01
A method of accelerating and facilitating production of video and film motion-picture products, and software and generic designs of computer hardware to implement the method, are undergoing development. The method provides for automation of most of the tedious and repetitive tasks involved in editing and otherwise processing raw digitized imagery into final motion-picture products. The method was conceived to satisfy requirements, in industrial and scientific testing, for rapid processing of multiple streams of simultaneously captured raw video imagery into documentation in the form of edited video imagery and video derived data products for technical review and analysis. In the production of such video technical documentation, unlike in production of motion-picture products for entertainment, (1) it is often necessary to produce multiple video derived data products, (2) there are usually no second chances to repeat acquisition of raw imagery, (3) it is often desired to produce final products within minutes rather than hours, days, or months, and (4) consistency and quality, rather than aesthetics, are the primary criteria for judging the products. In the present method, the workflow has both serial and parallel aspects: processing can begin before all the raw imagery has been acquired, each video stream can be subjected to different stages of processing simultaneously on different computers that may be grouped into one or more cluster(s), and the final product may consist of multiple video streams. Results of processing on different computers are shared, so that workers can collaborate effectively.
A new method to cluster genomes based on cumulative Fourier power spectrum.
Dong, Rui; Zhu, Ziyue; Yin, Changchuan; He, Rong L; Yau, Stephen S-T
2018-06-20
Analyzing phylogenetic relationships using mathematical methods has always been of importance in bioinformatics. Quantitative research may interpret raw biological data in a precise way. Multiple Sequence Alignment (MSA) is used frequently to analyze biological evolution, but it is very time-consuming. When the scale of the data is large, alignment methods cannot finish the calculation in a reasonable time. Therefore, we present a new method using moments of the cumulative Fourier power spectrum for clustering DNA sequences. Each sequence is translated into a vector in Euclidean space. Distances between the vectors can reflect the relationships between sequences. The mapping between the spectra and the moment vector is one-to-one, which means that no information is lost in the power spectra during the calculation. We cluster and classify several datasets, including Influenza A, primates, and human rhinovirus (HRV) datasets, to build up the phylogenetic trees. Results show that the newly proposed cumulative Fourier power spectrum method is much faster and more accurate than MSA and another alignment-free method known as k-mer. The research provides new insights into the study of phylogeny, evolution, and efficient DNA comparison algorithms for large genomes. The computer programs of the cumulative Fourier power spectrum are available at GitHub (https://github.com/YaulabTsinghua/cumulative-Fourier-power-spectrum). Copyright © 2018. Published by Elsevier B.V.
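One plausible reading of the sequence-to-vector mapping is sketched below: one indicator signal per nucleotide, an FFT power spectrum, its cumulative sum, and a fixed number of normalized moments, so that sequences of different lengths map to vectors of equal dimension. The exact moment definition used by the authors is in their GitHub code; the normalization here is an illustrative assumption.

import numpy as np

def cfps_vector(seq, n_moments=5):
    """Moment vector of the cumulative Fourier power spectrum of a DNA sequence (illustrative)."""
    vec = []
    for base in "ACGT":
        u = np.array([1.0 if c == base else 0.0 for c in seq.upper()])
        ps = np.abs(np.fft.fft(u)[1:]) ** 2      # power spectrum, DC term dropped
        cps = np.cumsum(ps)
        total = cps[-1] if cps[-1] > 0 else 1.0  # guard against a base that never occurs
        k = np.arange(1, len(cps) + 1)
        for m in range(1, n_moments + 1):
            vec.append(np.sum(cps * k**m) / (len(cps) ** m * total))   # normalized moment (assumed form)
    return np.array(vec)

def spectrum_distance(seq_a, seq_b):
    """Euclidean distance between two sequences in the moment space."""
    return float(np.linalg.norm(cfps_vector(seq_a) - cfps_vector(seq_b)))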
40 CFR 89.404 - Test procedure overview.
Code of Federal Regulations, 2010 CFR
2010-07-01
... engine operating conditions to be conducted on an engine dynamometer. The exhaust gases, generated raw or... matter. For more information on particulate matter sampling see § 89.112(c). The test cycles consist of... (raw analysis), and the power output during each mode. The measured values are weighted and used to...
B-MIC: An Ultrafast Three-Level Parallel Sequence Aligner Using MIC.
Cui, Yingbo; Liao, Xiangke; Zhu, Xiaoqian; Wang, Bingqiang; Peng, Shaoliang
2016-03-01
Sequence alignment, which maps raw sequencing data to a reference genome, is the central process in sequence analysis. The large amount of data generated by NGS is far beyond the processing capabilities of existing alignment tools. Consequently, sequence alignment becomes the bottleneck of sequence analysis. Intensive computing power is required to address this challenge. Intel recently announced the MIC coprocessor, which can provide massive computing power. The Tianhe-2, currently the world's fastest supercomputer, is equipped with three MIC coprocessors per compute node. A key feature of sequence alignment is that different reads are independent. Considering this property, we proposed a MIC-oriented three-level parallelization strategy to speed up BWA, a widely used sequence alignment tool, and developed our ultrafast parallel sequence aligner: B-MIC. B-MIC contains three levels of parallelization: firstly, parallelization of data IO and read alignment by a three-stage parallel pipeline; secondly, parallelization enabled by MIC coprocessor technology; thirdly, inter-node parallelization implemented by MPI. In this paper, we demonstrate that B-MIC outperforms BWA by a combination of those techniques using an Inspur NF5280M server and the Tianhe-2 supercomputer. To the best of our knowledge, B-MIC is the first sequence alignment tool to run on the Intel MIC, and it can achieve more than fivefold speedup over the original BWA while maintaining the alignment precision.
Sub-Second Parallel State Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Rice, Mark J.; Glaesemann, Kurt R.
This report describes the performance of the Pacific Northwest National Laboratory (PNNL) sub-second parallel state estimation (PSE) tool using utility data from the Bonneville Power Administration (BPA) and discusses the benefits of the fast computational speed for power system applications. The test data were provided by BPA. They are two days' worth of hourly snapshots that include power system data and measurement sets in a commercial tool format. These data are extracted from the commercial tool and fed into the PSE tool. With the help of advanced solvers, the PSE tool is able to solve each BPA hourly state estimation problem within one second, which is more than 10 times faster than today's commercial tool. This improved computational performance can help increase the reliability value of state estimation in many aspects: (1) the shorter the time required for execution of state estimation, the more time remains for operators to take appropriate actions, and/or to apply automatic or manual corrective control actions. This increases the chances of arresting or mitigating the impact of cascading failures; (2) the SE can be executed multiple times within the time allowance. Therefore, the robustness of SE can be enhanced by repeating the execution of the SE with adaptive adjustments, including removing bad data and/or adjusting different initial conditions to compute a better estimate within the same time as a traditional state estimator's single estimate. There are other benefits of the sub-second SE: the PSE results can potentially be used in local and/or wide-area automatic corrective control actions that are currently dependent on raw measurements, minimizing the impact of bad measurements and providing opportunities to enhance power grid reliability and efficiency. PSE can also enable other advanced tools that rely on SE outputs and could be used to further improve operators' actions and automated controls to mitigate the effects of severe events on the grid. The power grid continues to grow and the number of measurements is increasing at an accelerated rate due to the variety of smart grid devices being introduced. A parallel state estimation implementation will have better performance than traditional, sequential state estimation by utilizing the power of high performance computing (HPC). This increased performance positions parallel state estimators as valuable tools for operating the increasingly more complex power grid.
Operation of the Australian Store.Synchrotron for macromolecular crystallography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Grischa R.; Aragão, David; Mudie, Nathan J.
2014-10-01
The Store.Synchrotron service, a fully functional, cloud computing-based solution to raw X-ray data archiving and dissemination at the Australian Synchrotron, is described. The service automatically receives and archives raw diffraction data, related metadata and preliminary results of automated data-processing workflows. Data are able to be shared with collaborators and opened to the public. In the nine months since its deployment in August 2013, the service has handled over 22.4 TB of raw data (∼1.7 million diffraction images). Several real examples from the Australian crystallographic community are described that illustrate the advantages of the approach, which include real-time online data access and fully redundant, secure storage. Discoveries in biological sciences increasingly require multidisciplinary approaches. With this in mind, Store.Synchrotron has been developed as a component within a greater service that can combine data from other instruments at the Australian Synchrotron, as well as instruments at the Australian neutron source ANSTO. It is therefore envisaged that this will serve as a model implementation of raw data archiving and dissemination within the structural biology research community.
Simulation of fuel demand for wood-gas in combustion engine
NASA Astrophysics Data System (ADS)
Botwinska, Katarzyna; Mruk, Remigiusz; Tucki, Karol; Wata, Mateusz
2017-10-01
In the era of the oil crisis and the ongoing contamination of the natural environment, attempts are being made to substitute fossil raw materials with alternative energy carriers. For many years, road transport has been considered one of the main sources of substances that deteriorate air quality. Applicable European directives oblige the member states to implement biofuels and biocomponents into the general fuel market; however, this process is proceeding gradually and relatively slowly. So far, alternative fuels have been used on a large scale to substitute diesel fuel or petrol. Derivatives of vegetable raw materials, such as vegetable oils or their esters and ethanol extracted from biomass, are used to that end. It has been noted that there is no alternative to LPG, which, for financial reasons, is increasingly popular as a fuel in passenger cars. In relation to solutions adopted in the past, it has been decided to analyse the option of powering a modern passenger car with wood gas (syngas). Such fuel has been in practical use since the 1920s. To that end, a computer simulation created in the SciLab environment was carried out. A Fiat Seicento passenger car, fitted with a Fire 1.1 8V petrol engine rated at 40 kW, whose parameters were used to prepare the model, was selected as the model vehicle. The simulation allows the determination of engine demand for a given fuel. Apart from the wood gas included in the title, petrol, methane and LPG were used. Additionally, the created model enables the determination of engine power when each of the indicated fuels is supplied. The results obtained in the simulation revealed a considerable decrease in engine power when wood gas was supplied, along with increased consumption of this fuel. On the basis of the analysis of the professional literature, which describes numerous inconveniences connected with the use of this fuel, as well as the obtained results, it has been concluded that using wood gas as an alternative fuel is currently unjustified.
Computational Challenges in Processing the Q1-Q16 Kepler Data Set
NASA Astrophysics Data System (ADS)
Klaus, Todd C.; Henze, C.; Twicken, J. D.; Hall, J.; McCauliff, S. D.; Girouard, F.; Cote, M.; Morris, R. L.; Clarke, B.; Jenkins, J. M.; Caldwell, D.; Kepler Science Operations Center
2013-10-01
Since launch on March 6th, 2009, NASA’s Kepler Space Telescope has collected 48 months of data on over 195,000 targets. The raw data are rife with instrumental and astrophysical noise that must be removed in order to detect and model the transit-like signals present in the data. Calibrating the raw pixels, generating and correcting the flux light curves, and detecting and characterizing the signals require significant computational power. In addition, the algorithms that make up the Kepler Science Pipeline and their parameters are still undergoing changes (most of which increase the computational cost), creating the need to reprocess the entire data set on a regular basis. We discuss how we have ported all of the core elements of the pipeline to the Pleiades cluster at the NASA Advanced Supercomputing (NAS) Division, the needs driving the port, and the technical challenges we faced. In 2011 we ported the Transiting Planet Search (TPS) and Data Validation (DV) modules to Pleiades. These pipeline modules operate on the full data set and the computational complexity increases roughly by the square of the number of data points. At the time of the port it had become infeasible to run these modules on our local hardware, necessitating the move to Pleiades. In 2012 and 2013 we turned our attention to the front end of the pipeline; Pixel-level Calibration (CAL), Photometric Analysis (PA), and Pre-Search Data Conditioning (PDC). Porting these modules to Pleiades will allow us to reprocess the complete data set on a more frequent basis. The last time we reprocessed all data for the front end we only had 24 months of data. We estimate that the full 48-month data set would take over 200 days to complete on local hardware. When the port is complete we expect to reprocess this data set on Pleiades in about a month. The NASA Science Mission Directorate provided funding for the Kepler Mission.
NASA Astrophysics Data System (ADS)
Cook, Perry R.
This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).
Crotta, Matteo; Paterlini, Franco; Rizzi, Rita; Guitian, Javier
2016-02-01
Foodborne disease as a result of raw milk consumption is an increasing concern in Western countries. Quantitative microbial risk assessment models have been used to estimate the risk of illness due to different pathogens in raw milk. In these models, the duration and temperature of storage before consumption have a critical influence on the final outcome of the simulations and are usually described and modeled as independent distributions in the consumer phase module. We hypothesize that this assumption can result in the computation, during simulations, of extreme scenarios that ultimately lead to an overestimation of the risk. In this study, a sensory analysis was conducted to replicate consumers' behavior. The results of the analysis were used to establish, by means of a logistic model, the relationship between time-temperature combinations and the probability that a serving of raw milk is actually consumed. To assess our hypothesis, 2 recently published quantitative microbial risk assessment models quantifying the risks of listeriosis and salmonellosis related to the consumption of raw milk were implemented. First, the default settings described in the publications were kept; second, the likelihood of consumption as a function of the length and temperature of storage was included. When results were compared, the density of computed extreme scenarios decreased significantly in the modified model; consequently, the probability of illness and the expected number of cases per year also decreased. Reductions of 11.6 and 12.7% in the proportion of computed scenarios in which a contaminated milk serving was consumed were observed for the first and the second study, respectively. Our results confirm that overlooking the time-temperature dependency may lead to a substantial overestimation of the risk. Furthermore, we provide estimates of this dependency that could easily be implemented in future quantitative microbial risk assessment models of raw milk pathogens. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
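The role of the consumption-probability model can be illustrated with a toy Monte Carlo in Python. All distributions, the logistic coefficients and the growth and dose-response expressions below are placeholders, not the values estimated in the study; the point is only that scenarios with long, warm storage are down-weighted by the probability that such a serving would actually be drunk.

import numpy as np

rng = np.random.default_rng(42)
n_iter = 100_000

# Storage scenario sampled per iteration (illustrative distributions).
time_h = rng.uniform(0.0, 96.0, n_iter)                 # storage time before consumption (h)
temp_c = rng.normal(7.0, 3.0, n_iter).clip(0.0, 25.0)   # storage temperature (deg C)

# Hypothetical logistic model: P(serving is actually consumed | time, temperature).
b0, b1, b2 = 6.0, -0.05, -0.25                           # placeholder coefficients
p_consumed = 1.0 / (1.0 + np.exp(-(b0 + b1 * time_h + b2 * temp_c)))
consumed = rng.random(n_iter) < p_consumed

# Toy growth and dose-response, only to give each scenario a risk value.
log10_dose = -2.0 + 0.02 * temp_c * time_h
risk = 1.0 - np.exp(-1e-6 * 10.0 ** log10_dose)

print("mean risk, consumption behaviour ignored:", risk.mean())
print("mean risk, weighted by P(consumed):      ", np.mean(risk * consumed))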
Power transformation for enhancing responsiveness of quality of life questionnaire.
Zhou, YanYan Ange
2015-01-01
We investigate the effect of power transformation of raw scores on the responsiveness of a quality of life survey. The procedure maximizes the paired t-test value on the power-transformed data to obtain an optimal power range. The parallel with the Box-Cox transformation is also investigated for the quality of life data.
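A minimal sketch of the maximization step is shown below; the grid of candidate powers is an arbitrary choice, and the paper's optimal power range and its comparison with the Box-Cox transformation are not reproduced.

import numpy as np
from scipy import stats

def best_power(pre, post, powers=np.linspace(0.1, 3.0, 60)):
    """Return the exponent that maximizes the paired t-statistic on transformed scores.

    pre, post : positive raw scores before and after treatment (same subjects).
    The candidate grid of powers is illustrative, not taken from the paper.
    """
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    return max(powers, key=lambda lam: abs(stats.ttest_rel(post**lam, pre**lam).statistic))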
Ebihara, Akira; Tanaka, Yuichi; Konno, Takehiko; Kawasaki, Shingo; Fujiwara, Michiyuki; Watanabe, Eiju
2013-10-01
The diagnosis and medical treatment of cerebral ischemia are becoming more important due to the increase in the prevalence of cerebrovascular disease. However, conventional methods of evaluating cerebral perfusion have several drawbacks: they are invasive, require physical restraint, and the equipment is not portable, which makes repeated measurements at the bedside difficult. An alternative method is developed using near-infrared spectroscopy (NIRS). NIRS signals are measured at 44 positions (22 on each side) on the fronto-temporal areas in 20 patients with cerebral ischemia. In order to extract the pulse-wave component, the raw total hemoglobin data recorded from each position are band-pass filtered (0.8 to 2.0 Hz) and subjected to a fast Fourier transform to obtain the power spectrum of the pulse wave. The ischemic region is determined by single-photon emission computed tomography. The pulse-wave power in the ischemic region is compared with that in the symmetrical region on the contralateral side. In 17 cases (85%), the pulse-wave power on the ischemic side is significantly lower than that on the contralateral side, which indicates that the transmission of the pulse wave is attenuated in the region with reduced blood flow. Pulse-wave power might be useful as a noninvasive marker of cerebral ischemia.
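A brief Python sketch of the band-pass-and-power computation described above is given here; the filter order, the use of scipy in place of the authors' processing chain, and the simple band-power metric are illustrative assumptions.

import numpy as np
from scipy.signal import butter, filtfilt

def pulse_wave_power(total_hb, fs):
    """Band-pass (0.8-2.0 Hz) a total-haemoglobin trace and return its pulse-band power."""
    b, a = butter(4, [0.8, 2.0], btype="bandpass", fs=fs)   # 4th order is an arbitrary choice
    pulse = filtfilt(b, a, np.asarray(total_hb, dtype=float))
    power = np.abs(np.fft.rfft(pulse)) ** 2
    freq = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
    return float(power[(freq >= 0.8) & (freq <= 2.0)].sum())

# Side-to-side comparison for one channel pair (hypothetical variable names):
# ratio = pulse_wave_power(ischemic_channel, fs) / pulse_wave_power(contralateral_channel, fs)
# ratio < 1 would indicate attenuated pulse-wave power over the ischemic region.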
Real-Time and Secure Wireless Health Monitoring
Dağtaş, S.; Pekhteryev, G.; Şahinoğlu, Z.; Çam, H.; Challa, N.
2008-01-01
We present a framework for a wireless health monitoring system using wireless networks such as ZigBee. Vital signals are collected and processed using a 3-tiered architecture. The first stage is the mobile device carried on the body that runs a number of wired and wireless probes. This device is also designed to perform some basic processing such as the heart rate and fatal failure detection. At the second stage, further processing is performed by a local server using the raw data transmitted by the mobile device continuously. The raw data is also stored at this server. The processed data as well as the analysis results are then transmitted to the service provider center for diagnostic reviews as well as storage. The main advantages of the proposed framework are (1) the ability to detect signals wirelessly within a body sensor network (BSN), (2) low-power and reliable data transmission through ZigBee network nodes, (3) secure transmission of medical data over BSN, (4) efficient channel allocation for medical data transmission over wireless networks, and (5) optimized analysis of data using an adaptive architecture that maximizes the utility of processing and computational capacity at each platform. PMID:18497866
Wang, Duolin; Zeng, Shuai; Xu, Chunhui; Qiu, Wangren; Liang, Yanchun; Joshi, Trupti; Xu, Dong
2017-12-15
Computational methods for phosphorylation site prediction play important roles in protein function studies and experimental design. Most existing methods are based on feature extraction, which may result in incomplete or biased features. Deep learning as the cutting-edge machine learning method has the ability to automatically discover complex representations of phosphorylation patterns from the raw sequences, and hence it provides a powerful tool for improvement of phosphorylation site prediction. We present MusiteDeep, the first deep-learning framework for predicting general and kinase-specific phosphorylation sites. MusiteDeep takes raw sequence data as input and uses convolutional neural networks with a novel two-dimensional attention mechanism. It achieves over a 50% relative improvement in the area under the precision-recall curve in general phosphorylation site prediction and obtains competitive results in kinase-specific prediction compared to other well-known tools on the benchmark data. MusiteDeep is provided as an open-source tool available at https://github.com/duolinwang/MusiteDeep. xudong@missouri.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Typification of cider brandy on the basis of cider used in its manufacture.
Rodríguez Madrera, Roberto; Mangas Alonso, Juan J
2005-04-20
A study of typification of cider brandies on the basis of the origin of the raw material used in their manufacture was conducted using chemometric techniques (principal component analysis, linear discriminant analysis, and Bayesian analysis) together with their composition in volatile compounds, as analyzed by gas chromatography with flame ionization detection for the major volatiles and by mass spectrometric detection for the minor ones. Significant principal components computed by a double cross-validation procedure allowed the structure of the database to be visualized as a function of the raw material, that is, cider made from fresh apple juice versus cider made from apple juice concentrate. Feasible and robust discriminant rules were computed and validated by a cross-validation procedure that allowed the authors to classify fresh and concentrate cider brandies, obtaining classification hits of >92%. The most discriminating variables for typifying cider brandies according to their raw material were 1-butanol and ethyl hexanoate.
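A compact sketch of the chemometric workflow (standardization, PCA, then LDA with cross-validation) is shown below; the data are random placeholders standing in for the volatile-compound concentrations, and the number of retained components and the 5-fold scheme are illustrative choices rather than the published double cross-validation.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Rows: brandy samples; columns: volatile compounds (e.g. 1-butanol, ethyl hexanoate, ...).
# Labels: 0 = fresh-juice cider brandy, 1 = concentrate cider brandy. All values are synthetic.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (40, 12)), rng.normal(0.8, 1.0, (40, 12))])
y = np.repeat([0, 1], 40)

model = make_pipeline(StandardScaler(), PCA(n_components=5), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated classification rate: %.1f%%" % (100.0 * scores.mean()))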
NASA Astrophysics Data System (ADS)
Vontobel, G.; Schelders, C.; Real, M.
A flux analyzing system (F.A.S.) was installed at the central receiver system of the SSPS project to determine the relative flux distribution of the heliostat field and to measure the entire optical solar flux reflected from the heliostat field into the receiver cavity. The functional principles of the F.A.S. are described. The raw data and the evaluation of the measurements of the entire heliostat field are given, and an approach to determine the actual fluxes which hit the receiver tube bundle is presented. A method is described to qualify the performance of each heliostat using a computer code. The direct radiation measurement data are also presented.
10 CFR 2.1003 - Availability of material.
Code of Federal Regulations, 2013 CFR
2013-01-01
... its license application for a geologic repository, the NRC shall make available no later than thirty... privilege in § 2.1006, graphic-oriented documentary material that includes raw data, computer runs, computer... discrepancies; (ii) Gauge, meter and computer settings; (iii) Probe locations; (iv) Logging intervals and rates...
10 CFR 2.1003 - Availability of material.
Code of Federal Regulations, 2014 CFR
2014-01-01
... its license application for a geologic repository, the NRC shall make available no later than thirty... privilege in § 2.1006, graphic-oriented documentary material that includes raw data, computer runs, computer... discrepancies; (ii) Gauge, meter and computer settings; (iii) Probe locations; (iv) Logging intervals and rates...
Situational Awareness from a Low-Cost Camera System
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.; Ward, David; Lesage, John
2010-01-01
A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased coverage without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
Measurement of heart sounds with EMFi transducer.
Kärki, Satu; Kääriäinen, Minna; Lekkala, Jukka
2007-01-01
A measurement system for heart sounds was implemented by using ElectroMechanical Film (EMFi). Heart sounds are produced by the vibrations of the cardiac structure. An EMFi transducer attached to the skin of the chest wall converts these mechanical vibrations into an electrical signal. Furthermore, the signal is amplified and transmitted to the computer. The data is analyzed with Matlab software. The low-frequency components of the measured signal (respiration and pulsation of the heart) are filtered out as well as the 50 Hz noise. Also the power spectral density (PSD) plot is computed. In test measurements, the signal was measured with respiration and by holding breath. From the filtered signal, the first (S1) and the second (S2) heart sound can be clearly seen in both cases. In addition, from the raw data signals the respiration frequency and the heart rate can be determined. In future applications, with the EMFi material it is possible to implement a plaster-like transducer measuring vital signals.
Analysis of Protein Kinetics Using Fluorescence Recovery After Photobleaching (FRAP).
Giakoumakis, Nickolaos Nikiforos; Rapsomaniki, Maria Anna; Lygerou, Zoi
2017-01-01
Fluorescence recovery after photobleaching (FRAP) is a cutting-edge live-cell functional imaging technique that enables the exploration of protein dynamics in individual cells and thus permits the elucidation of protein mobility, function, and interactions at a single-cell level. During a typical FRAP experiment, fluorescent molecules in a defined region of interest within the cell are bleached by a short and powerful laser pulse, while the recovery of the fluorescence in the region is monitored over time by time-lapse microscopy. FRAP experimental setup and image acquisition involve a number of steps that need to be carefully executed to avoid technical artifacts. Equally important is the subsequent computational analysis of FRAP raw data, to derive quantitative information on protein diffusion and binding parameters. Here we present an integrated in vivo and in silico protocol for the analysis of protein kinetics using FRAP. We focus on the most commonly encountered challenges and technical or computational pitfalls and their troubleshooting so that valid and robust insight into protein dynamics within living cells is gained.
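The simplest quantitative treatment of a normalized FRAP curve is a single-exponential recovery fit, sketched below; this is a common first-pass model, not necessarily the one used in the protocol, and it assumes the intensities have already been background-corrected and normalized to the pre-bleach level.

import numpy as np
from scipy.optimize import curve_fit

def frap_recovery(t, mobile_fraction, tau):
    """Single-exponential recovery: I(t) = mobile_fraction * (1 - exp(-t / tau))."""
    return mobile_fraction * (1.0 - np.exp(-t / tau))

def fit_frap(time_s, intensity):
    """Fit normalized post-bleach intensities; return mobile fraction and recovery half-time."""
    p0 = [0.8, 5.0]                                        # illustrative starting guesses
    (mf, tau), _ = curve_fit(frap_recovery, np.asarray(time_s, float),
                             np.asarray(intensity, float), p0=p0,
                             bounds=([0.0, 1e-3], [1.0, np.inf]))
    return mf, tau * np.log(2.0)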
Event Reconstruction for Many-core Architectures using Java
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graf, Norman A.; /SLAC
Although Moore's Law remains technically valid, the performance enhancements in computing which traditionally resulted from increased CPU speeds ended years ago. Chip manufacturers have chosen to increase the number of core CPUs per chip instead of increasing clock speed. Unfortunately, these extra CPUs do not automatically result in improvements in simulation or reconstruction times. To take advantage of this extra computing power requires changing how software is written. Event reconstruction is globally serial, in the sense that raw data has to be unpacked first, channels have to be clustered to produce hits before those hits are identified as belonging to a track or shower, tracks have to be found and fit before they are vertexed, etc. However, many of the individual procedures along the reconstruction chain are intrinsically independent and are perfect candidates for optimization using multi-core architecture. Threading is perhaps the simplest approach to parallelizing a program and Java includes a powerful threading facility built into the language. We have developed a fast and flexible reconstruction package (org.lcsim) written in Java that has been used for numerous physics and detector optimization studies. In this paper we present the results of our studies on optimizing the performance of this toolkit using multiple threads on many-core architectures.
A web portal for hydrodynamical, cosmological simulations
NASA Astrophysics Data System (ADS)
Ragagnin, A.; Dolag, K.; Biffi, V.; Cadolle Bel, M.; Hammer, N. J.; Krukau, A.; Petkova, M.; Steinborn, D.
2017-07-01
This article describes a data centre hosting a web portal for accessing and sharing the output of large, cosmological, hydro-dynamical simulations with a broad scientific community. It also allows users to receive related scientific data products by directly processing the raw simulation data on a remote computing cluster. The data centre has a multi-layer structure: a web portal, a job control layer, a computing cluster and a HPC storage system. The outer layer enables users to choose an object from the simulations. Objects can be selected by visually inspecting 2D maps of the simulation data, by performing highly compounded and elaborated queries or graphically by plotting arbitrary combinations of properties. The user can run analysis tools on a chosen object. These services allow users to run analysis tools on the raw simulation data. The job control layer is responsible for handling and performing the analysis jobs, which are executed on a computing cluster. The innermost layer is formed by a HPC storage system which hosts the large, raw simulation data. The following services are available for the users: (I) CLUSTERINSPECT visualizes properties of member galaxies of a selected galaxy cluster; (II) SIMCUT returns the raw data of a sub-volume around a selected object from a simulation, containing all the original, hydro-dynamical quantities; (III) SMAC creates idealized 2D maps of various, physical quantities and observables of a selected object; (IV) PHOX generates virtual X-ray observations with specifications of various current and upcoming instruments.
The utilization of nonterrestrial materials. [resources for solar power satellite construction
NASA Technical Reports Server (NTRS)
1981-01-01
The development of research and technology programs on the use of nonterrestrial materials for space applications was considered, with emphasis on the space power satellite system as a model of large space systems for which the use of nonterrestrial materials may be economically viable. Sample topics discussed include the mining of raw materials and the conversion of raw materials into useful products. These topics were considered against a background of the comparative costs of using terrestrial materials. Exploratory activities involved in the preparation of a nonterrestrial materials utilization program, and the human factors involved, were also addressed. Several recommendations from the workshop are now incorporated in NASA activities.
Computational Electrocardiography: Revisiting Holter ECG Monitoring.
Deserno, Thomas M; Marx, Nikolaus
2016-08-05
Since 1942, when Goldberger introduced 12-lead electrocardiography (ECG), this diagnostic method has not changed. After 70 years of technologic developments, we revisit Holter ECG from recording to understanding. A fundamental change is foreseen towards "computational ECG" (CECG), where continuous monitoring produces big data volumes that are impossible to inspect conventionally and instead require efficient computational methods. We draw parallels between CECG and computational biology, in particular with respect to computed tomography, computed radiology, and computed photography. From that, we identify the technology and methodology needed for CECG. Real-time transformation of raw data into meaningful parameters that are tracked over time will allow prediction of serious events, such as sudden cardiac death. Evolved from Holter's technology, portable smartphones with Bluetooth-connected, textile-embedded sensors will capture noisy raw data (recording), process meaningful parameters over time (analysis), and transfer them to cloud services for sharing (handling), predicting serious events, and alarming (understanding). To make this happen, the following fields need more research: i) signal processing, ii) cycle decomposition, iii) cycle normalization, iv) cycle modeling, v) clinical parameter computation, vi) physiological modeling, and vii) event prediction. We shall immediately start developing methodology for CECG analysis and understanding.
The RNASeq-er API-a gateway to systematically updated analysis of public RNA-seq data.
Petryszak, Robert; Fonseca, Nuno A; Füllgrabe, Anja; Huerta, Laura; Keays, Maria; Tang, Y Amy; Brazma, Alvis
2017-07-15
The exponential growth of publicly available RNA-sequencing (RNA-Seq) data poses an increasing challenge to researchers wishing to discover, analyse and store such data, particularly those based in institutions with limited computational resources. EMBL-EBI is in an ideal position to address these challenges and to allow the scientific community easy access to not just raw, but also processed RNA-Seq data. We present a Web service to access the results of a systematically and continually updated standardized alignment as well as gene and exon expression quantification of all public bulk (and in the near future also single-cell) RNA-Seq runs in 264 species in the European Nucleotide Archive, using Representational State Transfer. The RNASeq-er API (Application Programming Interface) enables ontology-powered search for and retrieval of CRAM, bigwig and bedGraph files, gene and exon expression quantification matrices (Fragments Per Kilobase Of Exon Per Million Fragments Mapped, Transcripts Per Million, raw counts) as well as sample attributes annotated with ontology terms. To date, over 270 000 RNA-Seq runs in nearly 10 000 studies (1 PB of raw FASTQ data) in 264 species in ENA have been processed and made available via the API. The RNASeq-er API can be accessed at http://www.ebi.ac.uk/fg/rnaseq/api. The commands used to analyse the data are available in supplementary materials and at https://github.com/nunofonseca/irap/wiki/iRAP-single-library. rnaseq@ebi.ac.uk; rpetry@ebi.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
A data acquisition and storage system for the ion auxiliary propulsion system cyclic thruster test
NASA Technical Reports Server (NTRS)
Hamley, John A.
1989-01-01
A nine-track tape drive interfaced to a standard personal computer was used to transport data from a remote test site to the NASA Lewis mainframe computer for analysis. The Cyclic Ground Test of the Ion Auxiliary Propulsion System (IAPS), which successfully achieved its goal of 2557 cycles and 7057 hr of thrusting beam on time generated several megabytes of test data over many months of continuous testing. A flight-like controller and power supply were used to control the thruster and acquire data. Thruster data was converted to RS232 format and transmitted to a personal computer, which stored the raw digital data on the nine-track tape. The tape format was such that with minor modifications, mainframe flight data analysis software could be used to analyze the Cyclic Ground Test data. The personal computer also converted the digital data to engineering units and displayed real time thruster parameters. Hardcopy data was printed at a rate dependent on thruster operating conditions. The tape drive provided a convenient means to transport the data to the mainframe for analysis, and avoided a development effort for new data analysis software for the Cyclic test. This paper describes the data system, interfacing and software requirements.
Huang, Kuan-Ju; Shih, Wei-Yeh; Chang, Jui Chung; Feng, Chih Wei; Fang, Wai-Chi
2013-01-01
This paper presents a pipeline VLSI design of a fast singular value decomposition (SVD) processor for a real-time electroencephalography (EEG) system based on on-line recursive independent component analysis (ORICA). Since SVD is used frequently in the computations of the real-time EEG system, a low-latency and high-accuracy SVD processor is essential. During the EEG system process, the proposed SVD processor aims to compute the diagonal, inverse and inverse-square-root matrices of the target matrices in real time. Generally, SVD requires a huge amount of computation in hardware implementation. Therefore, this work proposes a novel design concept for data-flow updating to assist the pipeline VLSI implementation. The SVD processor can greatly improve the feasibility of real-time EEG system applications such as brain-computer interfaces (BCIs). The proposed architecture is implemented using TSMC 90 nm CMOS technology. The raw EEG data sample rate is 128 Hz. The core size of the SVD processor is 580 × 580 μm², and the operating frequency is 20 MHz. It consumes 0.774 mW of power per execution for the 8-channel EEG system.
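The three matrices named above can be obtained from a single SVD, as the NumPy sketch below shows; for the symmetric, positive-semidefinite covariance matrices used in ORICA whitening, U and V coincide, so the last expression is the inverse square root of the matrix. The regularization threshold is an illustrative choice.

import numpy as np

def svd_matrices(A, eps=1e-10):
    """Return the singular-value (diagonal), inverse and inverse-square-root matrices of A."""
    U, s, Vt = np.linalg.svd(A)
    s_safe = np.where(s > eps, s, eps)               # guard against tiny singular values
    S = np.diag(s)                                   # diagonal matrix of singular values
    A_inv = Vt.T @ np.diag(1.0 / s_safe) @ U.T       # (pseudo-)inverse via SVD
    A_inv_sqrt = Vt.T @ np.diag(1.0 / np.sqrt(s_safe)) @ U.T   # inverse square root for symmetric PSD A
    return S, A_inv, A_inv_sqrt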
Deep Learning: A Primer for Radiologists.
Chartrand, Gabriel; Cheng, Phillip M; Vorontsov, Eugene; Drozdzal, Michal; Turcotte, Simon; Pal, Christopher J; Kadoury, Samuel; Tang, An
2017-01-01
Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging. © RSNA, 2017.
An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU
Xu, Hailong; Cui, Xiaowei; Lu, Mingquan
2016-01-01
Nowadays, software-defined radio (SDR) has become a common approach to evaluating new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited due to the high computational power demanded by adaptive algorithms, and has often lacked flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming accelerated by a Graphics Processing Unit (GPU) are documented. This testbed highlights itself as a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real time in either an adaptive nulling or beamforming mode. To fully take advantage of the parallelism resource provided by the GPU, a batched programming method is proposed. Tests and experiments are conducted to evaluate both the computational and anti-jamming performance. This platform can be used for research and prototyping, as well as a real product in certain applications. PMID:26978363
High Performance GPU-Based Fourier Volume Rendering.
Abdellah, Marwan; Eldeib, Ayman; Sharawi, Amr
2015-01-01
Fourier volume rendering (FVR) is a significant visualization technique that has been used widely in digital radiography. As a result of its O(N² log N) time complexity, it provides a faster alternative to spatial-domain volume rendering algorithms, which are O(N³) in computational complexity. Relying on the Fourier projection-slice theorem, this technique operates on the spectral representation of a 3D volume instead of processing its spatial representation to generate attenuation-only projections that look like X-ray radiographs. Due to the rapid evolution of its underlying architecture, the graphics processing unit (GPU) became an attractive, competitive platform that can deliver enormous raw computational power compared to the central processing unit (CPU) on a per-dollar basis. The introduction of the compute unified device architecture (CUDA) technology enables embarrassingly parallel algorithms to run efficiently on CUDA-capable GPU architectures. In this work, a high-performance GPU-accelerated implementation of the FVR pipeline on CUDA-enabled GPUs is presented. This proposed implementation can achieve a speed-up of 117x compared to a single-threaded hybrid implementation that uses the CPU and GPU together, by taking advantage of executing the rendering pipeline entirely on recent GPU architectures.
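The projection-slice theorem at the heart of FVR can be verified in a few lines of NumPy: the central slice of the volume's 3D spectrum, inverse-transformed in 2D, equals the line-integral projection of the volume along the corresponding axis. This toy demonstration covers only the axis-aligned case, with none of the resampling, filtering or GPU acceleration of a full FVR pipeline.

import numpy as np

vol = np.random.rand(64, 64, 64)                    # stand-in for a CT-like volume

spectrum = np.fft.fftn(vol)                         # 3D spectral representation (computed once)
central_slice = spectrum[:, :, 0]                   # slice through the origin, normal to the z axis
projection_fvr = np.fft.ifftn(central_slice).real   # 2D inverse transform of that slice

projection_direct = vol.sum(axis=2)                 # brute-force spatial-domain projection
print("max abs difference:", np.abs(projection_fvr - projection_direct).max())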
An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU.
Xu, Hailong; Cui, Xiaowei; Lu, Mingquan
2016-03-11
Nowadays, software-defined radio (SDR) has become a common approach to evaluating new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited due to the high computational power demanded by adaptive algorithms, and has often lacked flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming accelerated by a Graphics Processing Unit (GPU) are documented. This testbed highlights itself as a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real time in either an adaptive nulling or beamforming mode. To fully take advantage of the parallelism resource provided by the GPU, a batched programming method is proposed. Tests and experiments are conducted to evaluate both the computational and anti-jamming performance. This platform can be used for research and prototyping, as well as a real product in certain applications.
Use of a prototype pulse oximeter for time series analysis of heart rate variability
NASA Astrophysics Data System (ADS)
González, Erika; López, Jehú; Hautefeuille, Mathieu; Velázquez, Víctor; Del Moral, Jésica
2015-05-01
This work presents the development of a low cost pulse oximeter prototype consisting of pulsed red and infrared commercial LEDs and a broad spectral photodetector used to register time series of heart rate and oxygen saturation of blood. This platform, besides providing these values, like any other pulse oximeter, processes the signals to compute a power spectrum analysis of the patient heart rate variability in real time and, additionally, the device allows access to all raw and analyzed data if databases construction is required or another kind of further analysis is desired. Since the prototype is capable of acquiring data for long periods of time, it is suitable for collecting data in real life activities, enabling the development of future wearable applications.
Automated Analysis, Classification, and Display of Waveforms
NASA Technical Reports Server (NTRS)
Kwan, Chiman; Xu, Roger; Mayhew, David; Zhang, Frank; Zide, Alan; Bonggren, Jeff
2004-01-01
A computer program partly automates the analysis, classification, and display of waveforms represented by digital samples. In the original application for which the program was developed, the raw waveform data to be analyzed by the program are acquired from space-shuttle auxiliary power units (APUs) at a sampling rate of 100 Hz. The program could also be modified for application to other waveforms -- for example, electrocardiograms. The program begins by performing principal-component analysis (PCA) of 50 normal-mode APU waveforms. Each waveform is segmented. A covariance matrix is formed by use of the segmented waveforms. Three eigenvectors corresponding to three principal components are calculated. To generate features, each waveform is then projected onto the eigenvectors. These features are displayed on a three-dimensional diagram, facilitating the visualization of the trend of APU operations.
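The feature-generation steps summarized above (segment the waveforms, form a covariance matrix, keep three eigenvectors, project) follow the standard principal-component recipe; a minimal NumPy sketch is given below, with array shapes and waveform counts chosen for illustration rather than taken from the NASA program.

```python
import numpy as np

def pca_waveform_features(waveforms, n_components=3):
    """Project waveforms onto their leading principal components.

    waveforms: array of shape (n_waveforms, n_samples), e.g. segmented
    normal-mode APU traces. Returns an (n_waveforms, n_components) feature
    array suitable for a 3-D scatter display. Illustrative sketch only.
    """
    centered = waveforms - waveforms.mean(axis=0)            # remove the mean waveform
    cov = np.cov(centered, rowvar=False)                      # covariance across samples
    eigvals, eigvecs = np.linalg.eigh(cov)                    # symmetric eigendecomposition
    principal = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return centered @ principal                               # features = projections

# Example: 50 synthetic "normal-mode" waveforms of 200 samples each.
rng = np.random.default_rng(0)
features = pca_waveform_features(rng.normal(size=(50, 200)))
print(features.shape)  # (50, 3)
```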
2012-01-01
orbit stupendously large orbital power plants—kilometers across—which collect the sun’s raw energy and beam it down to where it is needed on the earth...24-hour, large-scale power to the urban centers where the majority of humanity lives. A network of thousands of solar-power satellites (SPS) could...provide all the power required for an Earth-based population as large as 10 billion people, even for a fully developed “first world” lifestyle but
COMPUTER PROGRAM FOR CALCULATING THE COST OF DRINKING WATER TREATMENT SYSTEMS
This FORTRAN computer program calculates the construction and operation/maintenance costs for 45 centralized unit treatment processes for water supply. The calculated costs are based on various design parameters and raw water quality. These cost data are applicable to small size ...
JPRS Report, Science & Technology China: Energy
1992-10-26
The Xiaolongtan power plant is located at the Xiaolongtan open-cut coal mine and uses its coal directly from the conveyer belt. The first...which has resulted in high coal consumption, large power use by the plants, and low full-staff labor productivity and economic results. Examine coal... consuming an additional 70 million tons-plus of raw coal. Examine the power used at power plants. The efficiency of the blowers, water pumps,
A concept to standardize raw biosignal transmission for brain-computer interfaces.
Breitwieser, Christian; Neuper, Christa; Müller-Putz, Gernot R
2011-01-01
With this concept we introduced the attempt of a standardized interface called TiA to transmit raw biosignals. TiA is able to deal with multirate and block-oriented data transmission. Data is distinguished by different signal types (e.g., EEG, EOG, NIRS, …), whereby those signals can be acquired at the same time from different acquisition devices. TiA is built as a client-server model. Multiple clients can connect to one server. Information is exchanged via a control- and a separated data connection. Control commands and meta information are transmitted over the control connection. Raw biosignal data is delivered using the data connection in a unidirectional way. For this purpose a standardized handshaking protocol and raw data packet have been developed. Thus, an abstraction layer between hardware devices and data processing was evolved facilitating standardization.
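As a purely hypothetical sketch of the client-server split described here (a control connection for commands and meta information, a unidirectional data connection for block-oriented raw samples), the following Python loopback demo shows the idea; the port handling, field names, and packet layout are invented for illustration and are not the published TiA handshaking protocol.

```python
import json
import socket
import struct

# Hypothetical sketch of a TiA-like split between a control connection
# (commands and meta information) and a unidirectional data connection
# (raw sample blocks). Ports, field names and packet layout are invented
# for illustration and are not the actual TiA specification.

def send_control(sock, command, meta=None):
    """Length-prefixed JSON control message (command plus meta information)."""
    payload = json.dumps({"cmd": command, "meta": meta or {}}).encode()
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def send_data_packet(sock, signal_type, block):
    """One block-oriented raw-data packet: signal-type id, sample count, floats."""
    header = struct.pack("!HI", signal_type, len(block))
    sock.sendall(header + struct.pack(f"!{len(block)}f", *block))

if __name__ == "__main__":
    # Loopback demo standing in for a server with one connected client.
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(2)
    control_client = socket.create_connection(server.getsockname())
    data_client = socket.create_connection(server.getsockname())
    control_conn, _ = server.accept()
    data_conn, _ = server.accept()

    send_control(control_client, "start", {"signals": ["EEG", "EOG"], "rate_hz": 256})
    send_data_packet(data_client, signal_type=1, block=[0.1, 0.2, 0.3])
    print(len(control_conn.recv(4096)), "control bytes,",
          len(data_conn.recv(4096)), "data bytes")
```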
Fernández-Berni, Jorge; Carmona-Galán, Ricardo; del Río, Rocío; Kleihorst, Richard; Philips, Wilfried; Rodríguez-Vázquez, Ángel
2014-01-01
The capture, processing and distribution of visual information is one of the major challenges for the paradigm of the Internet of Things. Privacy emerges as a fundamental barrier to overcome. The idea of networked image sensors pervasively collecting data generates social rejection in the face of sensitive information being tampered by hackers or misused by legitimate users. Power consumption also constitutes a crucial aspect. Images contain a massive amount of data to be processed under strict timing requirements, demanding high-performance vision systems. In this paper, we describe a hardware-based strategy to concurrently address these two key issues. By conveying processing capabilities to the focal plane in addition to sensing, we can implement privacy protection measures just at the point where sensitive data are generated. Furthermore, such measures can be tailored for efficiently reducing the computational load of subsequent processing stages. As a proof of concept, a full-custom QVGA vision sensor chip is presented. It incorporates a mixed-signal focal-plane sensing-processing array providing programmable pixelation of multiple image regions in parallel. In addition to this functionality, the sensor exploits reconfigurability to implement other processing primitives, namely block-wise dynamic range adaptation, integral image computation and multi-resolution filtering. The proposed circuitry is also suitable to build a granular space, becoming the raw material for subsequent feature extraction and recognition of categorized objects. PMID:25195849
Computer Intelligence: Unlimited and Untapped.
ERIC Educational Resources Information Center
Staples, Betsy
1983-01-01
Herbert Simon (Nobel prize-winning economist/professor) expresses his views on human and artificial intelligence, problem solving, inventing concepts, and the future. Includes comments on expert systems, state of the art in artificial intelligence, robotics, and "Bacon," a computer program that finds scientific laws hidden in raw data.…
10 CFR 2.1003 - Availability of material.
Code of Federal Regulations, 2011 CFR
2011-01-01
... months in advance of submitting its license application for a geologic repository, the NRC shall make... of privilege in § 2.1006, graphic-oriented documentary material that includes raw data, computer runs, computer programs and codes, field notes, laboratory notes, maps, diagrams and photographs, which have been...
10 CFR 2.1003 - Availability of material.
Code of Federal Regulations, 2012 CFR
2012-01-01
... months in advance of submitting its license application for a geologic repository, the NRC shall make... of privilege in § 2.1006, graphic-oriented documentary material that includes raw data, computer runs, computer programs and codes, field notes, laboratory notes, maps, diagrams and photographs, which have been...
A Communications Modeling System for Swarm-Based Sensors
2003-09-01
6.6 Digital and Swarm System Performance Measures; 7.1 Simulation computing hardware ... detection and monitoring, and advances in computational capabilities have provided for embedded data analysis and the generation of information from raw... computing and manufacturing technology have made such systems possible. In order to harness this potential for Air Force applications, a method of
Computer Vision Hardware System for Automating Rough Mills of Furniture Plants
Richard W. Conners; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; D. Earl Kline; C.J. Gatchell
1990-01-01
The rough mill of a hardwood furniture or fixture plant is the place where dried lumber is cut into the rough parts that will be used in the rest of the manufacturing process. Approximately a third of the cost of operating the rough mill is the cost of the raw material. Hence any increase in the number of rough parts produced from a given volume of raw material can...
Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing
NASA Astrophysics Data System (ADS)
Chen, A.; Pham, L.; Kempler, S.; Theobald, M.; Esfandiari, A.; Campino, J.; Vollmer, B.; Lynnes, C.
2011-12-01
Cloud Computing technology has been used to offer high-performance and low-cost computing and storage resources for both scientific problems and business services. Several cloud computing services have been implemented in the commercial arena, e.g. Amazon's EC2 & S3, Microsoft's Azure, and Google App Engine. There are also some research and application programs being launched in academia and governments to utilize Cloud Computing. NASA launched the Nebula Cloud Computing platform in 2008, which is an Infrastructure as a Service (IaaS) to deliver on-demand distributed virtual computers. Nebula users can receive required computing resources as a fully outsourced service. The NASA Goddard Earth Science Data and Information Service Center (GES DISC) migrated several of its applications to the Nebula as a proof of concept, including: a) the Simple, Scalable, Script-based Science Processor for Measurements (S4PM) for processing scientific data; b) the Atmospheric Infrared Sounder (AIRS) data process workflow for processing AIRS raw data; and c) the GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (GIOVANNI) for online access to, analysis, and visualization of Earth science data. This work aims to evaluate the practicability and adaptability of the Nebula. The initial work focused on the AIRS data process workflow to evaluate the Nebula. The AIRS data process workflow consists of a series of algorithms being used to process raw AIRS level 0 data and output AIRS level 2 geophysical retrievals. Migrating the entire workflow to the Nebula platform is challenging, but practicable. After installing several supporting libraries and the processing code itself, the workflow is able to process AIRS data in a similar fashion to its current (non-cloud) configuration. We compared the performance of processing 2 days of AIRS level 0 data through level 2 using a Nebula virtual computer and a local Linux computer. The result shows that Nebula has significantly better performance than the local machine. Much of the difference was due to newer equipment in the Nebula than the legacy computer, which is suggestive of a potential economic advantage beyond elastic power, i.e., access to up-to-date hardware vs. legacy hardware that must be maintained past its prime to amortize the cost. In addition to a trade study of advantages and challenges of porting complex processing to the cloud, a tutorial was developed to enable further progress in utilizing the Nebula for Earth Science applications and better understanding the potential for Cloud Computing in further data- and computing-intensive Earth Science research. In particular, highly bursty computing such as that experienced in the user-demand-driven Giovanni system may become more tractable in a Cloud environment. Our future work will continue to focus on migrating more GES DISC applications/instances, e.g. Giovanni instances, to the Nebula platform and bringing mature migrated applications into operation on the Nebula.
Partition-based acquisition model for speed up navigated beta-probe surface imaging
NASA Astrophysics Data System (ADS)
Monge, Frédéric; Shakir, Dzhoshkun I.; Navab, Nassir; Jannin, Pierre
2016-03-01
Although gross total resection in low-grade glioma surgery leads to a better patient outcome, the in-vivo control of resection borders remains challenging. For this purpose, navigated beta-probe systems combined with an 18F-based radiotracer, relying on activity distribution surface estimation, have been proposed to generate reconstructed images. The clinical relevancy has been outlined by early studies where intraoperative functional information is leveraged although inducing low spatial resolution in reconstruction. To improve reconstruction quality, multiple acquisition models have been proposed. They involve the definition of an attenuation matrix describing the radiation detection physics. Yet, they require high computational power for efficient intraoperative use. To address the problem, we propose a new acquisition model called Partition Model (PM) considering an existing model where coefficients of the matrix are taken from a look-up table (LUT). Our model is based upon the division of the LUT into averaged homogeneous values for assigning attenuation coefficients. We validated our model using in vitro datasets, where tumors and peri-tumoral tissues have been simulated. We compared our acquisition model with the off-the-shelf LUT and the raw method. Acquisition models outperformed the raw method in terms of tumor contrast (7.97:1 mean T:B) but with difficulty for real-time use. Both acquisition models reached the same detection performance with references (0.8 mean AUC and 0.77 mean NCC), where PM slightly improves the mean tumor contrast up to 10.1:1 vs 9.9:1 with the LUT model and more importantly, it reduces the mean computation time by 7.5%. Our model gives a faster solution for intraoperative use of a navigated beta-probe surface imaging system, with improved image quality.
Greenfeld, Max; van de Meent, Jan-Willem; Pavlichin, Dmitri S; Mabuchi, Hideo; Wiggins, Chris H; Gonzalez, Ruben L; Herschlag, Daniel
2015-01-16
Single-molecule techniques have emerged as incisive approaches for addressing a wide range of questions arising in contemporary biological research [Trends Biochem Sci 38:30-37, 2013; Nat Rev Genet 14:9-22, 2013; Curr Opin Struct Biol 2014, 28C:112-121; Annu Rev Biophys 43:19-39, 2014]. The analysis and interpretation of raw single-molecule data benefits greatly from the ongoing development of sophisticated statistical analysis tools that enable accurate inference at the low signal-to-noise ratios frequently associated with these measurements. While a number of groups have released analysis toolkits as open source software [J Phys Chem B 114:5386-5403, 2010; Biophys J 79:1915-1927, 2000; Biophys J 91:1941-1951, 2006; Biophys J 79:1928-1944, 2000; Biophys J 86:4015-4029, 2004; Biophys J 97:3196-3205, 2009; PLoS One 7:e30024, 2012; BMC Bioinformatics 288 11(8):S2, 2010; Biophys J 106:1327-1337, 2014; Proc Int Conf Mach Learn 28:361-369, 2013], it remains difficult to compare analysis for experiments performed in different labs due to a lack of standardization. Here we propose a standardized single-molecule dataset (SMD) file format. SMD is designed to accommodate a wide variety of computer programming languages, single-molecule techniques, and analysis strategies. To facilitate adoption of this format we have made two existing data analysis packages that are used for single-molecule analysis compatible with this format. Adoption of a common, standard data file format for sharing raw single-molecule data and analysis outcomes is a critical step for the emerging and powerful single-molecule field, which will benefit both sophisticated users and non-specialists by allowing standardized, transparent, and reproducible analysis practices.
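As a loose illustration of the idea of a language-neutral single-molecule container (the field names below are invented for this sketch and are not the published SMD schema), a trace could be serialized to JSON as follows.

```python
import json

# Hypothetical illustration of a standardized single-molecule record.
# Field names are invented for this sketch; the actual SMD schema is
# defined by the authors' specification and tools.
record = {
    "id": "trace_0001",
    "attributes": {"technique": "smFRET", "sample_rate_hz": 10.0},
    "data": {
        "index": [0, 1, 2, 3],
        "values": {
            "donor": [512, 498, 505, 123],
            "acceptor": [130, 141, 128, 520],
        },
    },
}

with open("example_smd.json", "w") as fh:
    json.dump({"desc": "example dataset", "traces": [record]}, fh, indent=2)
```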
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Gianoli, A.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.
2017-03-01
This project aims to exploit the parallel computing power of a commercial Graphics Processing Unit (GPU) to implement fast pattern matching in the Ring Imaging Cherenkov (RICH) detector for the level 0 (L0) trigger of the NA62 experiment. In this approach, the ring-fitting algorithm is seedless, being fed with raw RICH data, with no previous information on the ring position from other detectors. Moreover, since the L0 trigger is provided with a more elaborated information than a simple multiplicity number, it results in a higher selection power. Two methods have been studied in order to reduce the data transfer latency from the readout boards of the detector to the GPU, i.e., the use of a dedicated NIC device driver with very low latency and a direct data transfer protocol from a custom FPGA-based NIC to the GPU. The performance of the system, developed through the FPGA approach, for multi-ring Cherenkov online reconstruction obtained during the NA62 physics runs is presented.
Scaling Deep Learning Workloads: NVIDIA DGX-1/Pascal and Intel Knights Landing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gawande, Nitin A.; Landwehr, Joshua B.; Daily, Jeffrey A.
Deep Learning (DL) algorithms have become ubiquitous in data analytics. As a result, major computing vendors --- including NVIDIA, Intel, AMD and IBM --- have architectural road-maps influenced by DL workloads. Furthermore, several vendors have recently advertised new computing products as accelerating DL workloads. Unfortunately, it is difficult for data scientists to quantify the potential of these different products. This paper provides a performance and power analysis of important DL workloads on two major parallel architectures: NVIDIA DGX-1 (eight Pascal P100 GPUs interconnected with NVLink) and Intel Knights Landing (KNL) CPUs interconnected with Intel Omni-Path. Our evaluation consists of a cross section of convolutional neural net workloads: CifarNet, CaffeNet, AlexNet and GoogleNet topologies using the Cifar10 and ImageNet datasets. The workloads are vendor optimized for each architecture. GPUs provide the highest overall raw performance. Our analysis indicates that although GPUs provide the highest overall performance, the gap can close for some convolutional networks; and KNL can be competitive when considering performance/watt. Furthermore, NVLink is critical to GPU scaling.
Computational Methods for Predictive Simulation of Stochastic Turbulence Systems
2015-11-05
Science and Engineering, Venice, Italy, May 18-20, 2015, pp. 1261-1272. [21] Yong Li and P.D. Williams Analysis of the RAW Filter in Composite-Tendency...leapfrog scheme, Proceedings of the VI Conference on Computational Methods for Coupled Problems in Science and Engineering, Venice, Italy, May 18-20
The application of a computer data acquisition system to a new high temperature tribometer
NASA Technical Reports Server (NTRS)
Bonham, Charles D.; Dellacorte, Christopher
1991-01-01
Two data acquisition computer programs, developed for a high temperature friction and wear test apparatus (a tribometer), are described. The raw data produced by the tribometer and the methods used to sample that data are explained. In addition, the instrumentation and computer hardware and software are presented. Also shown is how computer data acquisition was applied to increase convenience and productivity on a high temperature tribometer.
The application of a computer data acquisition system for a new high temperature tribometer
NASA Technical Reports Server (NTRS)
Bonham, Charles D.; Dellacorte, Christopher
1990-01-01
Two data acquisition computer programs, developed for a high temperature friction and wear test apparatus (a tribometer), are described. The raw data produced by the tribometer and the methods used to sample that data are explained. In addition, the instrumentation and computer hardware and software are presented. Also shown is how computer data acquisition was applied to increase convenience and productivity on a high temperature tribometer.
TanDEM-X calibrated Raw DEM generation
NASA Astrophysics Data System (ADS)
Rossi, Cristian; Rodriguez Gonzalez, Fernando; Fritz, Thomas; Yague-Martinez, Nestor; Eineder, Michael
2012-09-01
The TanDEM-X mission successfully started on June 21st 2010 with the launch of the German radar satellite TDX, placed in orbit in close formation with the TerraSAR-X (TSX) satellite, and establishing the first spaceborne bistatic interferometer. The processing of SAR raw data to the Raw DEM is performed by one single processor, the Integrated TanDEM-X Processor (ITP). The quality of the Raw DEM is a fundamental parameter for the mission planning. In this paper, a novel quality indicator is derived. It is based on the comparison of the interferometric measure, the unwrapped phase, and the stereo-radargrammetric measure, the geometrical shifts computed in the coregistration stage. By stating the accuracy of the unwrapped phase, it constitutes a useful parameter for the determination of problematic scenes, which will be resubmitted to the dual baseline phase unwrapping processing chain for the mitigation of phase unwrapping errors. The stereo-radargrammetric measure is also operationally used for the Raw DEM absolute calibration through an accurate estimation of the absolute phase offset. This paper examines the interferometric algorithms implemented for the operational TanDEM-X Raw DEM generation, focusing particularly on its quality assessment and its calibration.
Application Level Processing for Long-Lived and Information Rich Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Wilkins, R.; Gaura, E.; Brusey, J.
2013-12-01
A primary design goal in Wireless Sensor Networks (WSNs) is to ensure the longest possible node lifetime with the available power budget while still meeting application requirements. Since radio transmissions often consume the most power in WSN devices, it follows that a node should aim to maximise its lifetime by transmitting only the data or information required to enable the motivating application. Full raw data streams are often not required since summaries of data are sufficient to meet application needs (summaries are often performed at a central point after collection). When raw data is not a requirement, it makes sense to perform as much application-specific processing on-node as possible to minimise the amount of transmissions a node must make. For example, in home environment monitoring, the amount of time a room spends within an acceptable temperature range is more important than the raw stream of temperature measurements. This work presents Bare Necessities (BN), which implements the calculation of application-specific summaries on-node. In the case of knowing the amount of time a room spends within an acceptable temperature range, BN encodes the raw signal as a distribution over bins (e.g. a bin might comprise temperatures between 18 °C and 22 °C). BN conserves power by only transmitting when changes to the distribution occur, only sending the bare necessities of information the end user is interested in (thus the algorithm name). In the case of home monitoring it has been shown that BN can lead to a packet transmission reduction of 99.98%, increasing a node's lifetime by a factor of 14 when compared to sense-and-send nodes. (Figure: a summary of the Bare Necessities process at the node.)
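A minimal sketch of the on-node summarization idea described above, binning the raw signal and transmitting only when the active bin changes, is given below; the bin edges and the change test are illustrative assumptions, not the published Bare Necessities algorithm.

```python
import numpy as np

class BinnedSummary:
    """Accumulate time-in-bin counts on-node; report only on bin transitions.

    Illustrative sketch of the transmit-on-change idea: full counts are kept
    locally, but a packet is emitted only when a reading moves to a different
    bin than the previous one. Bin edges are assumptions for a home-monitoring
    example, not the published Bare Necessities parameters.
    """

    def __init__(self, edges=(-np.inf, 18.0, 22.0, np.inf)):
        self.edges = np.asarray(edges)
        self.counts = np.zeros(len(edges) - 1, dtype=int)
        self.current_bin = None

    def update(self, temperature_c):
        b = int(np.digitize(temperature_c, self.edges)) - 1
        self.counts[b] += 1                        # local raw summary keeps growing
        if b != self.current_bin:                  # only transitions cost a radio packet
            self.current_bin = b
            return tuple(self.counts)
        return None

node = BinnedSummary()
for t in [17.0, 17.5, 19.5, 20.1, 20.3, 25.3]:
    packet = node.update(t)
    if packet is not None:
        print("send", packet)
```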
NASA Astrophysics Data System (ADS)
Cook, Perry
This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.). Although most people would think that analog synthesizers and electronic music substantially predate the use of computers in music, many experiments and complete computer music systems were being constructed and used as early as the 1950s.
Radar sensitivity and antenna scan pattern study for a satellite-based Radar Wind Sounder (RAWS)
NASA Technical Reports Server (NTRS)
Stuart, Michael A.
1992-01-01
Modeling global atmospheric circulations and forecasting the weather would improve greatly if worldwide information on winds aloft were available. Recognition of this led to the inclusion of the LAser Wind Sounder (LAWS) system, which measures Doppler shifts from aerosols, in the planned Earth Observation System (EOS). However, gaps will exist in LAWS coverage where heavy clouds are present. The RAdar Wind Sensor (RAWS) is an instrument that could fill these gaps by measuring Doppler shifts from clouds and rain. Previous studies conducted at the University of Kansas show RAWS as a feasible instrument. This thesis pertains to the signal-to-noise ratio (SNR) sensitivity, transmit waveform, and limitations to the antenna scan pattern of the RAWS system. A drop-size distribution model is selected and applied to the radar range equation for the sensitivity analysis. Six frequencies are used in computing the SNR for several cloud types to determine the optimal transmit frequency. The results show that the use of two frequencies, a higher one (94 GHz) to obtain sensitivity for thinner clouds and a lower one (24 GHz) for better penetration in rain, provides ample SNR. The waveform design supports covariance estimation processing. This estimator eliminates the Doppler ambiguities compounded by the selection of such high transmit frequencies, while providing an estimate of the mean frequency. The unambiguous range and velocity computation shows them to be within acceptable limits. The design goal for the RAWS system is to limit the wind-speed error to less than 1 ms(exp -1). Due to linear dependence between vectors for a three-vector scan pattern, a reasonable wind-speed error is unattainable. Only the two-vector scan pattern falls within the wind-error limits for azimuth angles between 16 deg and 70 deg. However, this scan only allows two components of the wind to be determined. As a result, a technique is then shown, based on the Z-R-V relationships, that permits the vertical component (i.e., rain) to be computed. Thus the horizontal wind components may be obtained from the covariance estimator and the vertical component from the reflectivity factor. Finally, a new candidate system is introduced which summarizes the parameters taken from previous RAWS studies, or those modified in this thesis.
Methods for gas detection using stationary hyperspectral imaging sensors
Conger, James L [San Ramon, CA; Henderson, John R [Castro Valley, CA
2012-04-24
According to one embodiment, a method comprises producing a first hyperspectral imaging (HSI) data cube of a location at a first time using data from a HSI sensor; producing a second HSI data cube of the same location at a second time using data from the HSI sensor; subtracting on a pixel-by-pixel basis the second HSI data cube from the first HSI data cube to produce a raw difference cube; calibrating the raw difference cube to produce a calibrated raw difference cube; selecting at least one desired spectral band based on a gas of interest; producing a detection image based on the at least one selected spectral band and the calibrated raw difference cube; examining the detection image to determine presence of the gas of interest; and outputting a result of the examination. Other methods, systems, and computer program products for detecting the presence of a gas are also described.
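A compact NumPy sketch of the differencing-and-band-selection flow described in this claim is shown below; the calibration is reduced to a simple gain/offset and the band indices and threshold are placeholders for whatever absorption feature the gas of interest has.

```python
import numpy as np

def gas_detection_image(cube_t1, cube_t2, gain, offset, band_indices, threshold):
    """Pixel-wise difference of two HSI cubes, crude calibration, band selection.

    cubes: arrays of shape (rows, cols, bands). Illustrative sketch of the
    claimed method; the real calibration and band choice depend on the sensor
    and the gas of interest.
    """
    raw_diff = cube_t2.astype(float) - cube_t1.astype(float)   # pixel-by-pixel subtraction
    calibrated = gain * raw_diff + offset                       # placeholder calibration
    detection = calibrated[:, :, band_indices].mean(axis=2)     # collapse selected bands
    return detection, detection > threshold                     # image + presence mask

rng = np.random.default_rng(1)
c1 = rng.normal(100, 1, size=(64, 64, 128))
c2 = c1.copy()
c2[20:30, 20:30, 40:45] += 5.0            # synthetic "plume" in bands 40-44
img, mask = gas_detection_image(c1, c2, gain=1.0, offset=0.0,
                                band_indices=list(range(40, 45)), threshold=2.0)
print(mask.sum(), "plume pixels flagged")
```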
Efficient heart beat detection using embedded system electronics
NASA Astrophysics Data System (ADS)
Ramasamy, Mouli; Oh, Sechang; Varadan, Vijay K.
2014-04-01
The present day bio-technical field concentrates on developing various types of innovative ambulatory and wearable devices to monitor several bio-physical, physio-pathological, bio-electrical and bio-potential factors to assess a human body's health condition without intruding on quotidian activities. One of the most important aspects of this evolving technology is monitoring heart beat rate and electrocardiogram (ECG) from which many other subsidiary results can be derived. Conventionally, such devices and systems consume a lot of power since the acquired signals are always processed on the receiver end. Because of this back-end processing, unprocessed raw data is transmitted, resulting in greater usage of power, memory and processing time. This paper proposes an innovative technique where the acquired signals are processed by a microcontroller in the front end of the module and just the processed signal is then transmitted wirelessly to the display unit. Therefore, power consumption is considerably reduced and clearer data analysis is performed within the module. This also avoids the need for the user to be educated about usage of the device and signal/system analysis, since only the number of heart beats will be displayed at the user end. Additionally, the proposed concept also eradicates the other disadvantages like obtrusiveness, high power consumption and size. To demonstrate the above factors, a commercial controller board was used to extend the monitoring method by using the saved ECG data from a computer.
The Pattern of Soviet Conduct in the Third World, Review and Preview. Part I
1983-03-07
vital raw materials is concerned. This interest relates however above all to the rich, oil-producing countries, whereas the East German and Cuban...was economic - the extraction of cheap raw materials and the wish to find markets. Nor is it true, as he predicted, that the imperialist powers...disappointment in the non-oil-producing Third World countries than should have been expected because the Soviet leaders never made excessive promises. They
Prevalence and etiology of false normal aEEG recordings in neonatal hypoxic-ischaemic encephalopathy
2013-01-01
Background: Amplitude-integrated electroencephalography (aEEG) is a useful tool to determine the severity of neonatal hypoxic-ischemic encephalopathy (HIE). Our aim was to assess the prevalence and study the origin of false normal aEEG recordings based on 85 aEEG recordings registered before six hours of age. Methods: Raw EEG recordings were reevaluated retrospectively with Fourier analysis to identify and describe the frequency patterns of the raw EEG signal, in cases with inconsistent aEEG recordings and clinical symptoms. Power spectral density curves, power (P) and median frequency (MF) were determined using the raw EEG. In 7 patients non-depolarizing muscle relaxant (NDMR) exposure was found. The EEG sections were analyzed and compared before and after NDMR administration. Results: The reevaluation found that the aEEG was truly normal in 4 neonates. In 3 neonates, high voltage electrocardiographic (ECG) artifacts were found with flat trace on raw EEG. High frequency component (HFC) was found as a cause of normal appearing aEEG in 10 neonates. HFC disappeared while P and MF decreased significantly upon NDMR administration in each observed case. Conclusion: Occurrence of false normal aEEG background pattern is relatively high in neonates with HIE and hypothermia. High frequency EEG artifacts suggestive of shivering were found to be the most common cause of false normal aEEG in hypothermic neonates while high voltage ECG artifacts are less common. PMID:24268061
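The spectral measures named in the Methods (power spectral density, total power P, and median frequency MF of the raw EEG) can be sketched with SciPy as below; the sampling rate and window length are assumed values, not those of the study.

```python
import numpy as np
from scipy.signal import welch

def power_and_median_frequency(eeg, fs=256.0):
    """Welch PSD, total power P and median frequency MF of a raw EEG segment.

    Illustrative sketch; the sampling rate, window length and band limits of
    the study are not reproduced here.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
    df = freqs[1] - freqs[0]
    total_power = np.sum(psd) * df                       # area under the PSD curve
    cumulative = np.cumsum(psd) * df
    median_freq = freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)]
    return total_power, median_freq

rng = np.random.default_rng(2)
t = np.arange(0, 30, 1 / 256.0)
segment = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.normal(size=t.size)   # 8 Hz rhythm + noise
print(power_and_median_frequency(segment))
```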
Computer program documentation: Raw-to-processed SINDA program (RTOPHS) user's guide
NASA Technical Reports Server (NTRS)
Damico, S. J.
1980-01-01
Use of the Raw to Processed SINDA (System Improved Numerical Differencing Analyzer) Program, RTOPHS, which provides a means of making the temperature prediction data on binary HSTFLO and HISTRY units generated by SINDA available to engineers in an easy-to-use format, is discussed. The program accomplishes this by reading the HISTRY unit; according to user input instructions, the desired times and temperature prediction data are extracted and written to a word-addressable drum file.
Virtualizing Super-Computation On-Board UAS
NASA Astrophysics Data System (ADS)
Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.
2015-04-01
Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have a great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real-time. However, depending on the volume of data and the cost of the communications, this later option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on-board UAS, as a method to ease the operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed-up on-board computation. This hardware shall satisfy size, power and weight constraints. Several technologies are appearing with promising results for high performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on-board includes the benchmarking for hardware selection, the software architecture and the communications aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. The results are obtained for payload image processing algorithms and determine in real-time the data snapshot to gather and transfer to ground according to the needs of the mission, the processing time, and consumed watts.
Adaptation of a Control Center Development Environment for Industrial Process Control
NASA Technical Reports Server (NTRS)
Killough, Ronnie L.; Malik, James M.
1994-01-01
In the control center, raw telemetry data is received for storage, display, and analysis. This raw data must be combined and manipulated in various ways by mathematical computations to facilitate analysis, provide diversified fault detection mechanisms, and enhance display readability. A development tool called the Graphical Computation Builder (GCB) has been implemented which provides flight controllers with the capability to implement computations for use in the control center. The GCB provides a language that contains both general programming constructs and language elements specifically tailored for the control center environment. The GCB concept allows staff who are not skilled in computer programming to author and maintain computer programs. The GCB user is isolated from the details of external subsystem interfaces and has access to high-level functions such as matrix operators, trigonometric functions, and unit conversion macros. The GCB provides a high level of feedback during computation development that improves upon the often cryptic errors produced by computer language compilers. An equivalent need can be identified in the industrial data acquisition and process control domain: that of an integrated graphical development tool tailored to the application to hide the operating system, computer language, and data acquisition interface details. The GCB features a modular design which makes it suitable for technology transfer without significant rework. Control center-specific language elements can be replaced by elements specific to industrial process control.
Code of Federal Regulations, 2010 CFR
2010-01-01
... footnote 1 221112 Fossil Fuel Electric Power Generation See footnote 1 221113 Nuclear Electric Power... 500 323115 Digital Printing 500 323116 Manifold Business Forms Printing 500 323117 Books Printing 500... 424590 Other Farm Product Raw Material Merchant Wholesalers 100 424610 Plastics Materials and Basic Forms...
Wave Energy Prize - 1/50th Testing - Oscilla Power
Wesley Scharmen
2016-01-08
This submission of data includes all the 1/50th scale testing data completed on the Wave Energy Prize for the Oscilla Power team, and includes: 1/50th test data (raw & processed); 1/50th test data video and pictures; 1/50th test plans and testing documents; SSTF_Submission (summarized results).
Wave Energy Prize - 1/50th Testing - Principle Power
Wesley Scharmen
2016-01-08
This submission of data includes all the 1/50th scale testing data completed on the Wave Energy Prize for the Principle Power team, and includes: 1/50th test data (raw & processed); 1/50th test data video and pictures; 1/50th test plans and testing documents; SSTF_Submission (summarized results).
An Examination of Sampling Characteristics of Some Analytic Factor Transformation Techniques.
ERIC Educational Resources Information Center
Skakun, Ernest N.; Hakstian, A. Ralph
Two population raw data matrices were constructed by computer simulation techniques. Each consisted of 10,000 subjects and 12 variables, and each was constructed according to an underlying factorial model consisting of four major common factors, eight minor common factors, and 12 unique factors. The computer simulation techniques were employed to…
Renewable resources in the chemical industry--breaking away from oil?
Nordhoff, Stefan; Höcker, Hans; Gebhardt, Henrike
2007-12-01
Rising prices for fossil-based raw materials suggest that sooner or later renewable raw materials will, in principle, become economically viable. This paper examines this widespread paradigm. Price linkages like those seen for decades particularly in connection with petrochemical raw materials are now increasingly affecting renewable raw materials. The main driving force is the competing utilisation as an energy source because both fossil-based and renewable raw materials are used primarily for heat, electrical power and mobility. As a result, prices are determined by energy utilisation. Simple observations show how prices for renewable carbon sources are becoming linked to the crude oil price. Whether the application calls for sugar, starch, virgin oils or lignocellulose, the price for the raw material rises with the oil price. Consequently, expectations regarding price trends for fossil-based energy sources can also be utilised for the valuation of alternative processes. However, this seriously calls into question the assumption that a rising crude oil price will favour the economic viability of alternative products and processes based on renewable raw materials. Conversely, it follows that these products and processes must demonstrate economic viability today. Especially in connection with new approaches in white biotechnology, it is evident that, under realistic assumptions, particularly in terms of achievable yields and the optimisation potential of the underlying processes, the route to utilisation is economically viable. This makes the paradigm mentioned at the outset at least very questionable.
Fischer, Michael A; Leidner, Bertil; Kartalis, Nikolaos; Svensson, Anders; Aspelin, Peter; Albiin, Nils; Brismar, Torkel B
2014-01-01
To assess feasibility and image quality (IQ) of a new post-processing algorithm for retrospective extraction of an optimised multi-phase CT (time-resolved CT) of the liver from volumetric perfusion imaging. Sixteen patients underwent clinically indicated perfusion CT using 4D spiral mode of dual-source 128-slice CT. Three image sets were reconstructed: motion-corrected and noise-reduced (MCNR) images derived from 4D raw data; maximum and average intensity projections (time MIP/AVG) of the arterial/portal/portal-venous phases and all phases (total MIP/AVG) derived from retrospective fusion of dedicated MCNR split series. Two readers assessed the IQ, detection rate and evaluation time; one reader assessed image noise and lesion-to-liver contrast. Time-resolved CT was feasible in all patients. Each post-processing step yielded a significant reduction of image noise and evaluation time, maintaining lesion-to-liver contrast. Time MIPs/AVGs showed the highest overall IQ without relevant motion artefacts and best depiction of arterial and portal/portal-venous phases respectively. Time MIPs demonstrated a significantly higher detection rate for arterialised liver lesions than total MIPs/AVGs and the raw data series. Time-resolved CT allows data from volumetric perfusion imaging to be condensed into an optimised multi-phase liver CT, yielding a superior IQ and higher detection rate for arterialised liver lesions than the raw data series. • Four-dimensional computed tomography is limited by motion artefacts and poor image quality. • Time-resolved CT facilitates 4D-CT data visualisation, segmentation and analysis by condensing raw data. • Time-resolved CT demonstrates better image quality than raw data images. • Time-resolved CT improves detection of arterialised liver lesions in cirrhotic patients.
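A minimal sketch of the projection step (maximum- and average-intensity projection over the time axis of a 4-D series) is given below; the phase selection is a placeholder slice, and the motion correction and noise reduction described in the paper are assumed to have happened upstream.

```python
import numpy as np

def time_mip_avg(volumes_4d, phase_indices=None):
    """Maximum- and average-intensity projections along the time axis.

    volumes_4d: array of shape (t, z, y, x), e.g. motion-corrected perfusion
    volumes. phase_indices optionally restricts the projection to one contrast
    phase (arterial, portal, ...). Illustrative sketch only.
    """
    series = volumes_4d if phase_indices is None else volumes_4d[phase_indices]
    return series.max(axis=0), series.mean(axis=0)   # time MIP, time AVG

ct = np.random.rand(20, 8, 64, 64)                    # 20 time points (toy data)
arterial_mip, arterial_avg = time_mip_avg(ct, phase_indices=slice(3, 8))
print(arterial_mip.shape, arterial_avg.shape)          # (8, 64, 64) each
```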
RAWS: The spaceborne radar wind sounder
NASA Technical Reports Server (NTRS)
Moore, Richard K.
1991-01-01
The concept of the Radar Wind Sounder (RAWS) is discussed. The goal of RAWS is to estimate the following three quantities: the echo power, to determine rain rate and surface wind velocity; the mean Doppler frequency, to determine the wind velocity of hydrometeors; and the spread of the Doppler frequency, to determine the turbulent spread of the wind velocity. Researchers made significant progress during the first year. The feasibility of the concept seems certain. Studies indicate that a reasonably sized system can measure in the presence of ice clouds and dense water clouds. No sensitivity problems exist in rainy environments. More research is needed on the application of the radar to the measurement of rain rates and winds at the sea surface.
Multi-Core Programming Design Patterns: Stream Processing Algorithms for Dynamic Scene Perceptions
2014-05-01
processor developed by IBM and other companies, incorporates the POWER5 processor as the Power Processor Element (PPE), one of the early general...deliver a power-efficient single-precision peak performance of more than 256 GFlops. Substantially more raw power became available later, when nVIDIA ...algorithms, including IBM’s Cell/B.E., GPUs from NVidia and AMD and many-core CPUs from Intel. The vast growth of digital video content has been a
Lan, X Y; Zhao, S G; Zheng, N; Li, S L; Zhang, Y D; Liu, H M; McKillip, J; Wang, J Q
2017-06-01
Contamination of raw milk with bacterial pathogens is potentially hazardous to human health. The aim of this study was to evaluate the total bacteria count (TBC) and presence of pathogens in raw milk in Northern China along with the associated herd management practices. A total of 160 raw milk samples were collected from 80 dairy herds in Northern China. All raw milk samples were analyzed for TBC and pathogens by culturing. The results showed that the number of raw milk samples with TBC <2 × 10⁶ cfu/mL and <1 × 10⁵ cfu/mL was 146 (91.25%) and 70 (43.75%), respectively. A total of 84 (52.50%) raw milk samples were Staphylococcus aureus positive, 72 (45.00%) were Escherichia coli positive, 2 (1.25%) were Salmonella positive, 2 (1.25%) were Listeria monocytogenes positive, and 3 (1.88%) were Campylobacter positive. The prevalence of S. aureus was influenced by season, herd size, milking frequency, disinfection frequency, and use of a Dairy Herd Improvement program. The TBC was influenced by season and milking frequency. The correlation between TBC and prevalence of S. aureus or E. coli is significant. The effect size statistical analysis showed that season and herd (but not Dairy Herd Improvement, herd size, milking frequency, disinfection frequency, and area) were the most important factors affecting TBC in raw milk. In conclusion, the presence of bacteria in raw milk was associated with season and herd management practices, and a further comprehensive study would be valuable for effectively characterizing various factors affecting milk microbial quality in bulk tanks in China. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Gasification of torrefied fuel at power generation for decentralized consumers
NASA Astrophysics Data System (ADS)
Safin, R. R.; Khakimzyanov, I. F.; Galyavetdinov, N. R.; Mukhametzyanov, S. R.
2017-10-01
The growing need to satisfy the demand of the population and industry for electric energy, especially in areas remote from centralized energy supply, drives the development of small-scale energy generation. At present, the energy stations in these regions rely on imported fuel, which raises the problems of rising cost and of transporting fuel to the place of consumption. One solution is the use of torrefied waste from the woodworking and agricultural industries as fuel. The influence of the torrefaction temperature of wood fuel on the electric power developed by the generator is considered in this article. The experiments reveal that gasification of torrefied fuel from vegetable raw material produces a generator gas with a higher content of hydrogen and carbon oxide than gasification of the untreated raw material. Owing to this, the engine capacity increases, which has a direct impact on the power generated by the electric generator.
A single-image method for x-ray refractive index CT.
Mittone, A; Gasilov, S; Brun, E; Bravin, A; Coan, P
2015-05-07
X-ray refraction-based computer tomography imaging is a well-established method for nondestructive investigations of various objects. In order to perform the 3D reconstruction of the index of refraction, two or more raw computed tomography phase-contrast images are usually acquired and combined to retrieve the refraction map (i.e. differential phase) signal within the sample. We suggest an approximate method to extract the refraction signal, which uses a single raw phase-contrast image. This method, here applied to analyzer-based phase-contrast imaging, is employed to retrieve the index of refraction map of a biological sample. The achieved accuracy in distinguishing the different tissues is comparable with the non-approximated approach. The suggested procedure can be used for precise refraction computer tomography with the advantage of a reduction of at least a factor of two of both the acquisition time and the dose delivered to the sample with respect to any of the other algorithms in the literature.
NASA Astrophysics Data System (ADS)
Pagnutti, Mary; Ryan, Robert E.; Cazenavette, George; Gold, Maxwell; Harlan, Ryan; Leggett, Edward; Pagnutti, James
2017-01-01
A comprehensive radiometric characterization of raw-data format imagery acquired with the Raspberry Pi 3 and V2.1 camera module is presented. The Raspberry Pi is a high-performance single-board computer designed to educate and solve real-world problems. This small computer supports a camera module that uses a Sony IMX219 8 megapixel CMOS sensor. This paper shows that scientific and engineering-grade imagery can be produced with the Raspberry Pi 3 and its V2.1 camera module. Raw imagery is shown to be linear with exposure and gain (ISO), which is essential for scientific and engineering applications. Dark frame, noise, and exposure stability assessments along with flat fielding results, spectral response measurements, and absolute radiometric calibration results are described. This low-cost imaging sensor, when calibrated to produce scientific quality data, can be used in computer vision, biophotonics, remote sensing, astronomy, high dynamic range imaging, and security applications, to name a few.
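The linearity claim can be checked with a simple fit of mean raw digital number against exposure time; the sketch below assumes the raw frames have already been acquired and dark-corrected upstream, and the exposure values are placeholders.

```python
import numpy as np

def linearity_check(mean_dn, exposure_s):
    """Fit mean raw signal vs. exposure time and report the linear-fit residuals.

    mean_dn: mean dark-corrected digital number of a uniform patch per frame.
    exposure_s: matching exposure times. Illustrative sketch; acquiring and
    dark-correcting the Raspberry Pi raw frames is assumed to happen upstream.
    """
    slope, intercept = np.polyfit(exposure_s, mean_dn, 1)
    fitted = slope * np.asarray(exposure_s) + intercept
    max_dev = np.max(np.abs(np.asarray(mean_dn) - fitted)) / np.ptp(mean_dn)
    return slope, intercept, max_dev        # fractional worst-case departure from linearity

exposures = np.array([1, 2, 4, 8, 16]) / 1000.0            # seconds (assumed values)
means = 60.0 + 5200.0 * exposures + np.random.normal(0, 2, 5)   # synthetic uniform-patch means
print(linearity_check(means, exposures))
```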
Power mulchers can apply hardwood bark mulch
David M. Emanuel
1971-01-01
Two makes of power mulchers were evaluated for their ability to apply raw or processed hardwood bark mulch for use in revegetating disturbed soils. Tests were made to determine the uniformity of bark coverage and distance to which coverage was obtained. Moisture content and particle-size distribution of the barks used were also tested to determine whether or not these...
Wave Energy Prize - 1/50th Testing - RTI Wave Power
Wesley Scharmen
2015-12-18
This submission of data includes all the 1/50th scale testing data completed on the Wave Energy Prize for the RTI Wave Power team, and includes: 1/50th test data (raw & processed); 1/50th test data video and pictures; 1/50th test plans and testing documents; SSTF_Submission (summarized results).
Ahamed, Nizam U; Sundaraj, Kenneth; Poo, Tarn S
2013-03-01
This article describes the design of a robust, inexpensive, easy-to-use, small, and portable online electromyography acquisition system for monitoring electromyography signals during rehabilitation. This single-channel (one-muscle) system was connected via the universal serial bus port to a programmable Windows operating system handheld tablet personal computer for storage and analysis of the data by the end user. The raw electromyography signals were amplified in order to convert them to an observable scale. The inherent noise of 50 Hz (Malaysia) from power-line electromagnetic interference was then eliminated using a single-hybrid IC notch filter. These signals were sampled by a signal processing module and converted into 24-bit digital data. An algorithm was developed and programmed to transmit the digital data to the computer, where it was reassembled and displayed in the computer using software. Finally, the device was furnished with a graphical user interface to display the online muscle strength streaming signal in a handheld tablet personal computer. This battery-operated system was tested on the biceps brachii muscles of 20 healthy subjects, and the results were compared to those obtained with a commercial single-channel (one-muscle) electromyography acquisition system. For activities involving muscle contractions, the results obtained using the developed device were found to be comparable (across various statistical parameters) to those obtained from a commercially available physiological signal monitoring system, for both male and female subjects. In addition, the key advantage of this developed system over the conventional desktop personal computer-based acquisition systems is its portability due to the use of a tablet personal computer in which the results are accessible graphically as well as stored in text (comma-separated value) form.
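The described system removes the 50 Hz interference with a hardware notch IC; as a software analogy only (not the authors' circuit), an equivalent digital notch can be sketched with SciPy, with the sampling rate and Q factor as assumptions.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

def remove_powerline(emg, fs=1000.0, mains_hz=50.0, quality=30.0):
    """Suppress mains interference in a raw EMG trace with a digital notch.

    Software analogy to the hardware notch IC in the described system; the
    sampling rate and Q factor are assumptions, not the device's values.
    """
    b, a = iirnotch(w0=mains_hz, Q=quality, fs=fs)
    return filtfilt(b, a, emg)

t = np.arange(0, 2, 1 / 1000.0)
emg = np.random.normal(0, 0.1, t.size) + 0.5 * np.sin(2 * np.pi * 50 * t)  # EMG + 50 Hz hum
clean = remove_powerline(emg)
print(np.std(emg), np.std(clean))   # variance drops once the hum is notched out
```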
A Computer-Based Laboratory Project for the Study of Stimulus Generalization and Peak Shift
ERIC Educational Resources Information Center
Derenne, Adam; Loshek, Eevett
2009-01-01
This paper describes materials designed for classroom projects on stimulus generalization and peak shift. A computer program (originally written in QuickBASIC) is used for data collection and a Microsoft Excel file with macros organizes the raw data on a spreadsheet and creates generalization gradients. The program is designed for use with human…
Application of ANNs approach for wave-like and heat-like equations
NASA Astrophysics Data System (ADS)
Jafarian, Ahmad; Baleanu, Dumitru
2017-12-01
Artificial neural networks are data processing systems which originate from studies of human brain tissue. The remarkable abilities of these networks help us to derive desired results from complicated raw data. In this study, we intend to duplicate an efficient iterative method for the numerical solution of two famous partial differential equations, namely the wave-like and heat-like problems. It should be noted that many physical phenomena, such as coupling currents in a flat multi-strand two-layer superconducting cable, non-homogeneous elastic waves in soils and earthquake stresses, are described by initial-boundary value wave and heat partial differential equations with variable coefficients. For the numerical solution of these equations, a combination of the power series method and the artificial neural networks approach is used to seek an appropriate bivariate polynomial solution of the mentioned initial-boundary value problem. Finally, several computer simulations confirmed the theoretical results and demonstrated the applicability of the method.
Economic Aspects of the Chemical Industry
NASA Astrophysics Data System (ADS)
Koleske, Joseph V.
Within the formal disciplines of science at traditional universities, through the years, chemistry has grown to have a unique status because of its close correspondence with an industry and with a branch of engineering—the chemical industry and chemical engineering. There is no biology industry, but aspects of biology have closely related disciplines such as fish raising and other aquaculture, animal cloning and other facets of agriculture, ethical drugs of pharmaceutical manufacture, genomics, water quality and conservation, and the like. Although there is no physics industry, there are power generation, electricity, computers, optics, magnetic media, and electronics that exist as industries. However, in the case of chemistry, there is a named industry. This unusual correspondence no doubt came about because in the chemical industry one makes things from raw materials—chemicals—and the science, manufacture, and use of chemicals grew up together during the past century or so.
NASA Technical Reports Server (NTRS)
Ridd, M. K.; Merola, J. A.; Jaynes, R. A.
1983-01-01
Conversion of agricultural land to a variety of urban uses is a major problem along the Wasatch Front, Utah. Although LANDSAT MSS data is a relatively coarse tool for discriminating categories of change in urban-size plots, its availability prompts a thorough test of its power to detect change. The procedures being applied to a test area in Salt Lake County, Utah, where the land conversion problem is acute are presented. The identity of land uses before and after conversion was determined and digital procedures for doing so were compared. Several algorithms were compared, utilizing both raw data and preprocessed data. Verification of results involved high quality color infrared photography and field observation. Two data sets were digitally registered, specific change categories internally identified in the software, results tabulated by computer, and change maps printed at 1:24,000 scale.
Low-SWaP coincidence processing for Geiger-mode LIDAR video
NASA Astrophysics Data System (ADS)
Schultz, Steven E.; Cervino, Noel P.; Kurtz, Zachary D.; Brown, Myron Z.
2015-05-01
Photon-counting Geiger-mode lidar detector arrays provide a promising approach for producing three-dimensional (3D) video at full motion video (FMV) data rates, resolution, and image size from long ranges. However, coincidence processing required to filter raw photon counts is computationally expensive, generally requiring significant size, weight, and power (SWaP) and also time. In this paper, we describe a laboratory test-bed developed to assess the feasibility of low-SWaP, real-time processing for 3D FMV based on Geiger-mode lidar. First, we examine a design based on field programmable gate arrays (FPGA) and demonstrate proof-of-concept results. Then we examine a design based on a first-of-its-kind embedded graphical processing unit (GPU) and compare performance with the FPGA. Results indicate feasibility of real-time Geiger-mode lidar processing for 3D FMV and also suggest utility for real-time onboard processing for mapping lidar systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lincoln, Don
The LHC is the world's highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab's Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that make it all possible.
Tang, Quan; Sheng, Wanqi; Li, Liyuan; Zheng, Liugen; Miao, Chunhui; Sun, Ruoyu
2018-08-01
The alteration behavior of minerals and hazardous elements during simulated combustion (100-1200 °C) of a raw coal collected from a power plant was studied. Thermogravimetric analysis indicated that there were mainly four alteration stages during coal combustion. The transformation behavior of the mineral phases of the raw coal, which were detected by the X-ray polycrystalline diffraction (XRD) technique, mainly depended on the combustion temperature. A series of changes was observed in the intensities of the mineral (e.g. clay) diffraction peaks when the temperature surpassed 600 °C. Mineral phases tended to become simpler and collapsed into amorphous glass when the temperature reached 1200 °C. The characteristics of the functional groups of the raw coal and the high-temperature (1200 °C) ash, studied by Fourier transform infrared spectroscopy (FTIR), were in accordance with the results obtained from the XRD analysis. The volatilization ratios of Co, Cr, Ni and V increased consistently with increasing combustion temperature, suggesting that these elements were gradually released from the organic matter and inorganic minerals of the coal. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salama, A.; Mikhail, M.
Comprehensive software packages have been developed at the Western Research Centre as tools to help coal preparation engineers analyze, evaluate, and control coal cleaning processes. The COal Preparation Software package (COPS) performs three functions: (1) data handling and manipulation, (2) data analysis, including the generation of washability data, performance evaluation and prediction, density and size modeling, evaluation of density and size partition characteristics and attrition curves, and (3) generation of graphics output. The Separation ChARacteristics Estimation software packages (SCARE) were developed to balance raw density or size separation data. The cases of density and size separation data are considered. The generated balanced data can take the balanced or normalized forms. The scaled form is desirable for direct determination of the partition functions (curves). The raw and generated separation data are displayed in tabular and/or graphical forms. The computer software packages described in this paper are valuable tools for coal preparation plant engineers and operators for evaluating process performance, adjusting plant parameters, and balancing raw density or size separation data. These packages have been applied very successfully in many projects carried out by WRC for the Canadian coal preparation industry. The software packages are designed to run on a personal computer (PC).
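The COPS/SCARE code itself is not listed here; as a generic sketch of the partition-curve evaluation such packages automate, the snippet below computes density partition numbers from hypothetical balanced sink-float data and interpolates the cut density and a separation-sharpness measure. All numbers are made up.

```python
# Generic sketch (not the COPS/SCARE code): compute density partition numbers
# from balanced sink-float data.  The partition number of a density fraction is
# the percentage of that feed fraction reporting to the clean-coal product.
import numpy as np

# hypothetical balanced data: mean relative density of each fraction,
# mass (t/h) of the fraction in the feed and in the clean coal
density    = np.array([1.35, 1.45, 1.55, 1.65, 1.80, 2.00])
feed_mass  = np.array([40.0, 25.0, 12.0,  8.0,  6.0,  9.0])
clean_mass = np.array([39.2, 23.5,  8.4,  2.4,  0.6,  0.2])

partition = 100.0 * clean_mass / feed_mass       # % of fraction to clean coal

# cut density d50: density at which 50 % of the material reports to product,
# found by linear interpolation on the partition curve (partition falls with density)
d50 = np.interp(50.0, partition[::-1], density[::-1])
# Ecart probable Ep = (d25 - d75) / 2, one common sharpness convention
d25 = np.interp(25.0, partition[::-1], density[::-1])
d75 = np.interp(75.0, partition[::-1], density[::-1])
print(f"d50 = {d50:.3f}, Ep = {(d25 - d75) / 2:.3f}")
```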
A comparison of thermal behaviors of raw biomass, pyrolytic biochar and their blends with lignite.
Liu, Zhengang; Balasubramanian, Rajasekhar
2013-10-01
In this study, thermal characteristics of raw biomass, corresponding pyrolytic biochars and their blends with lignite were investigated. The results showed that pyrolytic biochars had better fuel qualities than their parent biomass. In comparison to raw biomass, the combustion of the biochars shifted towards higher temperature and occurred at continuous temperature zones. The biochar addition in lignite increased the reactivities of the blends. Obvious interactions were observed between biomass/biochar and lignite and resulted in increased total burnout, shortened combustion time and increased maximum weight loss rate, indicating increased combustion efficiencies than that of lignite combustion alone. Regarding ash-related problems, the tendency to form slagging and fouling increased, when pyrolytic biochars were co-combusted with coal. This present study demonstrated that the pyrolytic biochars were more suitable than raw biomass to be co-combusted with lignite for energy generation in existing coal-fired power plants. Copyright © 2013 Elsevier Ltd. All rights reserved.
Mendikute, Alberto; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai
2017-01-01
Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g., 0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras. PMID:28891946
Mendikute, Alberto; Yagüe-Fabra, José A; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai
2017-09-09
Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g. 0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras.
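The abstract describes in-process bundle adjustment and self-calibration without giving algorithmic details; as a generic illustration of the underlying bundle-adjustment step only, the sketch below builds a reprojection residual for synthetic targets and cameras and minimises it with scipy.optimize.least_squares. The pinhole model, fixed focal length and all data are assumptions, and lens-distortion self-calibration is omitted.

```python
# Toy bundle-adjustment sketch on synthetic data (simplified pinhole model with
# a fixed, known focal length; not the paper's in-process or self-calibrating code).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

F = 1000.0                                   # focal length in pixels (assumption)

def project(points, rvec, tvec):
    """Project 3-D points with an angle-axis rotation, translation and focal length F."""
    pc = Rotation.from_rotvec(rvec).apply(points) + tvec
    return F * pc[:, :2] / pc[:, 2:3]

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs):
    cams = params[:n_cams * 6].reshape(n_cams, 6)
    pts = params[n_cams * 6:].reshape(n_pts, 3)
    res = np.empty_like(obs)
    for i in range(n_cams):
        sel = cam_idx == i
        res[sel] = project(pts[pt_idx[sel]], cams[i, :3], cams[i, 3:]) - obs[sel]
    return res.ravel()

rng = np.random.default_rng(0)
pts_true = rng.uniform(-1, 1, (20, 3)) + [0.0, 0.0, 10.0]       # 20 optical targets
cams_true = np.array([[0, 0, 0, 0, 0, 0],                       # 3 camera views
                      [0, 0.1, 0, -1, 0, 0],
                      [0, -0.1, 0, 1, 0, 0]], dtype=float)
cam_idx = np.repeat(np.arange(3), 20)
pt_idx = np.tile(np.arange(20), 3)
obs = np.vstack([project(pts_true, cams_true[i, :3], cams_true[i, 3:]) for i in range(3)])

# start from a perturbed guess and jointly refine camera poses and target coordinates
x0 = np.concatenate([(cams_true + 0.01 * rng.normal(size=cams_true.shape)).ravel(),
                     (pts_true + 0.05 * rng.normal(size=pts_true.shape)).ravel()])
sol = least_squares(residuals, x0, args=(3, 20, cam_idx, pt_idx, obs), method="lm")
print("RMS reprojection error (px): %.4f" % np.sqrt(np.mean(sol.fun ** 2)))
```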
Bailey, Sarah F; Scheible, Melissa K; Williams, Christopher; Silva, Deborah S B S; Hoggan, Marina; Eichman, Christopher; Faith, Seth A
2017-11-01
Next-generation Sequencing (NGS) is a rapidly evolving technology with demonstrated benefits for forensic genetic applications, and the strategies to analyze and manage the massive NGS datasets are currently in development. Here, the computing, data storage, connectivity, and security resources of the Cloud were evaluated as a model for forensic laboratory systems that produce NGS data. A complete front-to-end Cloud system was developed to upload, process, and interpret raw NGS data using a web browser dashboard. The system was extensible, demonstrating analysis capabilities of autosomal and Y-STRs from a variety of NGS instrumentation (Illumina MiniSeq and MiSeq, and Oxford Nanopore MinION). NGS data for STRs were concordant with standard reference materials previously characterized with capillary electrophoresis and Sanger sequencing. The computing power of the Cloud was implemented with on-demand auto-scaling to allow multiple file analysis in tandem. The system was designed to store resulting data in a relational database, amenable to downstream sample interpretations and databasing applications following the most recent guidelines in nomenclature for sequenced alleles. Lastly, a multi-layered Cloud security architecture was tested and showed that industry standards for securing data and computing resources were readily applied to the NGS system without disadvantageous effects for bioinformatic analysis, connectivity or data storage/retrieval. The results of this study demonstrate the feasibility of using Cloud-based systems for secured NGS data analysis, storage, databasing, and multi-user distributed connectivity. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Katz, Randy H.; Anderson, Thomas E.; Ousterhout, John K.; Patterson, David A.
1991-01-01
Rapid advances in high performance computing are making possible more complete and accurate computer-based modeling of complex physical phenomena, such as weather front interactions, dynamics of chemical reactions, numerical aerodynamic analysis of airframes, and ocean-land-atmosphere interactions. Many of these 'grand challenge' applications are as demanding of the underlying storage system, in terms of their capacity and bandwidth requirements, as they are of the computational power of the processor. A global view of the Earth's ocean chlorophyll and land vegetation requires over 2 terabytes of raw satellite image data. In this paper, we describe our planned research program in high capacity, high bandwidth storage systems. The project has four overall goals. First, we will examine new methods for high capacity storage systems, made possible by low cost, small form factor magnetic and optical tape systems. Second, access to the storage system will be low latency and high bandwidth. To achieve this, we must interleave data transfer at all levels of the storage system, including devices, controllers, servers, and communications links. Latency will be reduced by extensive caching throughout the storage hierarchy. Third, we will provide effective management of a storage hierarchy, extending the techniques already developed for the Log Structured File System. Finally, we will construct a prototype high capacity file server, suitable for use on the National Research and Education Network (NREN). Such research must be a cornerstone of any coherent program in high performance computing and communications.
Processing woody biomass with a modified horizontal grinder
Dana Mitchell; John Klepac
2008-01-01
This study documents the production rate and cost of producing woody biomass chips for use in a power plant. The power plant has specific raw material handling requirements. Output from a 3-knife chipper, a tub grinder, and a horizontal grinder was considered. None of the samples from these machines met the specifications needed. A horizontal grinder was modified to...
TRIO: Burst Buffer Based I/O Orchestration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Teng; Oral, H Sarp; Pritchard, Michael
The growing computing power on leadership HPC systems is often accompanied by ever-escalating failure rates. Checkpointing is a common defensive mechanism used by scientific applications for failure recovery. However, directly writing the large and bursty checkpointing dataset to the parallel filesystem can incur significant I/O contention on storage servers. Such contention in turn degrades the raw bandwidth utilization of storage servers and prolongs the average job I/O time of concurrent applications. Recently, burst buffers have been proposed as an intermediate layer to absorb the bursty I/O traffic from compute nodes to the storage backend. But an I/O orchestration mechanism is still desired to efficiently move checkpointing data from burst buffers to the storage backend. In this paper, we propose a burst buffer based I/O orchestration framework, named TRIO, to intercept and reshape the bursty writes for better sequential write traffic to storage servers. Meanwhile, TRIO coordinates the flushing orders among concurrent burst buffers to alleviate the contention on storage server bandwidth. Our experimental results reveal that TRIO can deliver 30.5% higher bandwidth and reduce the average job I/O time by 37% on average for data-intensive applications in various checkpointing scenarios.
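The abstract describes TRIO's write reshaping only at a high level; the toy sketch below (not the TRIO code) illustrates the core idea: buffered blocks arriving in bursty order are grouped per file and emitted in offset order, so the storage backend sees sequential writes. All names and data are illustrative.

```python
# Toy illustration (not the TRIO implementation): reorder buffered write blocks
# so that each file is flushed to the backend in sequential offset order.
from collections import defaultdict

def reshape_flush(buffered_blocks):
    """buffered_blocks: list of (file_id, offset, data) in arrival (bursty) order.
    Returns a flush schedule grouped by file and sorted by offset."""
    per_file = defaultdict(list)
    for file_id, offset, data in buffered_blocks:
        per_file[file_id].append((offset, data))
    schedule = []
    for file_id in sorted(per_file):                     # one file at a time
        for offset, data in sorted(per_file[file_id]):   # sequential within the file
            schedule.append((file_id, offset, data))
    return schedule

burst = [("ckpt.0", 4096, b"b"), ("ckpt.1", 0, b"x"), ("ckpt.0", 0, b"a"),
         ("ckpt.1", 4096, b"y")]
for file_id, offset, _ in reshape_flush(burst):
    print(file_id, offset)
```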
Lincoln, Don
2018-01-16
The LHC is the world's highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab's Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that make it all possible.
StagLab: Post-Processing and Visualisation in Geodynamics
NASA Astrophysics Data System (ADS)
Crameri, Fabio
2017-04-01
Despite being simplifications of nature, today's Geodynamic numerical models can, often do, and sometimes have to become very complex. Additionally, a steadily-increasing amount of raw model data results from more elaborate numerical codes and the still continuously-increasing computational power available for their execution. The current need for efficient post-processing and sensible visualisation is thus apparent. StagLab (www.fabiocrameri.ch/software) provides such much-needed strongly-automated post-processing in combination with state-of-the-art visualisation. Written in MatLab, StagLab is simple, flexible, efficient and reliable. It produces figures and movies that are both fully-reproducible and publication-ready. StagLab's post-processing capabilities include numerous diagnostics for plate tectonics and mantle dynamics. Featured are accurate plate-boundary identification, slab-polarity recognition, plate-bending derivation, mantle-plume detection, and surface-topography component splitting. These and many other diagnostics are derived conveniently from only a few parameter fields thanks to powerful image processing tools and other capable algorithms. Additionally, StagLab aims to prevent scientific visualisation pitfalls that are, unfortunately, still too common in the Geodynamics community. Misinterpretation of raw data and exclusion of colourblind people introduced with the continuous use of the rainbow (a.k.a. jet) colour scheme is just one, but a dramatic example (e.g., Rogowitz and Treinish, 1998; Light and Bartlein, 2004; Borland and Ii, 2007). StagLab is currently optimised for binary StagYY output (e.g., Tackley 2008), but is adjustable for the potential use with other Geodynamic codes. Additionally, StagLab's post-processing routines are open-source. REFERENCES Borland, D., and R. M. T. Ii (2007), Rainbow color map (still) considered harmful, IEEE Computer Graphics and Applications, 27(2), 14-17. Light, A., and P. J. Bartlein (2004), The end of the rainbow? Color schemes for improved data graphics, Eos Trans. AGU, 85(40), 385-391. Rogowitz, B. E., and L. A. Treinish (1998), Data visualization: the end of the rainbow, IEEE Spectrum, 35(12), 52-59, doi:10.1109/6.736450. Tackley, P.J (2008) Modelling compressible mantle convection with large viscosity contrasts in a three-dimensional spherical shell using the yin-yang grid. Physics of the Earth and Planetary Interiors 171(1-4), 7-18.
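StagLab itself is written in MatLab; as a language-agnostic illustration of the colour-map pitfall the abstract highlights, the short Python sketch below renders the same synthetic field with the rainbow ("jet") scheme and with a perceptually uniform alternative.

```python
# Generic illustration (StagLab itself is written in MatLab): render the same
# field with the rainbow ("jet") colour map and with a perceptually uniform
# alternative, the pitfall the abstract warns about.
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
field = np.exp(-(x**2 + y**2)) * np.cos(3 * x)     # stand-in for a model field

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
for ax, cmap in zip(axes, ["jet", "viridis"]):
    im = ax.pcolormesh(x, y, field, cmap=cmap, shading="auto")
    ax.set_title(cmap)
    fig.colorbar(im, ax=ax)
fig.savefig("colourmap_comparison.png", dpi=150)
```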
Computer simulation of the NASA water vapor electrolysis reactor
NASA Technical Reports Server (NTRS)
Bloom, A. M.
1974-01-01
The water vapor electrolysis (WVE) reactor is a spacecraft waste reclamation system for extended-mission manned spacecraft. The WVE reactor's raw material is water, its product oxygen. A computer simulation of the WVE operational processes provided the data required for an optimal design of the WVE unit. The simulation process was implemented with the aid of a FORTRAN IV routine.
Kysat-2 electrical power system design and analysis
NASA Astrophysics Data System (ADS)
Molton, Brandon L.
In 2012, Kentucky Space, LLC was offered the opportunity to design KYSat-2, a CubeSat mission which utilizes an experimental stellar-tracking camera system to test its effectiveness in determining the spacecraft's attitude while on orbit. Kentucky Space contracted Morehead State University to design the electrical power system (EPS), which handles all power generation as well as power management and distribution to each of the KYSat-2 subsystems, including the flight computer, communications systems, and the experimental payload itself. This decision came as a result of the success of Morehead State's previous CubeSat mission, CXBN, which utilized a custom-built power system and successfully launched in 2011. For the KYSat-2 EPS to be successful, it was important to design a system efficient enough to handle the power limitations of the space environment and robust enough to handle the challenges of powering a spacecraft on orbit. The system must be developed with a positive power budget, generating and storing more power than will be consumed by KYSat-2 over the mission lifetime. To accomplish this goal, deployable solar panels are used to double the usable surface area of the satellite for power generation, effectively doubling the usable power of the satellite system on orbit. The KYSat-2 EPS includes a set of gold-plated deployable solar panels utilizing solar cells with a 26% efficiency. Power generated by this system is fed into a shunt regulator circuit which regulates the generated voltage for storage in a 3-cell series battery pack. Stored power is maintained using a balancing circuit which increases the efficiency and lifetime of the cells on orbit. Power distribution includes raw battery voltage, four high-power outputs (two 5 V and two 3.3 V), and a low-noise, low-power 3.3 V output for use with noise-sensitive devices, such as microcontrollers. The solar panel deployment system utilizes a nichrome wire that draws current directly from the battery pack when a solid-state relay receives a logic-high signal. This nichrome wire, while under current, cuts a nylon wire which holds the solar panels in a stowed state prior to deployment on orbit. All logic control, current/voltage measurement, and commanding/communications are handled by a Texas Instruments MSP430 microcontroller over UART serial communications. Results from the completed EPS demonstrated high-power output efficiencies approaching 90% under the highest anticipated on-orbit loads. They showed maximum noise levels of approximately +/- 41.30 mV at 83.10 MHz under maximum load. The low-noise 3.3 V outputs displayed very little noise; however, this came at the cost of efficiency, showing only 26% efficiency at the outputs under maximum load. The EPS has been successfully integrated with other KYSat-2 subsystems, including the spacecraft flight computer, which was able to communicate with the EPS and carry out its functions while running solely on power distributed by the EPS. Finally, testing of the solar panels showed that a positive voltage margin was achieved under light and that the deployment system was able to cut the nylon wire completely under control of the EPS.
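As a back-of-the-envelope companion to the positive-power-budget requirement described above, here is a simple orbit-average budget calculation. Only the 26% cell efficiency and the roughly 90% converter efficiency come from the abstract; every other number (panel area, sunlit fraction, load table) is an illustrative assumption, not a KYSat-2 value.

```python
# Illustrative orbit-average power budget (numbers other than the quoted
# efficiencies are assumptions, not actual KYSat-2 figures): the design goal
# is generation greater than consumption per orbit.
cell_eff      = 0.26      # solar cell efficiency (quoted in the abstract)
converter_eff = 0.90      # high-power output efficiency (approx., quoted in the abstract)
panel_area    = 0.06      # m^2 of illuminated cell area with panels deployed (assumed)
solar_flux    = 1361.0    # W/m^2
sunlit_frac   = 0.62      # fraction of a LEO orbit in sunlight (assumed)

gen_orbit_avg = cell_eff * panel_area * solar_flux * sunlit_frac * converter_eff

loads_w = {"flight computer": 0.4, "EPS + MCU": 0.2, "radio (duty-cycled)": 1.0,
           "payload camera (duty-cycled)": 0.5}          # assumed orbit-average draws
consumption = sum(loads_w.values())

margin = gen_orbit_avg - consumption
print(f"orbit-average generation {gen_orbit_avg:.2f} W, "
      f"consumption {consumption:.2f} W, margin {margin:+.2f} W")
```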
NASA Astrophysics Data System (ADS)
Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias
2012-06-01
A Wireless Visual Sensor Network (WVSN) is an emerging field which combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. The energy budget in these networks is limited to the batteries because of the wireless nature of the application. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSNs) and the communication from VSN to server should consume as little energy as possible. Transmitting raw images wirelessly consumes a lot of energy and requires high communication bandwidth. Data compression methods reduce data efficiently and hence are effective in reducing the communication cost in a WVSN. In this paper, we compare the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine the compression algorithms which can efficiently compress bi-level images and whose computational complexity is suitable for the computational platforms used in WVSNs. These results can be used as a road map for the selection of compression methods for different sets of constraints in WVSNs.
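The six bi-level codecs compared in the paper are not reproduced here; the sketch below only illustrates how a compression-ratio comparison of this kind can be set up, using two simple stand-ins (1-D run-length encoding and zlib) on a synthetic packed bi-level image.

```python
# Generic sketch (not the six codecs compared in the paper): measure the
# compression ratio of a packed bi-level image under two simple stand-ins,
# 1-D run-length encoding and zlib.
import zlib
import numpy as np

def run_length_encode(bits):
    """Encode a flat 0/1 array as (value, run-length) pairs."""
    change = np.flatnonzero(np.diff(bits)) + 1
    starts = np.concatenate(([0], change))
    lengths = np.diff(np.concatenate((starts, [bits.size])))
    return list(zip(bits[starts].tolist(), lengths.tolist()))

# synthetic bi-level "document" image: mostly white with a few black regions
img = np.zeros((480, 640), dtype=np.uint8)
img[100:140, 50:600] = 1
img[300:320, 200:220] = 1

raw_bytes = np.packbits(img).tobytes()               # 1 bit per pixel
rle_pairs = run_length_encode(img.ravel())
rle_bytes = len(rle_pairs) * 3                       # ~1 byte value + 2 byte length
zlib_bytes = len(zlib.compress(raw_bytes, 9))

print(f"raw: {len(raw_bytes)} B, RLE: {rle_bytes} B, zlib: {zlib_bytes} B")
print(f"ratios  RLE: {len(raw_bytes)/rle_bytes:.1f}x  zlib: {len(raw_bytes)/zlib_bytes:.1f}x")
```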
Exploring the Universe with WISE and Cloud Computing
NASA Technical Reports Server (NTRS)
Benford, Dominic J.
2011-01-01
WISE is a recently-completed astronomical survey mission that has imaged the entire sky in four infrared wavelength bands. The large quantity of science images returned consists of 2,776,922 individual snapshots in various locations in each band which, along with ancillary data, totals around 110TB of raw, uncompressed data. Making the most use of this data requires advanced computing resources. I will discuss some initial attempts in the use of cloud computing to make this large problem tractable.
Robotic Online Path Planning on Point Cloud.
Liu, Ming
2016-05-01
This paper deals with the path-planning problem for mobile wheeled or tracked robots which drive in 2.5-D environments, where the traversable surface is usually considered as a 2-D manifold embedded in a 3-D ambient space. Specifically, we aim at solving the 2.5-D navigation problem using a raw point cloud as input. The proposed method is independent of traditional surface parametrization or reconstruction methods, such as a meshing process, which generally have high computational complexity. Instead, we utilize the output of a 3-D tensor voting framework on the raw point clouds. The computation of tensor voting is accelerated by an optimized implementation on a graphics computation unit. Based on the tensor voting results, a novel local Riemannian metric is defined using the saliency components, which helps the modeling of the latent traversable surface. Using the proposed metric, we show by experiments that the geodesic in the 3-D tensor space leads to rational path-planning results. Compared to traditional methods, the results reveal the advantages of the proposed method in terms of smoothing the robot maneuver while considering the minimum travel distance.
Power throttling of collections of computing elements
Bellofatto, Ralph E [Ridgefield, CT; Coteus, Paul W [Yorktown Heights, NY; Crumley, Paul G [Yorktown Heights, NY; Gara, Alan G [Mount Kidsco, NY; Giampapa, Mark E [Irvington, NY; Gooding,; Thomas, M [Rochester, MN; Haring, Rudolf A [Cortlandt Manor, NY; Megerian, Mark G [Rochester, MN; Ohmacht, Martin [Yorktown Heights, NY; Reed, Don D [Mantorville, MN; Swetz, Richard A [Mahopac, NY; Takken, Todd [Brewster, NY
2011-08-16
An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
Systematically linking tranSMART, Galaxy and EGA for reusing human translational research data
Zhang, Chao; Bijlard, Jochem; Staiger, Christine; Scollen, Serena; van Enckevort, David; Hoogstrate, Youri; Senf, Alexander; Hiltemann, Saskia; Repo, Susanna; Pipping, Wibo; Bierkens, Mariska; Payralbe, Stefan; Stringer, Bas; Heringa, Jaap; Stubbs, Andrew; Bonino Da Silva Santos, Luiz Olavo; Belien, Jeroen; Weistra, Ward; Azevedo, Rita; van Bochove, Kees; Meijer, Gerrit; Boiten, Jan-Willem; Rambla, Jordi; Fijneman, Remond; Spalding, J. Dylan; Abeln, Sanne
2017-01-01
The availability of high-throughput molecular profiling techniques has provided more accurate and informative data for regular clinical studies. Nevertheless, complex computational workflows are required to interpret these data. Over the past years, the data volume has been growing explosively, requiring robust human data management to organise and integrate the data efficiently. For this reason, we set up an ELIXIR implementation study, together with the Translational research IT (TraIT) programme, to design a data ecosystem that is able to link raw and interpreted data. In this project, the data from the TraIT Cell Line Use Case (TraIT-CLUC) are used as a test case for this system. Within this ecosystem, we use the European Genome-phenome Archive (EGA) to store raw molecular profiling data; tranSMART to collect interpreted molecular profiling data and clinical data for corresponding samples; and Galaxy to store, run and manage the computational workflows. We can integrate these data by linking their repositories systematically. To showcase our design, we have structured the TraIT-CLUC data, which contain a variety of molecular profiling data types, for storage in both tranSMART and EGA. The metadata provided allows referencing between tranSMART and EGA, fulfilling the cycle of data submission and discovery; we have also designed a data flow from EGA to Galaxy, enabling reanalysis of the raw data in Galaxy. In this way, users can select patient cohorts in tranSMART, trace them back to the raw data and perform (re)analysis in Galaxy. Our conclusion is that the majority of metadata does not necessarily need to be stored (redundantly) in both databases, but that instead FAIR persistent identifiers should be available for well-defined data ontology levels: study, data access committee, physical sample, data sample and raw data file. This approach will pave the way for the stable linkage and reuse of data. PMID:29123641
Systematically linking tranSMART, Galaxy and EGA for reusing human translational research data.
Zhang, Chao; Bijlard, Jochem; Staiger, Christine; Scollen, Serena; van Enckevort, David; Hoogstrate, Youri; Senf, Alexander; Hiltemann, Saskia; Repo, Susanna; Pipping, Wibo; Bierkens, Mariska; Payralbe, Stefan; Stringer, Bas; Heringa, Jaap; Stubbs, Andrew; Bonino Da Silva Santos, Luiz Olavo; Belien, Jeroen; Weistra, Ward; Azevedo, Rita; van Bochove, Kees; Meijer, Gerrit; Boiten, Jan-Willem; Rambla, Jordi; Fijneman, Remond; Spalding, J Dylan; Abeln, Sanne
2017-01-01
The availability of high-throughput molecular profiling techniques has provided more accurate and informative data for regular clinical studies. Nevertheless, complex computational workflows are required to interpret these data. Over the past years, the data volume has been growing explosively, requiring robust human data management to organise and integrate the data efficiently. For this reason, we set up an ELIXIR implementation study, together with the Translational research IT (TraIT) programme, to design a data ecosystem that is able to link raw and interpreted data. In this project, the data from the TraIT Cell Line Use Case (TraIT-CLUC) are used as a test case for this system. Within this ecosystem, we use the European Genome-phenome Archive (EGA) to store raw molecular profiling data; tranSMART to collect interpreted molecular profiling data and clinical data for corresponding samples; and Galaxy to store, run and manage the computational workflows. We can integrate these data by linking their repositories systematically. To showcase our design, we have structured the TraIT-CLUC data, which contain a variety of molecular profiling data types, for storage in both tranSMART and EGA. The metadata provided allows referencing between tranSMART and EGA, fulfilling the cycle of data submission and discovery; we have also designed a data flow from EGA to Galaxy, enabling reanalysis of the raw data in Galaxy. In this way, users can select patient cohorts in tranSMART, trace them back to the raw data and perform (re)analysis in Galaxy. Our conclusion is that the majority of metadata does not necessarily need to be stored (redundantly) in both databases, but that instead FAIR persistent identifiers should be available for well-defined data ontology levels: study, data access committee, physical sample, data sample and raw data file. This approach will pave the way for the stable linkage and reuse of data.
NASA Technical Reports Server (NTRS)
Fleig, A. J.; Heath, D. F.; Klenk, K. F.; Oslik, N.; Lee, K. D.; Park, H.; Bhartia, P. K.; Gordon, D.
1983-01-01
Raw data from the Solar Backscattered Ultraviolet/Total Ozone Mapping Spectrometer (SBUV/TOMS) Nimbus 7 operation are available on computer tape. These data are contained on two separate sets of RUTs (Raw Units Tapes) for SBUV and TOMS, labelled RUT-S and RUT-T respectively. The RUT-S and RUT-T tapes contain uncalibrated radiance and irradiance data, housekeeping data, wavelength and electronic calibration data, instrument field-of-view location and solar ephemeris information. These tapes also contain colocated cloud, terrain pressure and snow/ice thickness data, each derived from an independent source. The "RUT User's Guide" describes the SBUV and TOMS experiments, the instrument calibration and performance, operating schedules, and data coverage, and provides an assessment of RUT-S and -T data quality. It also provides detailed information on the data available on the computer tapes.
An effective rectification method for lenselet-based plenoptic cameras
NASA Astrophysics Data System (ADS)
Jin, Jing; Cao, Yiwei; Cai, Weijia; Zheng, Wanlu; Zhou, Ping
2016-10-01
The lenselet-based plenoptic camera has recently drawn a lot of attention in the field of computational photography. The additional information inherent in the light field allows a wide range of applications, but some preliminary processing of the raw image is necessary before further operations. In this paper, an effective method is presented for the rotation rectification of the raw image. The rotation is caused by the imperfect positioning of the micro-lens array relative to the sensor plane in commercially available Lytro plenoptic cameras. The key to our method is locating the center of each micro-lens image, i.e., the image projected by a single micro-lens. Because of vignetting, the pixel values at the centers of the micro-lens images are higher than those at the peripheries. A mask is applied to probe the micro-lens image and locate the center area by finding the local maximum response. The error of the center coordinate estimate is corrected and the angle of rotation is computed via a subsequent line fitting. The algorithm is performed on two images captured by different Lytro cameras. The angles of rotation are -0.3600° and -0.0621° respectively, and the rectified raw image is useful and reliable for further operations, such as extraction of the sub-aperture images. The experimental results demonstrate that our method is efficient and accurate.
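A sketch of the two steps the abstract describes (local-maximum detection of micro-lens image centres, then a line fit to recover the rotation angle), applied to a synthetic vignetted white image. The lenslet pitch, spot model and thresholds are assumptions, not Lytro parameters.

```python
# Sketch of the two steps described above, on a synthetic white (raw) image:
# find micro-lens image centres as local maxima of the vignetted intensity,
# then fit a line through one row of centres to estimate the rotation angle.
import numpy as np
from scipy.ndimage import maximum_filter

pitch, angle_true = 14, np.deg2rad(-0.3)            # lenslet pitch in px, true tilt (assumed)
h, w = 280, 420
img = np.zeros((h, w))
yy, xx = np.mgrid[0:h, 0:w]
for gy in range(6, h - 6, pitch):
    for gx in range(6, w - 6, pitch):
        # rotate the nominal grid slightly, mimicking sensor/MLA misalignment
        cx = gx * np.cos(angle_true) - gy * np.sin(angle_true)
        cy = gx * np.sin(angle_true) + gy * np.cos(angle_true)
        img += np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 18.0)   # vignetted spot

# centres = pixels that are the local maximum within one lenslet neighbourhood
peaks = (img == maximum_filter(img, size=pitch - 2)) & (img > 0.5)
py, px = np.nonzero(peaks)

# pick the centres belonging to the topmost (roughly horizontal) row and fit a line
row = np.abs(py - py.min()) < pitch / 2
slope, _ = np.polyfit(px[row], py[row], 1)
print(f"estimated rotation: {np.degrees(np.arctan(slope)):.2f} deg (true -0.30 deg)")
```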
Suitability of a new mixed-strain starter for manufacturing uncooked raw ewe's milk cheeses.
Feutry, Fabienne; Torre, Paloma; Arana, Ines; Garcia, Susana; Pérez Elortondo, Francisco J; Berthier, Françoise
2016-06-01
Most raw milk Ossau-Iraty cheeses are currently manufactured on-farm using the same commercial streptococcal-lactococcal starter (S1). One way to enhance the microbial diversity that gives raw milk its advantages for cheese-making is to formulate new starters combining diverse, characterized strains. A new starter (OI) combining 6 raw milk strains of lactococci, recently isolated and characterized, was tested in parallel with the current starter by making 12 Ossau-Iraty raw milk cheeses at 3 farmhouses under the conditions prevailing at each farm. Compliance of the sensory characteristics with those expected by the Ossau-Iraty professionals, physicochemical parameters and coliforms were quantified at key manufacturing steps. The new starter OI gave cheeses with acceptable compliance, but lower compliance than the S1 cheeses under most manufacturing conditions, while managing coliform levels as well as starter S1. This lower compliance was due more to the absence of Streptococcus thermophilus in starter OI than to the nature of the lactococcal strains present in starter OI. The study also shows that variations in 5 technological parameters during the first day of manufacture, within the range of values applied in the 3 farmhouses, are powerful tools for diversifying the scores for the sensory characteristics investigated. Copyright © 2015 Elsevier Ltd. All rights reserved.
GPS Data Filtration Method for Drive Cycle Analysis Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duran, A.; Earleywine, M.
2013-02-01
When employing GPS data acquisition systems to capture vehicle drive-cycle information, a number of errors often appear in the raw data samples, such as sudden signal loss, extraneous or outlying data points, speed drifting, and signal white noise, all of which limit the quality of field data for use in downstream applications. Unaddressed, these errors significantly impact the reliability of source data and limit the effectiveness of traditional drive-cycle analysis approaches and vehicle simulation software. Without reliable speed and time information, the validity of derived metrics for drive cycles, such as acceleration, power, and distance, becomes questionable. This study explores some of the common sources of error present in raw onboard GPS data and presents a detailed filtering process designed to correct for these issues. Test data from both light and medium/heavy duty applications are examined to illustrate the effectiveness of the proposed filtration process across the range of vehicle vocations. Graphical comparisons of raw and filtered cycles are presented, and statistical analyses are performed to determine the effects of the proposed filtration process on raw data. Finally, an evaluation of the overall benefits of data filtration on raw GPS data is presented, along with potential areas for continued research.
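The report's full filtration process is not reproduced here; the sketch below only illustrates two of the error classes mentioned (outlying points and white noise) being handled with a median filter followed by a moving average on a synthetic 1 Hz speed trace.

```python
# Generic sketch (not the report's full filtration chain): remove outlying speed
# samples with a median filter, then suppress white noise with a moving average.
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(1)
t = np.arange(0, 300)                               # 1 Hz samples, 5 minutes
speed = 15 * (1 - np.cos(2 * np.pi * t / 300)) / 2  # smooth synthetic drive trace (m/s)
raw = speed + rng.normal(0, 0.4, t.size)            # GPS white noise
raw[[40, 41, 200]] += [25, -10, 30]                 # spurious outlying points

despiked = medfilt(raw, kernel_size=5)              # kills isolated spikes
kernel = np.ones(7) / 7
smoothed = np.convolve(despiked, kernel, mode="same")

# derived metrics become usable again once the trace is cleaned
accel = np.gradient(smoothed, t)
print("max |accel| raw: %.1f m/s^2, filtered: %.1f m/s^2"
      % (np.abs(np.gradient(raw, t)).max(), np.abs(accel).max()))
```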
DMSP SSJ4 Data Restoration, Classification, and On-Line Data Access
NASA Technical Reports Server (NTRS)
Wing, Simon; Bredekamp, Joseph H. (Technical Monitor)
2000-01-01
Compress and clean raw data file for permanent storage: We have identified various error conditions/types and developed algorithms to get rid of these errors/noises, including the more complicated noise in the newer data sets (status = 100% complete). Internet access of compacted raw data: It is now possible to access the raw data via our web site, http://www.jhuapl.edu/Aurora/index.html. The software to read and plot the compacted raw data is also available from the same web site. The users can now download the raw data, read, plot, or manipulate the data as they wish on their own computer. The users are also able to access the cleaned data sets. Internet access of the color spectrograms: This task has also been completed. It is now possible to access the spectrograms from the web site mentioned above. Improve the particle precipitation region classification: The algorithm for doing this task has been developed and implemented. As a result, the accuracies improved. The web site now routinely distributes the results of applying the new algorithm to the cleaned data set. Mark the classification region on the spectrograms: The software to mark the classification region in the spectrograms has been completed. This is also available from our web site.
NASA Astrophysics Data System (ADS)
Bezruczko, N.; Fatani, S. S.
2010-07-01
Social researchers commonly compute ordinal raw scores and ratings to quantify human aptitudes, attitudes, and abilities but without a clear understanding of their limitations for scientific knowledge. In this research, common ordinal measures were compared to higher order linear (equal interval) scale measures to clarify implications for objectivity, precision, ontological coherence, and meaningfulness. Raw score gains, residualized raw gains, and linear gains calculated with a Rasch model were compared between Time 1 and Time 2 for observations from two early childhood learning assessments. Comparisons show major inconsistencies between ratings and linear gains. When gain distribution was dense, relatively compact, and initial status near item mid-range, linear measures and ratings were indistinguishable. When Time 1 status was distributed more broadly and magnitude of change variable, ratings were unrelated to linear gain, which emphasizes problematic implications of ordinal measures. Surprisingly, residualized gain scores did not significantly improve ordinal measurement of change. In general, raw scores and ratings may be meaningful in specific samples to establish order and high/low rank, but raw score differences suffer from non-uniform units. Even meaningfulness of sample comparisons, as well as derived proportions and percentages, are seriously affected by rank order distortions and should be avoided.
Operation of the Australian Store.Synchrotron for macromolecular crystallography
Meyer, Grischa R.; Aragão, David; Mudie, Nathan J.; Caradoc-Davies, Tom T.; McGowan, Sheena; Bertling, Philip J.; Groenewegen, David; Quenette, Stevan M.; Bond, Charles S.; Buckle, Ashley M.; Androulakis, Steve
2014-01-01
The Store.Synchrotron service, a fully functional, cloud computing-based solution to raw X-ray data archiving and dissemination at the Australian Synchrotron, is described. The service automatically receives and archives raw diffraction data, related metadata and preliminary results of automated data-processing workflows. Data are able to be shared with collaborators and opened to the public. In the nine months since its deployment in August 2013, the service has handled over 22.4 TB of raw data (∼1.7 million diffraction images). Several real examples from the Australian crystallographic community are described that illustrate the advantages of the approach, which include real-time online data access and fully redundant, secure storage. Discoveries in biological sciences increasingly require multidisciplinary approaches. With this in mind, Store.Synchrotron has been developed as a component within a greater service that can combine data from other instruments at the Australian Synchrotron, as well as instruments at the Australian neutron source ANSTO. It is therefore envisaged that this will serve as a model implementation of raw data archiving and dissemination within the structural biology research community. PMID:25286837
Operation of the Australian Store.Synchrotron for macromolecular crystallography.
Meyer, Grischa R; Aragão, David; Mudie, Nathan J; Caradoc-Davies, Tom T; McGowan, Sheena; Bertling, Philip J; Groenewegen, David; Quenette, Stevan M; Bond, Charles S; Buckle, Ashley M; Androulakis, Steve
2014-10-01
The Store.Synchrotron service, a fully functional, cloud computing-based solution to raw X-ray data archiving and dissemination at the Australian Synchrotron, is described. The service automatically receives and archives raw diffraction data, related metadata and preliminary results of automated data-processing workflows. Data are able to be shared with collaborators and opened to the public. In the nine months since its deployment in August 2013, the service has handled over 22.4 TB of raw data (∼1.7 million diffraction images). Several real examples from the Australian crystallographic community are described that illustrate the advantages of the approach, which include real-time online data access and fully redundant, secure storage. Discoveries in biological sciences increasingly require multidisciplinary approaches. With this in mind, Store.Synchrotron has been developed as a component within a greater service that can combine data from other instruments at the Australian Synchrotron, as well as instruments at the Australian neutron source ANSTO. It is therefore envisaged that this will serve as a model implementation of raw data archiving and dissemination within the structural biology research community.
Sadeghi, Zahra
2016-09-01
In this paper, I investigate conceptual categories derived from developmental processing in a deep neural network. The similarity matrices of the deep representation at each layer of the neural network are computed and compared with those of the raw representation. While the clusters generated from the raw representation stand at the basic level of abstraction, the conceptual categories obtained from the deep representation show a bottom-up transition procedure. Results demonstrate a developmental course of learning from a specific to a general level of abstraction through the learned layers of representation in a deep belief network. © The Author(s) 2016.
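A generic sketch of the comparison step described above: build pairwise similarity matrices for the raw input and for a layer representation and quantify how much their structure agrees. Random arrays stand in for real data and for trained deep-belief-network activations, so the printed number is meaningless except as a demonstration of the computation.

```python
# Generic sketch of the comparison step: compute the pairwise cosine-similarity
# matrix of the raw input and of a layer representation, then measure how much
# the two similarity structures agree.  Random arrays stand in for real data
# and for the activations of a trained deep belief network.
import numpy as np

rng = np.random.default_rng(0)
raw = rng.normal(size=(100, 784))                            # 100 samples of raw input
layer = np.tanh(raw @ rng.normal(size=(784, 128)) * 0.05)    # stand-in for layer activations

def cosine_similarity_matrix(x):
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)
    return xn @ xn.T

sim_raw = cosine_similarity_matrix(raw)
sim_layer = cosine_similarity_matrix(layer)

# correlation between the two similarity structures (upper triangles only)
iu = np.triu_indices(100, k=1)
agreement = np.corrcoef(sim_raw[iu], sim_layer[iu])[0, 1]
print(f"similarity-structure correlation: {agreement:.3f}")
```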
NASA Technical Reports Server (NTRS)
Begni, G.; BOISSIN; Desachy, M. J.; PERBOS
1984-01-01
The geometric accuracy of LANDSAT TM raw data of Toulouse (France), raw data of Mississippi, and preprocessed data of Mississippi was examined using a CDC computer. Analog images were restituted on the VIZIR SEP device. The methods used for line-to-line and band-to-band registration are based on automatic correlation techniques and are widely used in automated image-to-image registration at CNES. Causes of intraband and interband misregistration are identified, and statistics are given for both line-to-line and band-to-band misregistration.
NASA Astrophysics Data System (ADS)
Korshunov, G. I.; Afanasev, P. I.; Bulbasheva, I. A.
2017-10-01
Monitoring and survey results for drilling and blasting operations carried out during the development of the Afanasyevsky cement raw materials deposit are presented with respect to a 110 kV power transmission line structure. Seismic explosion waves and air shock waves were registered in the course of monitoring. The dependence of peak particle velocity on the scaled distance and on the explosive weight per delay was obtained.
Connectivity and control in the year 2000 and beyond.
Nolan, R L; Brennan, J; Coyne, K P; Spong, S; Spar, J; Strauss, N; Milan, T; Speight, D; Tedlow, R S; Gillotti, D; Yardeni, E; Block, D J; Radin, S A; Sheinheit, S; Robbins, B
1998-01-01
By now, most executives are familiar with the famous Year 2000 problem--and many believe that their companies have the situation well in hand. After all, it seems to be such a trivial problem--computer software that interprets "00" to be the year 1900 instead of the year 2000. And yet armies of computer professionals have been working on it--updating code in payroll systems, distribution systems, actuarial systems, sales-tracking systems, and the like. The problem is pervasive. Not only is it in your systems, it's in your suppliers' systems, your bankers' systems, and your customers' systems. It's embedded in chips that control elevators, automated teller machines, process-control equipment, and power grids. Already, a dried-food manufacturer destroyed millions of dollars of perfectly good product when a computer counted inventory marked with an expiration date of "00" as nearly a hundred years old. And when managers of a sewage-control plant turned the clock to January 1, 2000 on a computer system they thought had been fixed, raw sewage pumped directly into the harbor. It has become apparent that there will not be enough time to find and fix all of the problems by January 1, 2000. And what good will it do if your computers work but they're connected with systems that don't? That is one of the questions Harvard Business School professor Richard Nolan asks in his introduction to HBR's Perspectives on the Year 2000 issue. How will you prepare your organization to respond when things start to go wrong? Fourteen commentators offer their ideas on how senior managers should think about connectivity and control in the year 2000 and beyond.
3 CFR 9056 - Proclamation 9056 of November 8, 2013. World Freedom Day, 2013
Code of Federal Regulations, 2014 CFR
2014-01-01
... gave way to new democracies. On World Freedom Day, we remember that for all the raw power of... those still struggling to throw off the weight of oppression and embrace a brighter day. NOW, THEREFORE...
de Araujo Furtado, Marcio; Zheng, Andy; Sedigh-Sarvestani, Madineh; Lumley, Lucille; Lichtenstein, Spencer; Yourick, Debra
2009-10-30
The organophosphorous compound soman is an acetylcholinesterase inhibitor that causes damage to the brain. Exposure to soman causes neuropathology as a result of prolonged and recurrent seizures. In the present study, long-term recordings of cortical EEG were used to develop an unbiased means to quantify measures of seizure activity in a large data set while excluding other signal types. Rats were implanted with telemetry transmitters and exposed to soman followed by treatment with therapeutics similar to those administered in the field after nerve agent exposure. EEG, activity and temperature were recorded continuously for a minimum of 2 days pre-exposure and 15 days post-exposure. A set of automatic MATLAB algorithms have been developed to remove artifacts and measure the characteristics of long-term EEG recordings. The algorithms use short-time Fourier transforms to compute the power spectrum of the signal for 2-s intervals. The spectrum is then divided into the delta, theta, alpha, and beta frequency bands. A linear fit to the power spectrum is used to distinguish normal EEG activity from artifacts and high amplitude spike wave activity. Changes in time spent in seizure over a prolonged period are a powerful indicator of the effects of novel therapeutics against seizures. A graphical user interface has been created that simultaneously plots the raw EEG in the time domain, the power spectrum, and the wavelet transform. Motor activity and temperature are associated with EEG changes. The accuracy of this algorithm is also verified against visual inspection of video recordings up to 3 days after exposure.
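A minimal sketch of the 2-s spectral step described above: a Welch power spectrum per window, band powers for the delta/theta/alpha/beta bands, and a linear fit to the spectrum whose slope could be screened against a calibrated range to reject artifacts and high-amplitude spike-wave activity. The synthetic signal, sampling rate, band edges and the idea of a fixed threshold are assumptions, not the study's values.

```python
# Minimal sketch of the 2-s spectral analysis described above: per-window power
# spectrum, band powers (delta/theta/alpha/beta), and a linear fit to the
# spectrum as a crude screen for artifacts and spike-wave activity.
import numpy as np
from scipy.signal import welch

fs = 256                                            # Hz (assumed sampling rate)
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

rng = np.random.default_rng(0)
n = fs * 60                                         # one minute of synthetic "EEG"
eeg = rng.normal(size=n) + 0.5 * np.sin(2 * np.pi * 6 * np.arange(n) / fs)

for start in range(0, n - 2 * fs + 1, 2 * fs):      # non-overlapping 2-s windows
    seg = eeg[start:start + 2 * fs]
    f, pxx = welch(seg, fs=fs, nperseg=2 * fs)
    power = {name: np.trapz(pxx[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
             for name, (lo, hi) in bands.items()}
    keep = (f >= 0.5) & (f <= 30)
    slope, _ = np.polyfit(f[keep], pxx[keep], 1)    # linear fit to the power spectrum
    # windows whose slope falls outside a calibrated normal range would be flagged
    if start == 0:
        print({k: round(v, 4) for k, v in power.items()}, "slope:", round(slope, 6))
```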
Activity recognition using a single accelerometer placed at the wrist or ankle.
Mannini, Andrea; Intille, Stephen S; Rosenberger, Mary; Sabatini, Angelo M; Haskell, William
2013-11-01
Large physical activity surveillance projects such as the UK Biobank and NHANES are using wrist-worn accelerometer-based activity monitors that collect raw data. The goal is to increase wear time by asking subjects to wear the monitors on the wrist instead of the hip, and then to use information in the raw signal to improve activity type and intensity estimation. The purpose of this work was to obtain an algorithm to process wrist and ankle raw data and to classify behavior into four broad activity classes: ambulation, cycling, sedentary, and other activities. Participants (N = 33) wearing accelerometers on the wrist and ankle performed 26 daily activities. The accelerometer data were collected, cleaned, and preprocessed to extract features that characterize 2-, 4-, and 12.8-s data windows. Feature vectors encoding information about frequency and intensity of motion extracted from analysis of the raw signal were used with a support vector machine classifier to identify a subject's activity. Results were compared with categories classified by a human observer. Algorithms were validated using a leave-one-subject-out strategy. The computational complexity of each processing step was also evaluated. With 12.8-s windows, the proposed strategy showed high classification accuracies for ankle data (95.0%) that decreased to 84.7% for wrist data. Shorter (4 s) windows only minimally decreased the performance of the algorithm on the wrist, to 84.2%. A classification algorithm using 13 features shows good classification into the four classes given the complexity of the activities in the original data set. The algorithm is computationally efficient and could be implemented in real time on mobile devices with only 4-s latency.
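A simplified sketch of the window/feature/SVM pipeline described above, using synthetic accelerometer windows and only four features (the study used 13 features, 26 activities and leave-one-subject-out validation). All signal parameters, class stand-ins and labels here are assumptions.

```python
# Sketch of the windowing + feature + SVM pipeline described above, on synthetic
# accelerometer data; window length, features and class models are simplified assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs, win = 90, 4 * 90                       # 90 Hz sampling, 4-s windows (assumed)
rng = np.random.default_rng(0)

def make_windows(freq_hz, intensity, n, label):
    t = np.arange(win) / fs
    X = [intensity * np.sin(2 * np.pi * freq_hz * t + rng.uniform(0, 2 * np.pi))
         + rng.normal(0, 0.05, win) for _ in range(n)]
    return np.array(X), np.full(n, label)

# crude stand-ins: ambulation ~2 Hz, cycling ~1 Hz, sedentary ~ no motion
Xa, ya = make_windows(2.0, 1.0, 60, 0)
Xc, yc = make_windows(1.0, 0.6, 60, 1)
Xs, ys = make_windows(0.0, 0.0, 60, 2)
signals = np.vstack([Xa, Xc, Xs])
labels = np.concatenate([ya, yc, ys])

def features(w):
    """Intensity and frequency features for one window."""
    spec = np.abs(np.fft.rfft(w - w.mean()))
    dom = np.fft.rfftfreq(w.size, 1 / fs)[np.argmax(spec)]
    return [w.mean(), w.std(), np.percentile(np.abs(w), 90), dom]

X = np.array([features(w) for w in signals])
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```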
Profiling an application for power consumption during execution on a compute node
Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E
2013-09-17
Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
Real-time track-less Cherenkov ring fitting trigger system based on Graphics Processing Units
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Gianoli, A.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.
2017-12-01
The parallel computing power of commercial Graphics Processing Units (GPUs) is exploited to perform real-time ring fitting at the lowest trigger level using information coming from the Ring Imaging Cherenkov (RICH) detector of the NA62 experiment at CERN. To this purpose, direct GPU communication with a custom FPGA-based board has been used to reduce the data transmission latency. The GPU-based trigger system is currently integrated in the experimental setup of the RICH detector of the NA62 experiment, in order to reconstruct ring-shaped hit patterns. The ring-fitting algorithm running on GPU is fed with raw RICH data only, with no information coming from other detectors, and is able to provide more complex trigger primitives with respect to the simple photodetector hit multiplicity, resulting in a higher selection efficiency. The performance of the system for multi-ring Cherenkov online reconstruction obtained during the NA62 physics run is presented.
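The trigger performs this kind of fit massively in parallel on the GPU; as a CPU-side illustration of the underlying mathematics only, here is an algebraic (Kasa-style) least-squares circle fit applied to synthetic hit coordinates.

```python
# CPU-side illustration of the underlying maths (the trigger runs this kind of
# fit massively in parallel on GPUs): algebraic least-squares circle fit to
# synthetic Cherenkov hit coordinates.
import numpy as np

def fit_ring(x, y):
    """Kasa-style algebraic circle fit: solve a*x + b*y + c = x^2 + y^2."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = a / 2, b / 2
    radius = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, radius

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 20)                # 20 photodetector hits on one ring
x = 12.0 + 9.0 * np.cos(theta) + rng.normal(0, 0.3, 20)
y = -5.0 + 9.0 * np.sin(theta) + rng.normal(0, 0.3, 20)
print("centre and radius:", fit_ring(x, y))          # approximately (12, -5, 9)
```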
Software for the First Station of the Long Wavelength Array
NASA Astrophysics Data System (ADS)
Dowell, J.; LWA Collaboration
2014-05-01
The first station of the Long Wavelength Array, LWA1, is currently operating at frequencies between 10 and 88 MHz in the Southwest United States. LWA1 consists of 256 cross-polarization dipole pairs spread over a 100 m aperture with five total-power outriggers up to ˜500 m from the center of the station. The raw voltages from the antennas are digitized and digitally combined to form four independent dual polarization beams, each with two tunings with up to 19.6 MHz of bandwidth. The telescope is designed to be a general-purpose instrument and supports a wide variety of science projects from the ionosphere to the cosmic dark ages. I will present the software behind this telescope and discuss the challenges associated with calibrating and maintaining an array of 261 dipoles. I will also discuss some of the challenges of handling the large data volume that LWA1 produces and how the LWA User Computing Facility helps address those problems.
Combining multiple features for color texture classification
NASA Astrophysics Data System (ADS)
Cusano, Claudio; Napoletano, Paolo; Schettini, Raimondo
2016-11-01
The analysis of color and texture has a long history in image analysis and computer vision. These two properties are often considered as independent, even though they are strongly related in images of natural objects and materials. Correlation between color and texture information is especially relevant in the case of variable illumination, a condition that has a crucial impact on the effectiveness of most visual descriptors. We propose an ensemble of hand-crafted image descriptors designed to capture different aspects of color textures. We show that the use of these descriptors in a multiple classifiers framework makes it possible to achieve a very high classification accuracy in classifying texture images acquired under different lighting conditions. A powerful alternative to hand-crafted descriptors is represented by features obtained with deep learning methods. We also show how, with the proposed combining strategy, hand-crafted and convolutional neural network features can be used together to further improve the classification accuracy. Experimental results on a food database (raw food texture) demonstrate the effectiveness of the proposed strategy.
Liu, Da; Xu, Ming; Niu, Dongxiao; Wang, Shoukai; Liang, Sai
2016-01-01
Traditional forecasting models fit a function approximation from independent variables to dependent variables. However, they usually get into trouble when data are presented in various formats, such as text, voice and image. This study proposes a novel image-encoded forecasting method in which input and output binary digital two-dimensional (2D) images are transformed from decimal data. Omitting any data analysis or cleansing steps for simplicity, all raw variables were selected and converted to binary digital images as the input of a deep learning model, a convolutional neural network (CNN). Using shared weights, pooling and multiple-layer back-propagation techniques, the CNN was adopted to locate the nexus among variations in local binary digital images. Due to the computing capability that was originally developed for binary digital bitmap manipulation, this model has significant potential for forecasting with vast volumes of data. The model was validated by a power load forecasting dataset from the Global Energy Forecasting Competition 2012.
Efficient visualization of high-throughput targeted proteomics experiments: TAPIR.
Röst, Hannes L; Rosenberger, George; Aebersold, Ruedi; Malmström, Lars
2015-07-15
Targeted mass spectrometry comprises a set of powerful methods to obtain accurate and consistent protein quantification in complex samples. To fully exploit these techniques, a cross-platform and open-source software stack based on standardized data exchange formats is required. We present TAPIR, a fast and efficient Python visualization software for chromatograms and peaks identified in targeted proteomics experiments. The input formats are open, community-driven standardized data formats (mzML for raw data storage and TraML encoding the hierarchical relationships between transitions, peptides and proteins). TAPIR is scalable to proteome-wide targeted proteomics studies (as enabled by SWATH-MS), allowing researchers to visualize high-throughput datasets. The framework integrates well with existing automated analysis pipelines and can be extended beyond targeted proteomics to other types of analyses. TAPIR is available for all computing platforms under the 3-clause BSD license at https://github.com/msproteomicstools/msproteomicstools. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Xu, Ming; Niu, Dongxiao; Wang, Shoukai; Liang, Sai
2016-01-01
Traditional forecasting models fit a function approximation from independent variables to dependent variables. However, they usually get into trouble when data are presented in various formats, such as text, voice and image. This study proposes a novel image-encoded forecasting method in which the input and output are binary two-dimensional (2D) digital images transformed from decimal data. Omitting any data analysis or cleansing steps for simplicity, all raw variables were selected and converted to binary digital images as the input of a deep learning model, a convolutional neural network (CNN). Using shared weights, pooling and multiple-layer back-propagation techniques, the CNN was adopted to locate the nexus among variations in local binary digital images. Owing to the computing capability that was originally developed for binary digital bitmap manipulation, this model has significant potential for forecasting with vast volumes of data. The model was validated on a power load forecasting dataset from the Global Energy Forecasting Competition 2012. PMID:27281032
Huang, Rao; Lo, Li-Ta; Wen, Yuhua; Voter, Arthur F; Perez, Danny
2017-10-21
Modern molecular-dynamics-based techniques are extremely powerful to investigate the dynamical evolution of materials. With the increase in sophistication of the simulation techniques and the ubiquity of massively parallel computing platforms, atomistic simulations now generate very large amounts of data, which have to be carefully analyzed in order to reveal key features of the underlying trajectories, including the nature and characteristics of the relevant reaction pathways. We show that clustering algorithms, such as the Perron Cluster Cluster Analysis, can provide reduced representations that greatly facilitate the interpretation of complex trajectories. To illustrate this point, clustering tools are used to identify the key kinetic steps in complex accelerated molecular dynamics trajectories exhibiting shape fluctuations in Pt nanoclusters. This analysis provides an easily interpretable coarse representation of the reaction pathways in terms of a handful of clusters, in contrast to the raw trajectory that contains thousands of unique states and tens of thousands of transitions.
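Dedicated Markov-state-model packages provide PCCA+ implementations; the following minimal Python sketch instead uses k-means on the dominant eigenvectors of an empirical transition matrix as a simple stand-in for that clustering step, with the toy trajectory and cluster count chosen purely for illustration.

import numpy as np
from sklearn.cluster import KMeans

def coarse_grain_states(state_traj, n_clusters=4):
    """Group discrete states of a trajectory into metastable clusters.

    Builds a row-normalized transition matrix from the state sequence and
    clusters states with k-means on the dominant eigenvectors -- a simple
    stand-in for Perron Cluster Cluster Analysis (PCCA+).
    """
    states = np.unique(state_traj)
    index = {s: i for i, s in enumerate(states)}
    n = len(states)
    counts = np.zeros((n, n))
    for a, b in zip(state_traj[:-1], state_traj[1:]):
        counts[index[a], index[b]] += 1
    T = (counts + 1e-9) / (counts + 1e-9).sum(axis=1, keepdims=True)
    # right eigenvectors associated with the largest eigenvalues
    vals, vecs = np.linalg.eig(T)
    order = np.argsort(-vals.real)
    coords = vecs[:, order[:n_clusters]].real
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(coords)
    return dict(zip(states, labels))

# toy trajectory with two weakly connected groups of states
rng = np.random.default_rng(1)
traj = np.concatenate([rng.integers(0, 3, 500), rng.integers(3, 6, 500)])
print(coarse_grain_states(traj, n_clusters=2))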
NASA Astrophysics Data System (ADS)
Huang, Rao; Lo, Li-Ta; Wen, Yuhua; Voter, Arthur F.; Perez, Danny
2017-10-01
Modern molecular-dynamics-based techniques are extremely powerful to investigate the dynamical evolution of materials. With the increase in sophistication of the simulation techniques and the ubiquity of massively parallel computing platforms, atomistic simulations now generate very large amounts of data, which have to be carefully analyzed in order to reveal key features of the underlying trajectories, including the nature and characteristics of the relevant reaction pathways. We show that clustering algorithms, such as the Perron Cluster Cluster Analysis, can provide reduced representations that greatly facilitate the interpretation of complex trajectories. To illustrate this point, clustering tools are used to identify the key kinetic steps in complex accelerated molecular dynamics trajectories exhibiting shape fluctuations in Pt nanoclusters. This analysis provides an easily interpretable coarse representation of the reaction pathways in terms of a handful of clusters, in contrast to the raw trajectory that contains thousands of unique states and tens of thousands of transitions.
Minimizing data transfer with sustained performance in wireless brain-machine interfaces
NASA Astrophysics Data System (ADS)
Thor Thorbergsson, Palmi; Garwicz, Martin; Schouenborg, Jens; Johansson, Anders J.
2012-06-01
Brain-machine interfaces (BMIs) may be used to investigate neural mechanisms or to treat the symptoms of neurological disease and are hence powerful tools in research and clinical practice. Wireless BMIs add flexibility to both types of applications by reducing movement restrictions and risks associated with transcutaneous leads. However, since wireless implementations are typically limited in terms of transmission capacity and energy resources, the major challenge faced by their designers is to combine high performance with adaptations to limited resources. Here, we have identified three key steps in dealing with this challenge: (1) the purpose of the BMI should be clearly specified with regard to the type of information to be processed; (2) the amount of raw input data needed to fulfill the purpose should be determined, in order to avoid over- or under-dimensioning of the design; and (3) processing tasks should be allocated among the system parts such that all of them are utilized optimally with respect to computational power, wireless link capacity and raw input data requirements. We have focused on step (2) under the assumption that the purpose of the BMI (step 1) is to assess single- or multi-unit neuronal activity in the central nervous system with single-channel extracellular recordings. The reliability of this assessment depends on performance in detection and sorting of spikes. We have therefore performed absolute threshold spike detection and spike sorting with principal component analysis and fuzzy c-means on a set of synthetic extracellular recordings, while varying the sampling rate and resolution, noise level and number of target units, and used the known ground truth to quantitatively estimate the performance. From the calculated performance curves, we have identified the sampling rate and resolution breakpoints, beyond which performance is not expected to increase by more than 1-5%. We have then estimated the performance of alternative algorithms for spike detection and spike sorting in order to examine the generalizability of our results to other algorithms. Our findings indicate that the minimization of recording noise is the primary factor to consider in the design process. In most cases, there are breakpoints for sampling rates and resolution that provide guidelines for BMI designers in terms of the minimum amount of raw input data that guarantees sustained performance. Such guidelines are essential during system dimensioning. Based on these findings we conclude by presenting a quantitative task-allocation scheme that can be followed to achieve optimal utilization of available resources.
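A minimal Python sketch of the analysis chain discussed above (absolute-threshold detection, PCA, fuzzy c-means) is given below; the synthetic spike shapes, threshold rule and window length are illustrative assumptions rather than the paper's settings.

import numpy as np

def detect_spikes(signal, thresh, win=32):
    """Absolute-threshold spike detection; returns aligned waveform windows."""
    idx = np.where(np.abs(signal) > thresh)[0]
    if len(idx) == 0:
        return np.empty((0, win))
    # keep only the first crossing of each event (simple refractory rule)
    idx = idx[np.insert(np.diff(idx) > win, 0, True)]
    idx = idx[(idx > win // 2) & (idx < len(signal) - win // 2)]
    return np.stack([signal[i - win // 2:i + win // 2] for i in idx])

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means returning memberships and cluster centres."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c)); U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    return U, centres

# synthetic recording: Gaussian noise plus two stereotyped spike shapes
rng = np.random.default_rng(2)
sig = 0.1 * rng.standard_normal(20000)
t1 = np.exp(-0.5 * ((np.arange(32) - 10) / 3.0) ** 2)         # unit 1
t2 = -0.8 * np.exp(-0.5 * ((np.arange(32) - 14) / 4.0) ** 2)   # unit 2
for j, t in enumerate(rng.choice(np.arange(100, 19900, 200), size=80, replace=False)):
    sig[t:t + 32] += t1 if j % 2 == 0 else t2

waves = detect_spikes(sig, thresh=4 * np.median(np.abs(sig)) / 0.6745)
if len(waves) >= 2:
    # project waveforms onto the first two principal components, then cluster
    X = waves - waves.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:2].T
    U, _ = fuzzy_cmeans(scores, c=2)   # c set to the expected number of units
    labels = U.argmax(axis=1)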
Minimizing data transfer with sustained performance in wireless brain-machine interfaces.
Thorbergsson, Palmi Thor; Garwicz, Martin; Schouenborg, Jens; Johansson, Anders J
2012-06-01
Brain-machine interfaces (BMIs) may be used to investigate neural mechanisms or to treat the symptoms of neurological disease and are hence powerful tools in research and clinical practice. Wireless BMIs add flexibility to both types of applications by reducing movement restrictions and risks associated with transcutaneous leads. However, since wireless implementations are typically limited in terms of transmission capacity and energy resources, the major challenge faced by their designers is to combine high performance with adaptations to limited resources. Here, we have identified three key steps in dealing with this challenge: (1) the purpose of the BMI should be clearly specified with regard to the type of information to be processed; (2) the amount of raw input data needed to fulfill the purpose should be determined, in order to avoid over- or under-dimensioning of the design; and (3) processing tasks should be allocated among the system parts such that all of them are utilized optimally with respect to computational power, wireless link capacity and raw input data requirements. We have focused on step (2) under the assumption that the purpose of the BMI (step 1) is to assess single- or multi-unit neuronal activity in the central nervous system with single-channel extracellular recordings. The reliability of this assessment depends on performance in detection and sorting of spikes. We have therefore performed absolute threshold spike detection and spike sorting with principal component analysis and fuzzy c-means on a set of synthetic extracellular recordings, while varying the sampling rate and resolution, noise level and number of target units, and used the known ground truth to quantitatively estimate the performance. From the calculated performance curves, we have identified the sampling rate and resolution breakpoints, beyond which performance is not expected to increase by more than 1-5%. We have then estimated the performance of alternative algorithms for spike detection and spike sorting in order to examine the generalizability of our results to other algorithms. Our findings indicate that the minimization of recording noise is the primary factor to consider in the design process. In most cases, there are breakpoints for sampling rates and resolution that provide guidelines for BMI designers in terms of the minimum amount of raw input data that guarantees sustained performance. Such guidelines are essential during system dimensioning. Based on these findings we conclude by presenting a quantitative task-allocation scheme that can be followed to achieve optimal utilization of available resources.
Preprocessing of gene expression data by optimally robust estimators
2010-01-01
Background The preprocessing of gene expression data obtained from several platforms routinely includes the aggregation of multiple raw signal intensities to one expression value. Examples are the computation of a single expression measure based on the perfect match (PM) and mismatch (MM) probes for the Affymetrix technology, the summarization of bead level values to bead summary values for the Illumina technology or the aggregation of replicated measurements in the case of other technologies including real-time quantitative polymerase chain reaction (RT-qPCR) platforms. The summarization of technical replicates is also performed in other "-omics" disciplines like proteomics or metabolomics. Preprocessing methods like MAS 5.0, Illumina's default summarization method, RMA, or VSN show that the use of robust estimators is widely accepted in gene expression analysis. However, the selection of robust methods seems to be mainly driven by their high breakdown point and not by efficiency. Results We describe how optimally robust radius-minimax (rmx) estimators, i.e. estimators that minimize an asymptotic maximum risk on shrinking neighborhoods about an ideal model, can be used for the aggregation of multiple raw signal intensities to one expression value for Affymetrix and Illumina data. With regard to the Affymetrix data, we have implemented an algorithm which is a variant of MAS 5.0. Using datasets from the literature and Monte-Carlo simulations we provide some reasoning for assuming approximate log-normal distributions of the raw signal intensities by means of the Kolmogorov distance, at least for the discussed datasets, and compare the results of our preprocessing algorithms with the results of Affymetrix's MAS 5.0 and Illumina's default method. The numerical results indicate that when using rmx estimators an accuracy improvement of about 10-20% is obtained compared to Affymetrix's MAS 5.0 and about 1-5% compared to Illumina's default method. The improvement is also visible in the analysis of technical replicates where the reproducibility of the values (in terms of Pearson and Spearman correlation) is increased for all Affymetrix and almost all Illumina examples considered. Our algorithms are implemented in the R package named RobLoxBioC which is publicly available via CRAN, The Comprehensive R Archive Network (http://cran.r-project.org/web/packages/RobLoxBioC/). Conclusions Optimally robust rmx estimators have a high breakdown point and are computationally feasible. They can lead to a considerable gain in efficiency for well-established bioinformatics procedures and thus, can increase the reproducibility and power of subsequent statistical analysis. PMID:21118506
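The rmx estimators themselves are provided by the cited R package RobLoxBioC; purely to illustrate robust aggregation of replicate log-intensities in a self-contained way, the sketch below uses a Huber M-estimator of location, which is a simple stand-in and not the radius-minimax estimator of the paper.

import numpy as np

def huber_location(x, k=1.345, iters=50):
    """Huber M-estimate of location with a MAD scale estimate.

    Illustrates robust aggregation of replicate log2 intensities; it is a
    plain stand-in, not the radius-minimax (rmx) estimator of the paper.
    """
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    s = 1.4826 * np.median(np.abs(x - mu)) + 1e-12
    for _ in range(iters):
        r = (x - mu) / s
        w = np.where(np.abs(r) <= k, 1.0, k / np.maximum(np.abs(r), 1e-12))
        mu = np.sum(w * x) / np.sum(w)
    return mu

# aggregate replicated probe intensities (one outlier present)
intensities = np.array([812.0, 790.0, 805.0, 4100.0, 798.0])
expression = huber_location(np.log2(intensities))
print(2 ** expression)       # robust summary on the raw intensity scale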
1978-08-01
accepts piping geometry as one of its basic inputs; whether this geometry comes from arrangement drawings or models is of no real consequence. ... computer. Geometric data is taken from the catalogue and automatically merged with the piping geometry data. Also, fitting orientation is automatically ... systems require a number of data manipulation routines to convert raw digitized data into logical pipe geometry acceptable to a computer-aided piping design
Cloud computing for genomic data analysis and collaboration.
Langmead, Ben; Nellore, Abhinav
2018-04-01
Next-generation sequencing has made major strides in the past decade. Studies based on large sequencing data sets are growing in number, and public archives for raw sequencing data have been doubling in size every 18 months. Leveraging these data requires researchers to use large-scale computational resources. Cloud computing, a model whereby users rent computers and storage from large data centres, is a solution that is gaining traction in genomics research. Here, we describe how cloud computing is used in genomics for research and large-scale collaborations, and argue that its elasticity, reproducibility and privacy features make it ideally suited for the large-scale reanalysis of publicly available archived data, including privacy-protected data.
NASA Astrophysics Data System (ADS)
Smuga-Otto, M. J.; Garcia, R. K.; Knuteson, R. O.; Martin, G. D.; Flynn, B. M.; Hackel, D.
2006-12-01
The University of Wisconsin-Madison Space Science and Engineering Center (UW-SSEC) is developing tools to help scientists realize the potential of high spectral resolution instruments for atmospheric science. Upcoming satellite spectrometers like the Cross-track Infrared Sounder (CrIS), experimental instruments like the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) and proposed instruments like the Hyperspectral Environmental Suite (HES) within the GOES-R project will present a challenge in the form of the overwhelmingly large amounts of continuously generated data. Current and near-future workstations will have neither the storage space nor computational capacity to cope with raw spectral data spanning more than a few minutes of observations from these instruments. Schemes exist for processing raw data from hyperspectral instruments currently in testing that involve distributed computation across clusters. Data, which for an instrument like GIFTS can amount to over 1.5 Terabytes per day, is carefully managed on Storage Area Networks (SANs), with attention paid to proper maintenance of associated metadata. The UW-SSEC is preparing a demonstration integrating these back-end capabilities as part of a larger visualization framework, to assist scientists in developing new products from high spectral resolution data, sourcing data volumes they could not otherwise manage. This demonstration focuses on managing storage so that only the data specifically needed for the desired product are pulled from the SAN, and on running computationally expensive intermediate processing on a back-end cluster, with the final product being sent to a visualization system on the scientist's workstation. Where possible, existing software and solutions are used to reduce cost of development. The heart of the computing component is the GIFTS Information Processing System (GIPS), developed at the UW-SSEC to allow distribution of processing tasks such as conversion of raw GIFTS interferograms into calibrated radiance spectra, and retrieving temperature and water vapor content atmospheric profiles from these spectra. The hope is that by demonstrating the capabilities afforded by a composite system like the one described here, scientists can be convinced to contribute further algorithms in support of this model of computing and visualization.
NASA Astrophysics Data System (ADS)
Abdeljaber, Osama; Avci, Onur; Kiranyaz, Serkan; Gabbouj, Moncef; Inman, Daniel J.
2017-02-01
Structural health monitoring (SHM) and vibration-based structural damage detection have been of continuing interest to civil, mechanical and aerospace engineers for decades. Early and meticulous damage detection has always been one of the principal objectives of SHM applications. The performance of a classical damage detection system predominantly depends on the choice of the features and the classifier. Fixed, hand-crafted features may be a sub-optimal choice for a particular structure or fail to achieve the same level of performance on another structure, and they usually require significant computational power, which may hinder their use for real-time structural damage detection. This paper presents a novel, fast and accurate structural damage detection system using 1D Convolutional Neural Networks (CNNs) that has an inherent adaptive design to fuse both feature extraction and classification blocks into a single and compact learning body. The proposed method performs vibration-based damage detection and localization of the damage in real-time. The advantage of this approach is its ability to extract optimal damage-sensitive features automatically from the raw acceleration signals. Large-scale experiments conducted on a grandstand simulator revealed an outstanding performance and verified the computational efficiency of the proposed real-time damage detection method.
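A compact PyTorch sketch of a 1D CNN of this kind is shown below; the layer sizes, frame length and two-class output are assumptions for illustration, not the architecture reported in the paper.

import torch
import torch.nn as nn

class DamageCNN1D(nn.Module):
    """Compact 1D CNN mapping a raw acceleration frame to a damage class."""
    def __init__(self, frame_len=1024, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (frame_len // 16), 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):            # x: (batch, 1, frame_len)
        return self.classifier(self.features(x))

model = DamageCNN1D()
frame = torch.randn(8, 1, 1024)      # batch of raw acceleration frames
logits = model(frame)                # (8, 2): e.g. healthy vs. damaged scores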
Luo, Ruibang; Wong, Yiu-Lun; Law, Wai-Chun; Lee, Lap-Kei; Cheung, Jeanno; Liu, Chi-Man; Lam, Tak-Wah
2014-01-01
This paper reports an integrated solution, called BALSA, for the secondary analysis of next generation sequencing data; it exploits the computational power of a GPU and intricate memory management to give a fast and accurate analysis. From raw reads to variants (including SNPs and Indels), BALSA, using just a single computing node with a commodity GPU board, takes 5.5 h to process 50-fold whole genome sequencing (∼750 million 100 bp paired-end reads), or just 25 min for 210-fold whole exome sequencing. BALSA's speed is rooted in its parallel algorithms, which effectively exploit a GPU to speed up processes like alignment, realignment and statistical testing. BALSA incorporates a 16-genotype model to support the calling of SNPs and Indels and achieves competitive variant calling accuracy and sensitivity when compared to the ensemble of six popular variant callers. BALSA also supports efficient identification of somatic SNVs and CNVs; experiments showed that BALSA recovers all the previously validated somatic SNVs and CNVs, and it is more sensitive for somatic Indel detection. BALSA outputs variants in VCF format. A pileup-like SNAPSHOT format, while maintaining the same fidelity as BAM in variant calling, enables efficient storage and indexing, and facilitates the development of downstream analysis applications. BALSA is available at: http://sourceforge.net/p/balsa.
Development and testing of tip devices for horizontal axis wind turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gyatt, G.W.; Lissaman, P.B.S.
1985-05-01
A theoretical and field experimental program has been carried out to investigate the use of tip devices on horizontal axis wind turbine rotors. The objective was to improve performance by the reduction of tip losses. A vortex lattice computer model was used to optimize three basic tip configuration types for a 25 kW stall-limited commercial wind turbine. The types were a change in tip planform, and single-element and double-element nonplanar tip extensions (winglets). Approximately 270 h of performance data were collected over a three-month period. The sampling interval was 2.4 s; thus over 400,000 raw data points were logged. Results for each of the three new tip devices, compared with the original tip, showed a small decrease (of the order of 1 kW) in power output over the measured range of wind speeds from cut-in at about 4 m/s to over 20 m/s, well into the stall limiting region. For aircraft wing tip devices, favorable tip shapes have been reported and it is likely that the tip devices tested in this program did not improve rotor performance because they were not optimally adjusted. The computer model used does not have adequate lifting surface resolution or accuracy to design these small winglet extensions.
Profiling an application for power consumption during execution on a plurality of compute nodes
Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.
2012-08-21
Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
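Purely as an illustration of combining a hardware power consumption profile with per-operation counts observed for an application (the structure, operation classes and energy figures below are invented for this sketch and do not reflect the patented method):

# hypothetical per-operation energy costs for one compute node (joules per op)
hardware_profile = {
    "flop":       0.5e-9,
    "mem_access": 1.2e-9,
    "msg_send":   3.0e-6,
}

# operation counts observed while the application runs on that node
app_counts = {"flop": 4.0e9, "mem_access": 1.0e9, "msg_send": 2.0e4}

# application power consumption profile: energy attributed to each operation class
app_profile = {op: hardware_profile[op] * n for op, n in app_counts.items()}

runtime_s = 12.0
report = {
    "energy_J": sum(app_profile.values()),
    "avg_power_W": sum(app_profile.values()) / runtime_s,
    "breakdown": app_profile,
}
print(report)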
Blind multirigid retrospective motion correction of MR images.
Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard
2015-04-01
Physiological nonrigid motion is inevitable when imaging, e.g., abdominal viscera, and can lead to serious deterioration of the image quality. Prospective techniques for motion correction can handle only special types of nonrigid motion, as they only allow global correction. Retrospective methods developed so far need guidance from navigator sequences or external sensors. We propose a fully retrospective nonrigid motion correction scheme that only needs raw data as input. Our method is based on a forward model that describes the effects of nonrigid motion by partitioning the image into patches with locally rigid motion. Using this forward model, we construct an objective function that we can optimize with respect to both unknown motion parameters per patch and the underlying sharp image. We evaluate our method on both synthetic and real data in 2D and 3D. In vivo data was acquired using standard imaging sequences. The correction algorithm significantly improves the image quality. Our compute unified device architecture (CUDA)-enabled graphics processing unit implementation ensures feasible computation times. The presented technique is the first computationally feasible retrospective method that uses the raw data of standard imaging sequences, and allows correction of nonrigid motion without guidance from external motion sensors. © 2014 Wiley Periodicals, Inc.
Improving the Efficiency of Natural Raw Water Pretreatment at Thermal Power Stations
NASA Astrophysics Data System (ADS)
Dremicheva, E. S.
2018-02-01
In the treatment of make-up water for thermal power stations (TPS) and heat networks, raw water from surface water bodies is used. It contains organic and mineral pollutants in the form of particulates or colloids. Coagulation and flocculation are reagent methods for removing these pollutants from water. Chemicals are used to assist in the formation of large structured flakes that are removed easily from water. The Kuibyshev water reservoir was selected as the object of investigation. Basic physical and chemical properties of the raw water are presented. The application of various coagulating agents, their mixtures in different proportions, and flocculating agents for clarifying the Volga water was examined. The required dose of a coagulant or flocculant was determined based on test coagulation of the treated water. Aluminum sulfate and iron (III) chloride were used as coagulants, and Praestol 2500 (nonionic) as a flocculant. A method of enhancement of coagulation and flocculation by injecting air into the treated water is examined. The results of an experimental investigation of the effect of the water treatment method on water quality indices, such as alkalinity, pH, iron content, suspended material content, and permanganate value, are presented. It is demonstrated that joint use of iron- and aluminum-containing coagulation agents brings the coagulation conditions closer to the optimum ones. Aeration does not affect the coagulation process. The methods for supplying air to a clarifier are proposed for practical implementation.
A comparison of high-frequency cross-correlation measures
NASA Astrophysics Data System (ADS)
Precup, Ovidiu V.; Iori, Giulia
2004-12-01
On a high-frequency scale the time series are not homogeneous, therefore standard correlation measures cannot be directly applied to the raw data. There are two ways to deal with this problem. The time series can be homogenised through an interpolation method (An Introduction to High-Frequency Finance, Academic Press, NY, 2001) (linear or previous tick) and then the Pearson correlation statistic computed. Recently, methods that can handle raw non-synchronous time series have been developed (Int. J. Theor. Appl. Finance 6(1) (2003) 87; J. Empirical Finance 4 (1997) 259). This paper compares two traditional methods that use interpolation with an alternative method applied directly to the actual time series.
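The interpolation-based route can be sketched in a few lines of Python; the synthetic tick data, grid spacing and use of log returns below are illustrative assumptions.

import numpy as np

def previous_tick(t_obs, x_obs, grid):
    """Sample an inhomogeneous series on a regular grid using the last tick."""
    idx = np.searchsorted(t_obs, grid, side="right") - 1
    return x_obs[np.clip(idx, 0, None)]

rng = np.random.default_rng(0)
# two irregularly spaced price series (toy data)
t_a, t_b = np.sort(rng.uniform(0, 600, 400)), np.sort(rng.uniform(0, 600, 250))
p_a = 500 + np.cumsum(rng.standard_normal(400))
p_b = 500 + np.cumsum(rng.standard_normal(250))

grid = np.arange(0.0, 600.0, 5.0)          # 5-second homogeneous grid
a, b = previous_tick(t_a, p_a, grid), previous_tick(t_b, p_b, grid)
returns_a, returns_b = np.diff(np.log(a)), np.diff(np.log(b))
print(np.corrcoef(returns_a, returns_b)[0, 1])   # Pearson correlation of returns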
Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong
2011-02-21
X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers, and then, reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
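A minimal sketch of the weighting idea is given below; the filtered backprojection step is represented by a placeholder callable, and the set of powers is an arbitrary illustrative choice rather than the values used in the paper.

import numpy as np

def scatter_correct(raw_proj, prior_img, fbp, powers=(0.6, 0.8, 1.0, 1.2)):
    """Build basis images from powered projections and fit weights to a prior.

    `fbp` is any callable performing filtered backprojection on projection
    data; the weights minimise the difference to the low-scatter prior image.
    """
    basis = [fbp(raw_proj ** p) for p in powers]
    A = np.stack([b.ravel() for b in basis], axis=1)       # (n_pixels, n_basis)
    w, *_ = np.linalg.lstsq(A, prior_img.ravel(), rcond=None)
    corrected = sum(wi * bi for wi, bi in zip(w, basis))
    return corrected, w

# usage sketch with a dummy reconstruction operator standing in for real FBP
def dummy_fbp(proj):
    return proj.mean(axis=0)                               # placeholder only

proj = np.random.default_rng(0).uniform(0.5, 2.0, (180, 64, 64))
prior = dummy_fbp(proj)                                    # pretend low-scatter prior
img, weights = scatter_correct(proj, prior, dummy_fbp)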
Educational aspects of molecular simulation
NASA Astrophysics Data System (ADS)
Allen, Michael P.
This article addresses some aspects of teaching simulation methods to undergraduates and graduate students. Simulation is increasingly a cross-disciplinary activity, which means that the students who need to learn about simulation methods may have widely differing backgrounds. Also, they may have a wide range of views on what constitutes an interesting application of simulation methods. Almost always, a successful simulation course includes an element of practical, hands-on activity: a balance always needs to be struck between treating the simulation software as a 'black box', and becoming bogged down in programming issues. With notebook computers becoming widely available, students often wish to take away the programs to run themselves, and access to raw computer power is not the limiting factor that it once was; on the other hand, the software should be portable and, if possible, free. Examples will be drawn from the author's experience in three different contexts. (1) An annual simulation summer school for graduate students, run by the UK CCP5 organization, in which practical sessions are combined with an intensive programme of lectures describing the methodology. (2) A molecular modelling module, given as part of a doctoral training centre in the Life Sciences at Warwick, for students who might not have a first degree in the physical sciences. (3) An undergraduate module in Physics at Warwick, also taken by students from other disciplines, teaching high performance computing, visualization, and scripting in the context of a physical application such as Monte Carlo simulation.
Bhawamai, Sassy; Lin, Shyh-Hsiang; Hou, Yuan-Yu; Chen, Yue-Hwa
2016-01-01
Evidence on biological activities of cooked black rice is limited. This study examined the effects of washing and cooking on the bioactive ingredients and biological activities of black rice. Cooked rice was prepared by washing 0-3 times followed by cooking in a rice cooker. The acidic methanol extracts of raw and cooked rice were used for the analyses. Raw black rice, both washed and unwashed, had higher contents of polyphenols, anthocyanins, and cyanidin-3-glucoside (C3G), but lower protocatechuic acid (PA), than did cooked samples. Similarly, raw rice extracts were higher in ferric-reducing antioxidant power (FRAP) activities than extracts of cooked samples. Nonetheless, extracts of raw and cooked rice showed similar inhibitory potencies on nitric oxide, tumor necrosis factor-α, and interleukin-6 productions in lipopolysaccharide-activated macrophages, whereas equivalent amounts of C3G and PA did not possess such inhibitory effects. Thermal cooking decreased total anthocyanin and C3G contents and the FRAP antioxidative capacity, but did not affect anti-inflammatory activities of black rice. Neither C3G nor PA contributed to the anti-inflammatory activity of black rice.
Ding, Mingya; Li, Zhen; Yu, Xie-An; Zhang, Dong; Li, Jin; Wang, Hui; He, Jun; Gao, Xiu-Mei; Chang, Yan-Xu
2018-07-15
This study aimed to clarify the difference between the effective compounds of raw and processed Farfarae flos using a network pharmacology-integrated metabolomics strategy. First, metabolomics data were obtained by ultra high-performance liquid chromatography-quadrupole-time of flight mass spectrometry (UHPLC-Q-TOF/MS). Then, metabolomics analysis was developed to screen for the influential compounds that were different between raw and processed Farfarae flos. Finally, a network pharmacology approach was applied to verify the activity of the screened compounds. As a result, 4 compounds (chlorogenic acid, caffeic acid, rutin and isoquercitrin) were successfully screened, identified, quantified and verified as the most influential effective compounds. They may synergistically inhibit the p38, JNK and ERK-mediated pathways, which would induce the inhibition of the expression of the IFA virus. The results revealed that the proposed network pharmacology-integrated metabolomics strategy was a powerful tool for discovering the effective compounds that were responsible for the difference between raw and processed Chinese herbs. Copyright © 2018 Elsevier B.V. All rights reserved.
Reducing power consumption during execution of an application on a plurality of compute nodes
Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.
2013-09-10
Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.
Image acquisition system using on sensor compressed sampling technique
NASA Astrophysics Data System (ADS)
Gupta, Pravir Singh; Choi, Gwan Seong
2018-01-01
Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
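To illustrate the compressed sensing principle at the heart of such a sensor (not the paper's pixel circuit), the sketch below measures a sparse signal with a random ±1 matrix and reconstructs it off-sensor with orthogonal matching pursuit; the sizes and sparsity level are arbitrary.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                       # samples, measurements, sparsity

# sparse scene (e.g. a patch that is sparse in some basis)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# random +/-1 sensing matrix, the kind that is cheap to realise at pixel level
phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = phi @ x                                # compressed read-out (m << n values)

# off-sensor reconstruction from the compressed samples
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(phi, y)
x_hat = omp.coef_
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))   # small recovery error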
High-performance liquid-catalyst fuel cell for direct biomass-into-electricity conversion.
Liu, Wei; Mu, Wei; Deng, Yulin
2014-12-01
Herein, we report high-performance fuel cells that are catalyzed solely by polyoxometalate (POM) solution without any solid metal or metal oxide. The novel design of the liquid-catalyst fuel cells (LCFC) changes the traditional gas-solid-surface heterogeneous reactions to liquid-catalysis reactions. With this design, raw biomasses, such as cellulose, starch, and even grass or wood powders can be directly converted into electricity. The power densities of the fuel cell with switchgrass (dry powder) and bush allamanda (freshly collected) are 44 mW cm(-2) and 51 mW cm(-2) respectively. For the cellulose-based biomass fuel cell, the power density is almost 3000 times higher than that of cellulose-based microbial fuel cells. Unlike noble-metal catalysts, POMs are tolerant to most organic and inorganic contaminants. Therefore, almost any raw biomass can be used directly to produce electricity without prior purification. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Kurd, Forouzan; Samavati, Vahid
2015-03-01
Polysaccharides from Spirulina platensis algae (SP) were extracted by an ultrasound-assisted extraction procedure. The optimal conditions for ultrasonic extraction of SP were determined by response surface methodology. The four parameters were extraction time (X1), extraction temperature (X2), ultrasonic power (X3) and the ratio of water to raw material (X4). The experimental data obtained were fitted to a second-order polynomial equation. The optimum conditions were extraction time of 25 min, extraction temperature 85°C, ultrasonic power 90 W and ratio of water to raw material 20 mL/g. Under these optimal conditions, the experimental yield was 13.583±0.51%, in good agreement with the model prediction (coefficient of determination R2 = 0.9971). Then, we demonstrated that SP polysaccharides had strong scavenging activities in vitro on DPPH and hydroxyl radicals. Overall, SP may have potential applications in the medical and food industries. Copyright © 2015 Elsevier B.V. All rights reserved.
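The second-order polynomial fit underlying such a response surface analysis can be sketched as follows; the mock yield data and the regression-only treatment (no optimization step) are simplifications for illustration.

import numpy as np

def quadratic_design(X):
    """Design matrix with intercept, linear, interaction and squared terms."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

# X columns: time (min), temperature (C), power (W), water:material ratio (mL/g)
rng = np.random.default_rng(0)
X = rng.uniform([10, 60, 50, 10], [40, 95, 120, 30], size=(30, 4))
# mock yields, invented purely so the fit runs end to end
y = 5 + 0.2 * X[:, 0] + 0.1 * X[:, 1] - 0.004 * X[:, 0] ** 2 + rng.normal(0, 0.3, 30)

beta, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
x_new = np.array([[25.0, 85.0, 90.0, 20.0]])        # the reported optimum settings
print(quadratic_design(x_new) @ beta)               # predicted extraction yield (%)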
Recent Progress In Infrared Chalcogenide Glass Fibers
NASA Astrophysics Data System (ADS)
Bornstein, A.; Croitoru, N.; Marom, E.
1984-10-01
Chalcogenide glasses containing elements like As, Ge, Sb and Se have been prepared. A new technique of preparing the raw material and subsequently drawing fibers has been developed in order to avoid the formation of oxygen compounds. The fibers have been drawn by the crucible and rod method from oxygen-free raw material inside an Ar atmosphere glove box. The fibers drawn to date with air and glass cladding have diameters of 50-500 μm and lengths of several meters. Preliminary attenuation measurements indicate that the attenuation is better than 0.1 dB/cm and it is not affected even when the fiber is bent to a 2 cm circular radius. The fibers were tested with a CO laser beam and were not damaged at power densities below 10 kW/cm2 CW and 100 kW/cm2 using short pulses of 75 ns. The transmitted power density was 0.8 kW/cm2, which is an appropriate value for the cutting and ablation of human tissue.
Dataset of Scientific Inquiry Learning Environment
ERIC Educational Resources Information Center
Ting, Choo-Yee; Ho, Chiung Ching
2015-01-01
This paper presents the dataset collected from student interactions with INQPRO, a computer-based scientific inquiry learning environment. The dataset contains records of 100 students and is divided into two portions. The first portion comprises (1) "raw log data", capturing the student's name, interfaces visited, the interface…
Insect pest management for raw commodities during storage
USDA-ARS?s Scientific Manuscript database
This book chapter provides an overview of the pest management decision-making process during grain storage. An in-depth discussion of sampling methods, cost-benefit analysis, expert systems, consultants and the use of computer simulation models is provided. Sampling is essential to determine if pest...
Crawford, John R; Garthwaite, Paul H; Lawrie, Caroline J; Henry, Julie D; MacDonald, Marie A; Sutherland, Jane; Sinha, Priyanka
2009-06-01
A series of recent papers have reported normative data from the general adult population for commonly used self-report mood scales. The aim was to bring together and supplement these data in order to provide a convenient means of obtaining percentile norms for the mood scales. A computer program was developed that provides point and interval estimates of the percentile rank corresponding to raw scores on the various self-report scales. The program can be used to obtain point and interval estimates of the percentile rank of an individual's raw scores on the DASS, DASS-21, HADS, PANAS, and sAD mood scales, based on normative sample sizes ranging from 758 to 3822. The interval estimates can be obtained using either classical or Bayesian methods as preferred. The computer program (which can be downloaded at www.abdn.ac.uk/~psy086/dept/MoodScore.htm) provides a convenient and reliable means of supplementing existing cut-off scores for self-report mood scales.
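A simplified sketch of the percentile-rank computation is shown below; it uses a mid-rank point estimate and a Wilson score interval, which is a stand-in for the classical and Bayesian interval methods offered by the published program, and the normative data are mock values.

import numpy as np

def percentile_norm(raw_score, norm_sample, z=1.96):
    """Point and interval estimate of the percentile rank of a raw score."""
    norm_sample = np.asarray(norm_sample)
    n = len(norm_sample)
    below = np.sum(norm_sample < raw_score)
    ties = np.sum(norm_sample == raw_score)
    p = (below + 0.5 * ties) / n                     # mid-rank point estimate
    # Wilson score interval on the underlying proportion
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return 100 * p, 100 * max(centre - half, 0.0), 100 * min(centre + half, 1.0)

# e.g. an anxiety raw score of 9 against a mock normative sample
norms = np.random.default_rng(0).poisson(6, size=1758)
print(percentile_norm(9, norms))   # percentile, lower bound, upper bound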
CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.
Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos
2013-12-31
Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to choice of parameters thus translates to variation of performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.
NASA Astrophysics Data System (ADS)
Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken
2011-04-01
A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives (SSDs). By monitoring the error number or the write/erase cycles, the ECC codeword dynamically increases from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte…32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC. As a result, the acceptable raw bit error rate, BER, before ECC is enhanced. Assuming a NAND Flash memory which requires 8-bit correction in 512 Byte codeword ECC, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital-still camera and high-speed memory card applications with a dual channel interleaving, 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8 channel interleaving, 13-times higher acceptable raw BER is realized. Because the ratio of the user data to the parity bits is the same in each ECC codeword, no additional memory area is required. Note that the reliability of the SSD is improved after manufacturing without a cost penalty. Compared with the conventional ECC with the fixed large 32 KByte codeword, the proposed scheme achieves a lower power consumption by introducing the "best-effort" type operation. In the proposed scheme, during most of the SSD's lifetime, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte and 2 KByte is used and 98% lower power consumption is realized. At the end of the SSD's life, a strong ECC with a 32 KByte codeword is used and highly reliable operation is achieved. The random read performance, estimated from the latency, is also discussed. The latency is below 1.5 ms for ECC codewords up to 32 KByte. This is below the 2 ms average latency of a 15,000 rpm HDD.
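The codeword transition policy can be caricatured as a small lookup, as in the sketch below; the per-size raw-BER thresholds are invented placeholders, not characterized NAND figures.

CODEWORD_SIZES = [512, 1024, 2048, 4096, 8192, 16384, 32768]   # bytes

# hypothetical maximum tolerable raw BER for each codeword length; real values
# would come from characterising the ECC strength against the NAND error model
MAX_RAW_BER = {512: 1e-4, 1024: 2e-4, 2048: 4e-4, 4096: 7e-4,
               8192: 1.1e-3, 16384: 1.5e-3, 32768: 1.8e-3}

def select_codeword(raw_ber):
    """Return the shortest (lowest-power) codeword that still meets the target.

    Mirrors the 'best-effort' idea: short codewords early in life, the full
    32 KByte codeword only when wear pushes the raw BER up.
    """
    for size in CODEWORD_SIZES:
        if raw_ber <= MAX_RAW_BER[size]:
            return size
    return CODEWORD_SIZES[-1]

for ber in (5e-5, 5e-4, 1.6e-3):
    print(f"raw BER {ber:g} -> {select_codeword(ber)} Byte codeword")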
Networking DEC and IBM computers
NASA Technical Reports Server (NTRS)
Mish, W. H.
1983-01-01
Local Area Networking of DEC and IBM computers within the structure of the ISO-OSI Seven Layer Reference Model at a raw signaling speed of 1 Mbps or greater is discussed. After an introduction to the ISO-OSI Reference Model and the IEEE-802 Draft Standard for Local Area Networks (LANs), there follows a detailed discussion and comparison of the products available from a variety of manufacturers to perform this networking task. A summary of these products is presented in a table.
1981-03-12
agriculture, raw materials, energy sources, computers, lasers, space and aeronautics, high energy physics, and genetics. The four modernizations will be ... accomplished and the strong socialist country that is born at the end of the century will be a keyhole for the promotion of science and technology ... Process (FNP). Its purpose is to connect with the Kiautsu University computer (model 108) and then to connect a data terminal. This will make a
Reducing power consumption during execution of an application on a plurality of compute nodes
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-06-05
Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: executing, by each compute node, an application, the application including power consumption directives corresponding to one or more portions of the application; identifying, by each compute node, the power consumption directives included within the application during execution of the portions of the application corresponding to those identified power consumption directives; and reducing power, by each compute node, to one or more components of that compute node according to the identified power consumption directives during execution of the portions of the application corresponding to those identified power consumption directives.
Genetics and genomics in bioenergy and bioproducts
James S. McLaren; Thomas W. Jeffries
2003-01-01
There is now widespread acknowledgment that renewable bioresources, as a platform for raw inputs and as a source of powerful biocatalysts, have considerable potential to support sustainable economic growth, increase national energy security, strengthen rural communities, and minimize anthropogenic effects on the environment. Clearly, the transition to renewable...
Code of Federal Regulations, 2012 CFR
2012-07-01
... perfluoropolyether, and any hydrofluoropolyether. Fossil fuel means natural gas, petroleum, coal, or any form of... generator. Emergency equipment means any auxiliary fossil fuel-powered equipment, such as a fire pump, that... the kiln to produce heat to form the clinker product. Feedstock means raw material inputs to a process...
Code of Federal Regulations, 2011 CFR
2011-07-01
... perfluoropolyether, and any hydrofluoropolyether. Fossil fuel means natural gas, petroleum, coal, or any form of... generator. Emergency equipment means any auxiliary fossil fuel-powered equipment, such as a fire pump, that... the kiln to produce heat to form the clinker product. Feedstock means raw material inputs to a process...
Code of Federal Regulations, 2014 CFR
2014-07-01
... perfluoropolyether, and any hydrofluoropolyether. Fossil fuel means natural gas, petroleum, coal, or any form of... generator. Emergency equipment means any auxiliary fossil fuel-powered equipment, such as a fire pump, that... the kiln to produce heat to form the clinker product. Feedstock means raw material inputs to a process...
Code of Federal Regulations, 2013 CFR
2013-07-01
... perfluoropolyether, and any hydrofluoropolyether. Fossil fuel means natural gas, petroleum, coal, or any form of... generator. Emergency equipment means any auxiliary fossil fuel-powered equipment, such as a fire pump, that... the kiln to produce heat to form the clinker product. Feedstock means raw material inputs to a process...
Teaching the Holocaust through Film.
ERIC Educational Resources Information Center
Michalczyk, John J.
The use of Holocaust-related films and Holocaust survivors as classroom resources is analyzed. The perspective and function of four film genres are outlined as follows. Newsreels, made by the Nazis to chronicle their "progress," provide powerful raw footage of the concentration camp experience. Documentaries, generally made by Allied…
A Practical Theory of Micro-Solar Power Sensor Networks
2009-04-20
[Table residue: simulation platforms (TOSSIM, ns-2, Matlab, C++, AVRORA) and their reference hardware (Mica2, WINS, Medusa, Mica) with simulated power models.] From this raw data, we can ... [Figure residue: histograms of the correlation coefficient with the solar radiation measurement (turbidity).]
Near Real-Time Processing and Archiving of GPS Surveys for Crustal Motion Monitoring
NASA Astrophysics Data System (ADS)
Crowell, B. W.; Bock, Y.
2008-12-01
We present an inverse instantaneous RTK method for rapidly processing and archiving GPS data for crustal motion surveys that gives positional accuracy similar to traditional post-processing methods. We first stream 1 Hz data from GPS receivers over Bluetooth to Verizon XV6700 smartphones equipped with Geodetics, Inc. RTD Rover software. The smartphone transmits raw receiver data to a real-time server at the Scripps Orbit and Permanent Array Center (SOPAC) running RTD Pro. At the server, instantaneous positions are computed every second relative to the three closest base stations in the California Real Time Network (CRTN), using ultra-rapid orbits produced by SOPAC, the NOAATrop real-time tropospheric delay model, and ITRF2005 coordinates computed by SOPAC for the CRTN stations. The raw data are converted on-the-fly to RINEX format at the server. Data in both formats are stored on the server along with a file of instantaneous positions, computed independently at each observation epoch. The single-epoch instantaneous positions are continuously transmitted back to the field surveyor's smartphone, where RTD Rover computes a median position and interquartile range for each new epoch of observation. The best-fit solution is the last median position and is available as soon as the survey is completed. We describe how we used this method to process 1 Hz data from the February, 2008 Imperial Valley GPS survey of 38 geodetic monuments established by Imperial College, London in the 1970's, and previously measured by SOPAC using rapid-static GPS methods in 1993, 1999 and 2000, as well as 14 National Geodetic Survey (NGS) monuments. For redundancy, each monument was surveyed for about 15 minutes at least twice and at staggered intervals using two survey teams operating autonomously. Archiving of data and the overall project at SOPAC is performed using the PGM software, developed by the California Spatial Reference Center (CSRC) for the National Geodetic Survey (NGS). The importation of raw receiver data, site metadata and antenna height information is performed using PGM client software running on the same PDA running RTD Rover or laptop, and uploaded to the PGM server where the raw data are converted to RINEX format. The campaign information is then published online, where all of the campaign information can be accessed such as start and stop times, equipment information, RINEX and solution SINEX files, observer information and baseline information for network adjustments.
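The per-epoch summary reported back to the field surveyor can be sketched as follows; the coordinates, noise levels and outlier pattern are synthetic and only illustrate the median/interquartile-range computation.

import numpy as np

def epoch_summary(positions):
    """Median position and interquartile range from instantaneous solutions.

    `positions` is an (n_epochs, 3) array of single-epoch east/north/up
    coordinates in metres.
    """
    positions = np.asarray(positions)
    median = np.median(positions, axis=0)
    iqr = np.percentile(positions, 75, axis=0) - np.percentile(positions, 25, axis=0)
    return median, iqr

# 15 minutes of 1 Hz instantaneous positions with a few noisy epochs
rng = np.random.default_rng(0)
pos = rng.normal([102.314, -56.870, 12.405], [0.01, 0.01, 0.03], size=(900, 3))
pos[::120] += 0.5                      # occasional multipath-like outliers
print(epoch_summary(pos))              # best-fit position and its spread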
Energy-efficient digital and wireless IC design for wireless smart sensing
NASA Astrophysics Data System (ADS)
Zhou, Jun; Huang, Xiongchuan; Wang, Chao; Tae-Hyoung Kim, Tony; Lian, Yong
2017-10-01
Wireless smart sensing is now widely used in various applications such as health monitoring and structural monitoring. In conventional wireless sensor nodes, significant power is consumed in wirelessly transmitting the raw data. Smart sensing adds local intelligence to the sensor node and reduces the amount of wireless data transmission via on-node digital signal processing. While the total power consumption is reduced compared to conventional wireless sensing, the power consumption of the digital processing becomes as dominant as wireless data transmission. This paper reviews the state-of-the-art energy-efficient digital and wireless IC design techniques for reducing the power consumption of the wireless smart sensor node to prolong battery life and enable self-powered applications.
NASA Technical Reports Server (NTRS)
Habegger, L. J.; Gasper, J. R.; Brown, C.
1980-01-01
Data readily available from the literature were used to make an initial comparison of the health and safety risks of a fission power system with fuel reprocessing; a combined-cycle coal power system with a low-Btu gasifier and open-cycle gas turbine; a central-station, terrestrial, solar photovoltaic power system; the satellite power system; and a first-generation fusion system. The assessment approach consists of the identification of health and safety issues in each phase of the energy cycle from raw material extraction through electrical generation, waste disposal, and system deactivation; quantitative or qualitative evaluation of impact severity; and the rating of each issue with regard to known or potential impact level and level of uncertainty.
Ogata, Y; Nishizawa, K
1995-10-01
An automated smear counting and data processing system for a life science laboratory was developed to facilitate routine surveys and eliminate human errors by using a notebook computer. This system was composed of a personal computer, a liquid scintillation counter and a well-type NaI(Tl) scintillation counter. The radioactivity of smear samples was automatically measured by these counters. The personal computer received raw signals from the counters through an RS-232C interface. The software for the computer evaluated the surface density of each radioisotope and printed out that value along with other items as a report. The software was programmed in Pascal. This system was successfully applied to routine surveys for contamination in our facility.
Raw Cow Milk Bacterial Population Shifts Attributable to Refrigeration
Lafarge, Véronique; Ogier, Jean-Claude; Girard, Victoria; Maladen, Véronique; Leveau, Jean-Yves; Gruss, Alexandra; Delacroix-Buchet, Agnès
2004-01-01
We monitored the dynamic changes in the bacterial population in milk associated with refrigeration. Direct analyses of DNA by using temporal temperature gel electrophoresis (TTGE) and denaturing gradient gel electrophoresis (DGGE) allowed us to make accurate species assignments for bacteria with low-GC-content (low-GC%) (<55%) and medium- or high-GC% (>55%) genomes, respectively. We examined raw milk samples before and after 24-h conservation at 4°C. Bacterial identification was facilitated by comparison with an extensive bacterial reference database (∼150 species) that we established with DNA fragments of pure bacterial strains. Cloning and sequencing of fragments missing from the database were used to achieve complete species identification. Considerable evolution of bacterial populations occurred during conservation at 4°C. TTGE and DGGE are shown to be powerful tools for identifying the main bacterial species of the raw milk samples and for monitoring changes in bacterial populations during conservation at 4°C. The emergence of psychrotrophic bacteria such as Listeria spp. or Aeromonas hydrophila is demonstrated. PMID:15345453
Giaretta, Nicola; Di Giuseppe, Antonella M A; Lippert, Martina; Parente, Augusto; Di Maro, Antimo
2013-12-01
The identification of meat animal species used in raw burgers is very important with respect to economic and religious considerations. Therefore, international supervisory bodies have implemented procedures to control the employed meat species. In this paper we propose myoglobin as a powerful molecular marker to evaluate the presence of non-declared meat addition in raw beef burgers by using ultra-performance liquid chromatography (UPLC) for the separation and identification of edible animal species (beef, chicken, horse, ostrich, pig and water buffalo). Meat samples were pre-treated with sodium nitrite to transform oxymyoglobin and deoxymyoglobin to the more stable metmyoglobin. The developed method was validated, preparing mixtures with different percentages of pork and beef minced meat. The obtained results show that using myoglobin as marker, 5% (25 mg/500 mg) of pork or beef meat can be detected in premixed minced meat samples. Copyright © 2013 Elsevier Ltd. All rights reserved.
Accurate and efficient calculation of response times for groundwater flow
NASA Astrophysics Data System (ADS)
Carr, Elliot J.; Simpson, Matthew J.
2018-03-01
We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L^2/D, where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
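As a rough numerical check of the L^2/D scaling mentioned above (not the moment-based estimator developed in the paper), the sketch below times how long a 1D confined flow problem takes to come within 1% of steady state for several domain lengths.

import numpy as np

def response_time(L=10.0, D=1.0, n=201, tol=0.01):
    """Time for 1D transient flow to come within `tol` of steady state.

    Explicit finite differences for h_t = D h_xx on [0, L] with h(0)=1,
    h(L)=0 and h(x,0)=0; purely a scaling check, not a practical estimator.
    """
    dx = L / (n - 1)
    dt = 0.25 * dx**2 / D                     # stable explicit time step
    x = np.linspace(0.0, L, n)
    h = np.zeros(n); h[0] = 1.0
    steady = 1.0 - x / L
    t = 0.0
    while np.max(np.abs(h - steady)) > tol:
        h[1:-1] += D * dt / dx**2 * (h[2:] - 2 * h[1:-1] + h[:-2])
        t += dt
    return t

D = 1.0
for L in (5.0, 10.0, 20.0):
    t = response_time(L=L, D=D)
    print(L, round(t, 3), round(t * D / L**2, 4))   # t*D/L^2 is roughly constant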
Next Generation Sequence Analysis and Computational Genomics Using Graphical Pipeline Workflows
Torri, Federica; Dinov, Ivo D.; Zamanyan, Alen; Hobel, Sam; Genco, Alex; Petrosyan, Petros; Clark, Andrew P.; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Knowles, James A.; Ames, Joseph; Kesselman, Carl; Toga, Arthur W.; Potkin, Steven G.; Vawter, Marquis P.; Macciardi, Fabio
2012-01-01
Whole-genome and exome sequencing have already proven to be essential and powerful methods to identify genes responsible for simple Mendelian inherited disorders. These methods can be applied to complex disorders as well, and have been adopted as one of the current mainstream approaches in population genetics. These achievements have been made possible by next generation sequencing (NGS) technologies, which require substantial bioinformatics resources to analyze the dense and complex sequence data. The huge analytical burden of data from genome sequencing might be seen as a bottleneck slowing the publication of NGS papers at this time, especially in psychiatric genetics. We review the existing methods for processing NGS data, to place into context the rationale for the design of a computational resource. We describe our method, the Graphical Pipeline for Computational Genomics (GPCG), to perform the computational steps required to analyze NGS data. The GPCG implements flexible workflows for basic sequence alignment, sequence data quality control, single nucleotide polymorphism analysis, copy number variant identification, annotation, and visualization of results. These workflows cover all the analytical steps required for NGS data, from processing the raw reads to variant calling and annotation. The current version of the pipeline is freely available at http://pipeline.loni.ucla.edu. These applications of NGS analysis may gain clinical utility in the near future (e.g., identifying miRNA signatures in diseases) when the bioinformatics approach is made feasible. Taken together, the annotation tools and strategies that have been developed to retrieve information and test hypotheses about the functional role of variants present in the human genome will help to pinpoint the genetic risk factors for psychiatric disorders. PMID:23139896
Common computational properties found in natural sensory systems
NASA Astrophysics Data System (ADS)
Brooks, Geoffrey
2009-05-01
Throughout the animal kingdom there are many existing sensory systems with capabilities desired by the human designers of new sensory and computational systems. There are a few basic design principles constantly observed among these natural mechano-, chemo-, and photo-sensory systems, principles that have been proven by the test of time. Such principles include non-uniform sampling and processing, topological computing, contrast enhancement by localized signal inhibition, graded localized signal processing, spiked signal transmission, and coarse coding, which is the computational transformation of raw data using broadly overlapping filters. These principles are outlined here with references to natural biological sensory systems as well as successful biomimetic sensory systems exploiting these natural design concepts.
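The coarse-coding principle mentioned above can be illustrated in a few lines of code. The sketch below (all tuning parameters hypothetical) encodes a scalar stimulus with broadly overlapping Gaussian filters and recovers an approximate estimate from the channel responses.

```python
# Minimal sketch of coarse coding: a scalar stimulus is represented by the
# responses of a few broadly tuned, overlapping Gaussian filters, and can be
# approximately recovered from those responses. All tuning parameters are hypothetical.
import numpy as np

centers = np.linspace(0.0, 1.0, 5)   # preferred stimuli of 5 broad channels
width = 0.35                          # broad tuning -> heavy overlap between channels

def encode(x):
    """Responses of the overlapping channels to stimulus x."""
    return np.exp(-0.5 * ((x - centers) / width) ** 2)

def decode(responses):
    """Recover the stimulus as the response-weighted mean of channel centers.
    The estimate is approximate; edge channels bias it toward the middle."""
    return float(np.sum(responses * centers) / np.sum(responses))

x = 0.62
r = encode(x)
print("channel responses:", np.round(r, 3))
print("decoded estimate:", round(decode(r), 3))
```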
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-01-10
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
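A toy sketch of the idea described in this record, not the patented implementation: each simulated compute node lowers power when it begins a blocking collective operation and restores it once every node has begun, modeled here with a thread barrier. Node count and timing are made up.

```python
# Minimal sketch (hypothetical): each simulated compute node reduces power to
# some components when it begins a blocking collective operation, and restores
# power once all nodes have begun it. A thread barrier stands in for the collective.
import threading, random, time

N_NODES = 4
barrier = threading.Barrier(N_NODES)   # stands in for the blocking collective operation

def set_power(node, level):
    print(f"node {node}: power -> {level}")

def node_main(node):
    time.sleep(random.uniform(0.0, 0.5))   # nodes reach the collective asynchronously
    set_power(node, "reduced")              # reduce power on entering the blocking op
    barrier.wait()                          # returns only after all nodes have begun
    set_power(node, "restored")             # restore power once everyone has arrived

threads = [threading.Thread(target=node_main, args=(i,)) for i in range(N_NODES)]
for t in threads: t.start()
for t in threads: t.join()
```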
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-04-17
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
A Half-Time Mill School. Bulletin, 1919, No. 6
ERIC Educational Resources Information Center
Foght, H. W.
1919-01-01
Until a few years ago the Southern States were considered in the main an agricultural section. More recently the advantageous location in respect to raw materials, minerals, water, and electric power of the South Atlantic States has occasioned an almost unprecedented growth in manufacturing industries. Particularly has the cotton manufacturing…
40 CFR 89.401 - Scope; applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... subpart B of this part. (b) Exhaust gases, either raw or dilute, are sampled while the test engine is operated using the appropriate test cycle on an engine dynamometer. The exhaust gases receive specific... the power output during each mode. Emissions are reported as grams per kilowatt hour (g/kW-hr). (c...
40 CFR 89.301 - Scope; applicability.
Code of Federal Regulations, 2010 CFR
2010-07-01
... subpart B of part 89. (b) Exhaust gases, either raw or dilute, are sampled while the test engine is operated using an 8-mode test cycle on an engine dynamometer. The exhaust gases receive specific component analysis determining concentration of pollutant, exhaust volume, the fuel flow, and the power output during...
NASA Astrophysics Data System (ADS)
Keller, Brad M.; Nathan, Diane L.; Conant, Emily F.; Kontos, Despina
2012-03-01
Breast percent density (PD%), as measured mammographically, is one of the strongest known risk factors for breast cancer. While the majority of studies to date have focused on PD% assessment from digitized film mammograms, digital mammography (DM) is becoming increasingly common, and allows for direct PD% assessment at the time of imaging. This work investigates the accuracy of a generalized linear model-based (GLM) estimation of PD% from raw and postprocessed digital mammograms, utilizing image acquisition physics, patient characteristics and gray-level intensity features of the specific image. The model is trained in a leave-one-woman-out fashion on a series of 81 cases for which bilateral, mediolateral-oblique DM images were available in both raw and post-processed format. Baseline continuous and categorical density estimates were provided by a trained breast-imaging radiologist. Regression analysis is performed and Pearson's correlation, r, and Cohen's kappa, κ, are computed. The GLM PD% estimation model performed well on both processed (r=0.89, p<0.001) and raw (r=0.75, p<0.001) images. Model agreement with radiologist assigned density categories was also high for processed (κ=0.79, p<0.001) and raw (κ=0.76, p<0.001) images. Model-based prediction of breast PD% could allow for a reproducible estimation of breast density, providing a rapid risk assessment tool for clinical practice.
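A minimal sketch of the kind of model described above, on synthetic data: a linear (Gaussian) regression of PD% on per-image features, evaluated leave-one-woman-out, with scikit-learn's LinearRegression standing in for the GLM. Feature names and values are hypothetical, not the study's variables.

```python
# Minimal sketch (synthetic data; not the authors' model): regress continuous
# breast percent density on per-image gray-level, acquisition, and patient
# covariates, evaluated leave-one-out as described in the abstract.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 81
X = rng.normal(size=(n, 5))          # e.g., kVp, mAs, age, mean intensity, intensity std (assumed)
true_w = np.array([2.0, -1.0, 0.5, 4.0, 1.5])
y = 20 + X @ true_w + rng.normal(scale=2.0, size=n)   # radiologist PD% (synthetic)

preds = np.empty(n)
for train, test in LeaveOneOut().split(X):
    model = LinearRegression().fit(X[train], y[train])   # Gaussian GLM with identity link
    preds[test] = model.predict(X[test])

r, p = pearsonr(preds, y)
print(f"leave-one-out Pearson r = {r:.2f} (p = {p:.1e})")
```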
Biomass Characterization | Bioenergy | NREL
Analytical methods for biomass characterization are available for downloading, including the Biomass Compositional Methods and Molecular Beam Mass Spectrometry. We develop new methods and tools to understand the chemical composition of raw biomass.
Alternative Smoothing and Scaling Strategies for Weighted Composite Scores
ERIC Educational Resources Information Center
Moses, Tim
2014-01-01
In this study, smoothing and scaling approaches are compared for estimating subscore-to-composite scaling results involving composites computed as rounded and weighted combinations of subscores. The considered smoothing and scaling approaches included those based on raw data, on smoothing the bivariate distribution of the subscores, on smoothing…
Total organic carbon (TOC) and dissolved organic carbon (DOC) have long been used to estimate the amount of natural organic matter (NOM) found in raw and finished drinking water. In recent years, computer automation and improved instrumental analysis technologies have created a ...
A Systematic Model for Evaluating Professorial Publications
ERIC Educational Resources Information Center
Yoda, Koji
1977-01-01
A model for reviewing both quality and quantity of professorial publications establishes a variety of criteria that ideal publications should meet, provides for the assignment of relative weight to each criterion, and establishes a rating system for computing a raw score for each set of faculty publications being reviewed. (LBH)
A Decision Support System for Energy Policy Analysis.
1980-07-01
new realities or hypothesized realities to the modeling system. Lack of a PDL would make the system inflexible and accessible only to a patient ... expert. Certainly, given the present ratio of costs of personnel to costs of computers, the alternative of presenting data in its raw form is acceptable
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, H.L.
Much of the polymer composites industry is built around the thermochemical conversion of raw material into useful composites. The raw materials (molding compound, prepreg) often are made up of thermosetting resins and small fibers or particles. While this conversion can follow a large number of paths, only a few paths are efficient, economical and lead to desirable composite properties. Processing instrument (P/I) technology enables a computer to sense and interpret changes taking place during the cure of prepreg or molding compound. P/I technology has been used to make estimates of gel time and cure time, thermal diffusivity measurements and transition temperature measurements. Control and sensing software is comparatively straightforward. The interpretation of results with appropriate software is under development.
Chiron: translating nanopore raw signal directly into nucleotide sequence using deep learning.
Teng, Haotian; Cao, Minh Duc; Hall, Michael B; Duarte, Tania; Wang, Sheng; Coin, Lachlan J M
2018-05-01
Sequencing by translocating DNA fragments through an array of nanopores is a rapidly maturing technology that offers faster and cheaper sequencing than other approaches. However, accurately deciphering the DNA sequence from the noisy and complex electrical signal is challenging. Here, we report Chiron, the first deep learning model to achieve end-to-end basecalling and directly translate the raw signal to DNA sequence without the error-prone segmentation step. Trained with only a small set of 4,000 reads, we show that our model provides state-of-the-art basecalling accuracy, even on previously unseen species. Chiron achieves basecalling speeds of more than 2,000 bases per second using desktop computer graphics processing units.
Wave Energy Prize - 1/20th Testing - Oscilla Power
Scharmen, Wesley
2016-09-16
Data from the 1/20th scale testing data completed on the Wave Energy Prize for the Oscilla Power team, including the 1/20th Test Plan, raw test data, video, photos, and data analysis results. The top level objective of the 1/20th scale device testing is to obtain the necessary measurements required for determining Average Climate Capture Width per Characteristic Capital Expenditure (ACE) and the Hydrodynamic Performance Quality (HPQ), key metrics for determining the WEPrize winners.
Wave Energy Prize - 1/20th Testing - RTI Wave Power
Scharmen, Wesley
2016-09-30
Data from the 1/20th scale testing data completed on the Wave Energy Prize for the RTI Wave Power team, including the 1/20th Test Plan, raw test data, video, photos, and data analysis results. The top level objective of the 1/20th scale device testing is to obtain the necessary measurements required for determining Average Climate Capture Width per Characteristic Capital Expenditure (ACE) and the Hydrodynamic Performance Quality (HPQ), key metrics for determining the Wave Energy Prize (WEP) winners.
NASA Astrophysics Data System (ADS)
Ambarita, H.; Sinulingga, E. P.; Nasution, M. KM; Kawai, H.
2017-03-01
In this work, a compression ignition (CI) engine is tested in dual-fuel mode (diesel and raw biogas). The objective is to examine the performance and emission characteristics of the engine when some of the diesel oil is replaced by biogas. The CI engine is an air-cooled, single horizontal cylinder, four-stroke unit with a maximum output power of 4.86 kW, coupled with a synchronous three-phase generator. The load, engine speed, and biogas flow rate are varied from 600 W to 1500 W, 1000 rpm to 1500 rpm, and 0 to 6 L/minute, respectively. The electric power, specific fuel consumption, thermal efficiency, gas emissions, and diesel replacement ratio are analyzed. The results show that there is no significant difference in the power produced by the CI engine in dual-fuel mode in comparison with pure diesel mode. However, the specific fuel consumption and efficiency decrease significantly as the biogas flow rate increases. On the other hand, emissions of the engine in dual-fuel mode are better. The main conclusion that can be drawn is that a CI engine without significant modification can be operated perfectly in dual-fuel mode and that diesel oil consumption can be decreased by up to 87.5%.
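For readers who want to see how the reported metrics are typically computed, here is a small sketch with made-up numbers. The heating values, flow rates, and power below are assumptions, not measurements from this study.

```python
# Minimal sketch (hypothetical numbers) of the dual-fuel performance metrics named
# in the abstract: brake specific fuel consumption, thermal efficiency, and diesel
# replacement ratio at one operating point.
DIESEL_LHV = 42.5e6   # J/kg, lower heating value (typical value, assumed)
BIOGAS_LHV = 20.0e6   # J/kg, depends strongly on methane fraction (assumed)

def dual_fuel_metrics(power_w, m_diesel_kg_h, m_biogas_kg_h, m_diesel_only_kg_h):
    fuel_energy_w = (m_diesel_kg_h * DIESEL_LHV + m_biogas_kg_h * BIOGAS_LHV) / 3600.0
    sfc = (m_diesel_kg_h + m_biogas_kg_h) / (power_w / 1000.0)   # kg/kWh
    eta = power_w / fuel_energy_w                                 # thermal efficiency
    replacement = 1.0 - m_diesel_kg_h / m_diesel_only_kg_h        # fraction of diesel replaced
    return sfc, eta, replacement

sfc, eta, repl = dual_fuel_metrics(power_w=1200.0, m_diesel_kg_h=0.10,
                                   m_biogas_kg_h=0.40, m_diesel_only_kg_h=0.45)
print(f"SFC = {sfc:.2f} kg/kWh, efficiency = {eta:.1%}, diesel replacement = {repl:.0%}")
```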
Virtual head rotation reveals a process of route reconstruction from human vestibular signals
Day, Brian L; Fitzpatrick, Richard C
2005-01-01
The vestibular organs can feed perceptual processes that build a picture of our route as we move about in the world. However, raw vestibular signals do not define the path taken because, during travel, the head can undergo accelerations unrelated to the route and also be orientated in any direction to vary the signal. This study investigated the computational process by which the brain transforms raw vestibular signals for the purpose of route reconstruction. We electrically stimulated the vestibular nerves of human subjects to evoke a virtual head rotation fixed in skull co-ordinates and measure its perceptual effect. The virtual head rotation caused subjects to perceive an illusory whole-body rotation that was a cyclic function of head-pitch angle. They perceived whole-body yaw rotation in one direction with the head pitched forwards, the opposite direction with the head pitched backwards, and no rotation with the head in an intermediate position. A model based on vector operations and the anatomy and firing properties of semicircular canals precisely predicted these perceptions. In effect, a neural process computes the vector dot product between the craniocentric vestibular vector of head rotation and the gravitational unit vector. This computation yields the signal of body rotation in the horizontal plane that feeds our perception of the route travelled. PMID:16002439
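The dot-product computation described above can be sketched in a few lines. The rotation vector, axis conventions, and pitch angles below are illustrative assumptions, not the study's stimulation parameters.

```python
# Minimal sketch (illustrative, not the authors' model): project a head-fixed
# rotation vector onto the gravitational vertical to get the perceived whole-body
# yaw, which varies cyclically with head pitch as described in the abstract.
import numpy as np

def perceived_yaw(omega_head, pitch_deg):
    """omega_head: rotation vector in skull coordinates (x forward, y left, z up)."""
    p = np.radians(pitch_deg)
    # rotation of the head about its interaural (y) axis by the pitch angle
    R = np.array([[ np.cos(p), 0.0, np.sin(p)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(p), 0.0, np.cos(p)]])
    g_up = np.array([0.0, 0.0, 1.0])            # gravitational unit vector (up)
    return float((R @ omega_head) @ g_up)       # component of rotation about the vertical

omega = np.array([1.0, 0.0, 0.0])   # hypothetical virtual rotation about the naso-occipital axis
for pitch in (-60, 0, 60):          # forward, intermediate, backward head pitch
    print(pitch, round(perceived_yaw(omega, pitch), 3))
```

With the head pitched forward the projection is positive, with it pitched backward it reverses sign, and at the intermediate position it vanishes, matching the cyclic dependence reported above.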
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, Brad M.; Nathan, Diane L.; Wang, Yan
Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., 'FOR PROCESSING') and vendor postprocessed (i.e., 'FOR PRESENTATION'), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then aggregated into a final dense tissue segmentation that is used to compute breast PD%. Our method is validated on a group of 81 women for whom bilateral, mediolateral oblique, raw and processed screening digital mammograms were available, and agreement is assessed with both continuous and categorical density estimates made by a trained breast-imaging radiologist. Results: Strong association between algorithm-estimated and radiologist-provided breast PD% was detected for both raw (r = 0.82, p < 0.001) and processed (r = 0.85, p < 0.001) digital mammograms on a per-breast basis. Stronger agreement was found when overall breast density was assessed on a per-woman basis for both raw (r = 0.85, p < 0.001) and processed (0.89, p < 0.001) mammograms. Strong agreement between categorical density estimates was also seen (weighted Cohen's κ ≥ 0.79). Repeated measures analysis of variance demonstrated no statistically significant differences between the PD% estimates (p > 0.1) due to either presentation of the image (raw vs processed) or method of PD% assessment (radiologist vs algorithm). Conclusions: The proposed fully automated algorithm was successful in estimating breast percent density from both raw and processed digital mammographic images. Accurate assessment of a woman's breast density is critical in order for the estimate to be incorporated into risk assessment models. These results show promise for the clinical application of the algorithm in quantifying breast density in a repeatable manner, both at time of imaging as well as in retrospective studies.
Keller, Brad M.; Nathan, Diane L.; Wang, Yan; Zheng, Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina
2012-01-01
Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., “FOR PROCESSING”) and vendor postprocessed (i.e., “FOR PRESENTATION”), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then aggregated into a final dense tissue segmentation that is used to compute breast PD%. Our method is validated on a group of 81 women for whom bilateral, mediolateral oblique, raw and processed screening digital mammograms were available, and agreement is assessed with both continuous and categorical density estimates made by a trained breast-imaging radiologist. Results: Strong association between algorithm-estimated and radiologist-provided breast PD% was detected for both raw (r = 0.82, p < 0.001) and processed (r = 0.85, p < 0.001) digital mammograms on a per-breast basis. Stronger agreement was found when overall breast density was assessed on a per-woman basis for both raw (r = 0.85, p < 0.001) and processed (0.89, p < 0.001) mammograms. Strong agreement between categorical density estimates was also seen (weighted Cohen's κ ≥ 0.79). Repeated measures analysis of variance demonstrated no statistically significant differences between the PD% estimates (p > 0.1) due to either presentation of the image (raw vs processed) or method of PD% assessment (radiologist vs algorithm). Conclusions: The proposed fully automated algorithm was successful in estimating breast percent density from both raw and processed digital mammographic images. Accurate assessment of a woman's breast density is critical in order for the estimate to be incorporated into risk assessment models. 
These results show promise for the clinical application of the algorithm in quantifying breast density in a repeatable manner, both at time of imaging as well as in retrospective studies. PMID:22894417
Keller, Brad M; Nathan, Diane L; Wang, Yan; Zheng, Yuanjie; Gee, James C; Conant, Emily F; Kontos, Despina
2012-08-01
The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., "FOR PROCESSING") and vendor postprocessed (i.e., "FOR PRESENTATION"), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then aggregated into a final dense tissue segmentation that is used to compute breast PD%. Our method is validated on a group of 81 women for whom bilateral, mediolateral oblique, raw and processed screening digital mammograms were available, and agreement is assessed with both continuous and categorical density estimates made by a trained breast-imaging radiologist. Strong association between algorithm-estimated and radiologist-provided breast PD% was detected for both raw (r = 0.82, p < 0.001) and processed (r = 0.85, p < 0.001) digital mammograms on a per-breast basis. Stronger agreement was found when overall breast density was assessed on a per-woman basis for both raw (r = 0.85, p < 0.001) and processed (0.89, p < 0.001) mammograms. Strong agreement between categorical density estimates was also seen (weighted Cohen's κ ≥ 0.79). Repeated measures analysis of variance demonstrated no statistically significant differences between the PD% estimates (p > 0.1) due to either presentation of the image (raw vs processed) or method of PD% assessment (radiologist vs algorithm). The proposed fully automated algorithm was successful in estimating breast percent density from both raw and processed digital mammographic images. Accurate assessment of a woman's breast density is critical in order for the estimate to be incorporated into risk assessment models. These results show promise for the clinical application of the algorithm in quantifying breast density in a repeatable manner, both at time of imaging as well as in retrospective studies.
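A highly simplified sketch of the clustering-plus-classification idea described in these records, using synthetic pixel data: KMeans and a linear SVM stand in for the adaptive fuzzy c-means and SVM steps, and the breast masking and pectoral-muscle removal steps are omitted. All values are made up.

```python
# Minimal sketch (synthetic data, simplified): cluster breast pixels by gray level,
# let a classifier mark clusters as fibroglandular, and aggregate the dense clusters
# into a percent-density value. Not the authors' algorithm.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(1)
breast_pixels = np.concatenate([rng.normal(0.30, 0.05, 6000),    # fatty tissue (synthetic)
                                rng.normal(0.70, 0.05, 2000)])   # dense tissue (synthetic)

k = 4                                                            # cluster count (assumed fixed here)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(breast_pixels.reshape(-1, 1))

# Hypothetical training set: per-cluster features (mean, std) labeled dense / not dense.
train_X = np.array([[0.25, 0.05], [0.35, 0.05], [0.65, 0.05], [0.75, 0.05]])
train_y = np.array([0, 0, 1, 1])
svm = SVC(kernel="linear").fit(train_X, train_y)

cluster_feats = np.array([[breast_pixels[km.labels_ == c].mean(),
                           breast_pixels[km.labels_ == c].std()] for c in range(k)])
dense_clusters = svm.predict(cluster_feats).astype(bool)
pd_percent = 100.0 * np.isin(km.labels_, np.where(dense_clusters)[0]).mean()
print(f"estimated PD% = {pd_percent:.1f}")
```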
Budget-based power consumption for application execution on a plurality of compute nodes
Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E
2013-02-05
Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
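A toy sketch of the budget-based scheme described above, with hypothetical power levels, budget, and applications; it is not the patented method. Applications run in priority order at an initial power level, and a conservation action is applied once the consumption threshold is crossed.

```python
# Minimal sketch (hypothetical): run applications in priority order at an initial
# power level, accumulate consumed energy, and switch to a conservation mode once
# a predetermined budget threshold is reached.
FULL_POWER_W = 200.0       # per-node power at the initial level (assumed)
REDUCED_POWER_W = 120.0    # per-node power after conservation actions (assumed)
N_NODES = 64
BUDGET_KWH = 50.0          # predetermined power-consumption threshold (assumed)

apps = [("app_a", 3, 2.0), ("app_b", 1, 1.5), ("app_c", 2, 4.0)]  # (name, priority, hours)

consumed_kwh, level_w = 0.0, FULL_POWER_W
for name, _prio, hours in sorted(apps, key=lambda a: a[1]):        # highest priority first (1 = highest)
    consumed_kwh += N_NODES * level_w * hours / 1000.0
    if consumed_kwh >= BUDGET_KWH and level_w == FULL_POWER_W:
        level_w = REDUCED_POWER_W                                   # apply a power-conservation action
        print(f"budget reached after {name}; reducing node power to {level_w} W")
print(f"total consumption: {consumed_kwh:.1f} kWh")
```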
Budget-based power consumption for application execution on a plurality of compute nodes
Archer, Charles J; Inglett, Todd A; Ratterman, Joseph D
2012-10-23
Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
How Data Becomes Physics: Inside the RACF
Ernst, Michael; Rind, Ofer; Rajagopalan, Srini; Lauret, Jerome; Pinkenburg, Chris
2018-06-22
The RHIC & ATLAS Computing Facility (RACF) at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory sits at the center of a global computing network. It connects more than 2,500 researchers around the world with the data generated by millions of particle collisions taking place each second at Brookhaven Lab's Relativistic Heavy Ion Collider (RHIC, a DOE Office of Science User Facility for nuclear physics research), and the ATLAS experiment at the Large Hadron Collider in Europe. Watch this video to learn how the people and computing resources of the RACF serve these scientists to turn petabytes of raw data into physics discoveries.
User's guide to the Residual Gas Analyzer (RGA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Artman, S.A.
1988-08-04
The Residual Gas Analyzer (RGA), a Model 100C UTI quadrupole mass spectrometer, measures the concentrations of selected masses in the Fusion Energy Division's (FED) Advanced Toroidal Facility (ATF). The RGA software is a VAX FORTRAN computer program which controls the experimental apparatus, records the raw data, performs data reduction, and plots the data. The RGA program allows data to be collected from an RGA on ATF or from either of two RGAs in the laboratory. In the laboratory, the RGA diagnostic plays an important role in outgassing studies on various candidate materials for fusion experiments. One such material, graphite, is being used more often in fusion experiments due to its ability to withstand high power loads. One of the functions of the RGA diagnostic is to aid in the determination of the best grade of graphite to be used in these experiments and to study the procedures used to condition it. A procedure of particular interest involves baking the graphite sample in order to remove impurities that may be present in it. These impurities can be studied while in the ATF plasma or while being baked and outgassed in the laboratory. The Residual Gas Analyzer is a quadrupole mass spectrometer capable of scanning masses ranging in size from 1 atomic mass unit (amu) to 300 amu while under computer control. The procedure for collecting data for a particular mass is outlined.
Westine, Carl D; Spybrook, Jessaca; Taylor, Joseph A
2013-12-01
Prior research has focused primarily on empirically estimating design parameters for cluster-randomized trials (CRTs) of mathematics and reading achievement. Little is known about how design parameters compare across other educational outcomes. This article presents empirical estimates of design parameters that can be used to appropriately power CRTs in science education and compares them to estimates using mathematics and reading. Estimates of intraclass correlations (ICCs) are computed for unconditional two-level (students in schools) and three-level (students in schools in districts) hierarchical linear models of science achievement. Relevant student- and school-level pretest and demographic covariates are then considered, and estimates of variance explained are computed. Subjects: Five consecutive years of Texas student-level data for Grades 5, 8, 10, and 11. Science, mathematics, and reading achievement raw scores as measured by the Texas Assessment of Knowledge and Skills. Results: Findings show that ICCs in science range from .172 to .196 across grades and are generally higher than comparable statistics in mathematics, .163-.172, and reading, .099-.156. When available, a 1-year lagged student-level science pretest explains the most variability in the outcome. The 1-year lagged school-level science pretest is the best alternative in the absence of a 1-year lagged student-level science pretest. Science educational researchers should utilize design parameters derived from science achievement outcomes. © The Author(s) 2014.
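For illustration, the following sketch computes a one-way ANOVA estimate of the intraclass correlation for synthetic students-in-schools data; the variance components are made up and do not correspond to the Texas estimates reported above.

```python
# Minimal sketch (synthetic balanced data): a one-way ANOVA estimate of the
# intraclass correlation for students nested in schools, the kind of design
# parameter used to power cluster-randomized trials.
import numpy as np

rng = np.random.default_rng(2)
n_schools, n_per_school = 100, 30
school_effect = rng.normal(0.0, np.sqrt(0.18), n_schools)          # between-school variance (assumed)
scores = school_effect[:, None] + rng.normal(0.0, np.sqrt(0.82), (n_schools, n_per_school))

school_means = scores.mean(axis=1)
grand_mean = scores.mean()
msb = n_per_school * np.sum((school_means - grand_mean) ** 2) / (n_schools - 1)
msw = np.sum((scores - school_means[:, None]) ** 2) / (n_schools * (n_per_school - 1))
icc = (msb - msw) / (msb + (n_per_school - 1) * msw)
print(f"estimated ICC = {icc:.3f}")   # should land near 0.18 for these simulated variances
```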
3 CFR 8892 - Proclamation 8892 of October 19, 2012. National Forest Products Week, 2012
Code of Federal Regulations, 2013 CFR
2013-01-01
... role in powering our progress. These rich spaces have provided clean air and water for our communities... to last. Woodlands encourage tourism and recreation that create jobs and growth in our rural communities. They provide the raw materials for products we use every day, and they help produce clean...
Student Rights and Freedoms: Toward Implementation Models.
ERIC Educational Resources Information Center
Collins, Charles C.
Faculty, administration, and the board of trustees are dubious champions of student rights and freedoms. Without reliable protectors within the academic community, students have the options of securing their rights and freedoms by (1) the exercise of raw power, (2) finding a means to participate in the decision making process, or (3) seeking…
Mass spectrometry: Raw protein from the top down
NASA Astrophysics Data System (ADS)
Breuker, Kathrin
2018-02-01
Mass spectrometry is a powerful technique for analysing proteins, yet linking higher-order protein structure to amino acid sequence and post-translational modifications is far from simple. Now, a native top-down method has been developed that can provide information on higher-order protein structure and different proteoforms at the same time.
Tian, Suyang; Hao, Changchun; Xu, Guangkuan; Yang, Juanjuan; Sun, Runguang
2017-10-01
In this study, polysaccharides from Angelica sinensis were extracted using the ultrasound-assisted extraction method. Based on the results of single factor experiments and orthogonal tests, three independent variables (water/raw material ratio, ultrasound time, and ultrasound power) were selected for investigation. Then, we used response surface methodology to optimize the extraction conditions. The experimental data were fitted to a quadratic equation using multiple regression analysis, and the optimal conditions were as follows: water/raw material ratio, 43.31 mL/g; ultrasonic time, 28.06 minutes; power, 396.83 W. Under such conditions, the polysaccharide yield was 21.89±0.21%, which was well matched with the predicted yield. In vitro assays of scavenging activity toward superoxide anion radicals, hydroxyl radicals, and the 2,2-diphenyl-1-picrylhydrazyl radical showed that the polysaccharides had certain antioxidant activities and a remarkable hydroxyl radical scavenging capability. Therefore, these studies provide a reference for further research and rational development of A. sinensis polysaccharides. Copyright © 2016. Published by Elsevier B.V.
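The response-surface step described above can be sketched as a quadratic regression followed by numerical optimization. The data below are synthetic and the fitted optimum is illustrative only, not the study's reported optimum.

```python
# Minimal sketch (synthetic data): fit a second-order response surface for extraction
# yield as a function of water/material ratio, sonication time, and ultrasound power,
# then locate the fitted optimum within assumed bounds.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from scipy.optimize import minimize

rng = np.random.default_rng(3)
X = rng.uniform([20, 10, 200], [60, 40, 500], size=(30, 3))     # (ratio mL/g, time min, power W)
true_opt = np.array([43.0, 28.0, 400.0])                         # hypothetical true optimum
y = 22.0 - np.sum(((X - true_opt) / [20, 15, 150]) ** 2, axis=1) + rng.normal(0, 0.2, 30)

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

res = minimize(lambda x: -model.predict(x.reshape(1, -1))[0],
               x0=np.array([40.0, 25.0, 350.0]),
               bounds=[(20, 60), (10, 40), (200, 500)])
print("fitted optimum (ratio, time, power):", np.round(res.x, 1))
print("predicted yield at optimum:", round(-res.fun, 2))
```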
Furniture rough mill costs evaluated by computer simulation
R. Bruce Anderson
1983-01-01
A crosscut-first furniture rough mill was simulated to evaluate processing and raw material costs on an individual part basis. Distributions representing the real-world characteristics of lumber, equipment feed speeds, and processing requirements are programed into the simulation. Costs of parts from a specific cutting bill are given, and effects of lumber input costs...
CT Imaging of Hardwood Logs for Lumber Production
Daniel L. Schmoldt; Pei Li; A. Lynn Abbott
1996-01-01
Hardwood sawmill operators need to improve the conversion of raw material (logs) into lumber. Internal log scanning provides detailed information that can aid log processors in improving lumber recovery. However, scanner data (i.e. tomographic images) need to be analyzed prior to presentation to saw operators. Automatic labeling of computer tomography (CT) images is...
An Investigation of Data Overload in Team-Based Distributed Cognition Systems
ERIC Educational Resources Information Center
Hellar, David Benjamin
2009-01-01
The modern military command center is a hybrid system of computer automated surveillance and human oriented decision making. In these distributed cognition systems, data overload refers simultaneously to the glut of raw data processed by information technology systems and the dearth of actionable knowledge useful to human decision makers.…
1981-04-28
on initial doses. Residual doses are determined through an automated procedure that utilizes raw data in regression analyses to fit space-time models...show their relationship to the observer positions. The computer-calculated doses do not reflect the presence of the human body in the radiological
EOS MLS Level 1B Data Processing, Version 2.2
NASA Technical Reports Server (NTRS)
Perun, Vincent; Jarnot, Robert; Pickett, Herbert; Cofield, Richard; Schwartz, Michael; Wagner, Paul
2009-01-01
A computer program performs level-1B processing (the term 1B is explained below) of data from observations of the limb of the Earth by the Earth Observing System (EOS) Microwave Limb Sounder (MLS), which is an instrument aboard the Aura spacecraft. This software accepts, as input, the raw EOS MLS scientific and engineering data and the Aura spacecraft ephemeris and attitude data. Its output consists of calibrated instrument radiances and associated engineering and diagnostic data. [This software is one of several computer programs, denoted product generation executives (PGEs), for processing EOS MLS data. Starting from level 0 (representing the aforementioned raw data), the PGEs and their data products are denoted by alphanumeric labels (e.g., 1B and 2) that signify the successive stages of processing.] At the time of this reporting, this software is at version 2.2 and incorporates improvements over a prior version that make the code more robust, improve calibration, provide more diagnostic outputs, improve the interface with the Level 2 PGE, and effect a 15-percent reduction in file sizes by use of data compression.
Integrating the Allen Brain Institute Cell Types Database into Automated Neuroscience Workflow.
Stockton, David B; Santamaria, Fidel
2017-10-01
We developed software tools to download, extract features, and organize the Cell Types Database from the Allen Brain Institute (ABI) in order to integrate its whole cell patch clamp characterization data into the automated modeling/data analysis cycle. To expand the potential user base we employed both Python and MATLAB. The basic set of tools downloads selected raw data and extracts cell, sweep, and spike features, using ABI's feature extraction code. To facilitate data manipulation we added a tool to build a local specialized database of raw data plus extracted features. Finally, to maximize automation, we extended our NeuroManager workflow automation suite to include these tools plus a separate investigation database. The extended suite allows the user to integrate ABI experimental and modeling data into an automated workflow deployed on heterogeneous computer infrastructures, from local servers, to high performance computing environments, to the cloud. Since our approach is focused on workflow procedures our tools can be modified to interact with the increasing number of neuroscience databases being developed to cover all scales and properties of the nervous system.
Item response theory analysis of the mechanics baseline test
NASA Astrophysics Data System (ADS)
Cardamone, Caroline N.; Abbott, Jonathan E.; Rayyan, Saif; Seaton, Daniel T.; Pawl, Andrew; Pritchard, David E.
2012-02-01
Item response theory is useful in both the development and evaluation of assessments and in computing standardized measures of student performance. In item response theory, individual parameters (difficulty, discrimination) for each item or question are fit by item response models. These parameters provide a means for evaluating a test and offer a better measure of student skill than a raw test score, because each skill calculation considers not only the number of questions answered correctly, but the individual properties of all questions answered. Here, we present the results from an analysis of the Mechanics Baseline Test given at MIT during 2005-2010. Using the item parameters, we identify questions on the Mechanics Baseline Test that are not effective in discriminating between MIT students of different abilities. We show that a limited subset of the highest quality questions on the Mechanics Baseline Test returns accurate measures of student skill. We compare student skills as determined by item response theory to the more traditional measurement of the raw score and show that a comparable measure of learning gain can be computed.
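A minimal sketch of the two-parameter logistic (2PL) item response model referred to above: compute item response probabilities from hypothetical discrimination and difficulty parameters and obtain a maximum-likelihood ability estimate for one response pattern. The item parameters are invented, not Mechanics Baseline Test values.

```python
# Minimal sketch: 2PL item response probabilities and a maximum-likelihood ability
# estimate given known (here, made-up) item parameters.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 0.4, 1.0])    # item discriminations (hypothetical)
b = np.array([-1.0, 0.0, 0.5, 1.0, 1.5])   # item difficulties (hypothetical)
responses = np.array([1, 1, 1, 0, 0])      # one student's right/wrong pattern

def p_correct(theta):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def neg_log_likelihood(theta):
    p = p_correct(theta)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

theta_hat = minimize_scalar(neg_log_likelihood, bounds=(-4, 4), method="bounded").x
print(f"raw score = {responses.sum()} / {len(responses)}, IRT ability estimate = {theta_hat:.2f}")
```

Unlike the raw score, the ability estimate weights each response by the item's discrimination and difficulty, which is why a subset of high-quality items can return an accurate skill measure.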
Analysis of metabolomics datasets with high-performance computing and metabolite atlases
Yao, Yushu; Sun, Terence; Wang, Tony; ...
2015-07-20
Even with the widespread use of liquid chromatography mass spectrometry (LC/MS) based metabolomics, there are still a number of challenges facing this promising technique. Many, diverse experimental workflows exist; yet there is a lack of infrastructure and systems for tracking and sharing of information. Here, we describe the Metabolite Atlas framework and interface that provides highly efficient, web-based access to raw mass spectrometry data in concert with assertions about chemicals detected to help address some of these challenges. This integration, by design, enables experimentalists to explore their raw data and to specify and refine feature annotations such that they can be leveraged for future experiments. Fast queries of the data through the web using SciDB, a parallelized database for high performance computing, make this process operate quickly. Furthermore, by using scripting containers, such as IPython or Jupyter, to analyze the data, scientists can utilize a wide variety of freely available graphing, statistics, and information management resources. In addition, the interfaces facilitate integration with systems biology tools to ultimately link metabolomics data with biological models.
Description of a MIL-STD-1553B Data Bus Ada Driver for the LeRC EPS Testbed
NASA Technical Reports Server (NTRS)
Mackin, Michael A.
1995-01-01
This document describes the software designed to provide communication between control computers in the NASA Lewis Research Center Electrical Power System Testbed using MIL-STD-1553B. The software drivers are coded in the Ada programming language and were developed on an MSDOS-based computer workstation. The Electrical Power System (EPS) Testbed is a reduced-scale prototype space station electrical power system. The power system manages and distributes electrical power from the sources (batteries or photovoltaic arrays) to the end-user loads. The primary electrical system operates at 120 volts DC, and the secondary system operates at 28 volts DC. The devices which direct the flow of electrical power are controlled by a network of six control computers. Data and control messages are passed between the computers using the MIL-STD-1553B network. One of the computers, the Power Management Controller (PMC), controls the primary power distribution and another, the Load Management Controller (LMC), controls the secondary power distribution. Each of these computers communicates with two other computers which act as subsidiary controllers. These subsidiary controllers are, in turn, connected to the devices which directly control the flow of electrical power.
Software to Control and Monitor Gas Streams
NASA Technical Reports Server (NTRS)
Arkin, C.; Curley, Charles; Gore, Eric; Floyd, David; Lucas, Damion
2012-01-01
This software package interfaces with various gas stream devices such as pressure transducers, flow meters, flow controllers, valves, and analyzers such as a mass spectrometer. The software provides excellent user interfacing with various windows that provide time-domain graphs, valve state buttons, priority-colored messages, and warning icons. The user can configure the software to save as much or as little data as needed to a comma-delimited file. The software also includes an intuitive scripting language for automated processing. The configuration allows for the assignment of measured values or calibration so that raw signals can be viewed as usable pressures, flows, or concentrations in real time. The software is based on those used in two safety systems for shuttle processing and one volcanic gas analysis system. Mass analyzers typically have very unique applications and vary from job to job. As such, software available on the market is usually inadequate or targeted on a specific application (such as EPA methods). The goal was to develop powerful software that could be used with prototype systems. The key problem was to generalize the software to be easily and quickly reconfigurable. At Kennedy Space Center (KSC), the prior art consists of two primary methods. The first method was to utilize LabVIEW and a commercial data acquisition system. This method required rewriting code for each different application and only provided raw data. To obtain data in engineering units, manual calculations were required. The second method was to utilize one of the embedded computer systems developed for another system. This second method had the benefit of providing data in engineering units, but was limited in the number of control parameters.
The Value of Humans in the Operational River Forecasting Enterprise
NASA Astrophysics Data System (ADS)
Pagano, T. C.
2012-04-01
The extent of human control over operational river forecasts, such as by adjusting model inputs and outputs, varies from nearly completely automated systems to those where forecasts are generated after discussion among a group of experts. Historical and realtime data availability, the complexity of hydrologic processes, forecast user needs, and forecasting institution support/resource availability (e.g. computing power, money for model maintenance) influence the character and effectiveness of operational forecasting systems. Automated data quality algorithms, if used at all, are typically very basic (e.g. checks for impossible values); substantial human effort is devoted to cleaning up forcing data using subjective methods. Similarly, although it is an active research topic, nearly all operational forecasting systems struggle to make quantitative use of Numerical Weather Prediction model-based precipitation forecasts, instead relying on the assessment of meteorologists. Conversely, while there is a strong tradition in meteorology of making raw model outputs available to forecast users via the Internet, this is rarely done in hydrology; Operational river forecasters express concerns about exposing users to raw guidance, due to the potential for misinterpretation and misuse. However, this limits the ability of users to build their confidence in operational products through their own value-added analyses. Forecasting agencies also struggle with provenance (i.e. documenting the production process and archiving the pieces that went into creating a forecast) although this is necessary for quantifying the benefits of human involvement in forecasting and diagnosing weak links in the forecasting chain. In hydrology, the space between model outputs and final operational products is nearly unstudied by the academic community, although some studies exist in other fields such as meteorology.
The GLORIE Campaign: Assessment of the Capabilities of Airborne GNSS-R for Land Remote Sensing.
NASA Astrophysics Data System (ADS)
Mangiarotti, S.; Motte, E.; Zribi, M., Sr.; Fanise, P., Sr.
2015-12-01
In June and July 2015 an intensive flight campaign was conducted over the southwest of France to test the sensitivity of Global Navigation Satellite System Reflectometry (GNSS-R) to the geophysical parameters of continental surfaces. Namely, the parameters of interest were soil moisture, soil roughness, plant water content, forest biomass, and the level of inland water bodies and rivers. We used the GLORI polarimetric GNSS-R instrument, collecting raw 10MSPS 2-bit IQ direct (RHCP, zenith) and reflected (RHCP and LHCP, nadir) signals at GPS L1 frequency aboard the ATR-42 aircraft of the SAFIRE fleet. Simultaneous measurements of aircraft attitude and position were recorded. The flight plan included flyovers of several areas of interest, with collocated ground truth measurements of soil moisture, soil roughness, cultivated biomass, and forest biomass. Flyovers of ponds, lakes and rivers were also included for power calibration and altimetry retrievals. In total, 6 flights were performed between June 19th and July 6th, representing more than 15 hours of raw data. A conventional GNSS-R processing of the data was performed in order to compute the direct and reflected complex waveforms. A preliminary data analysis based on the variations of the ratio of the reflected maximum correlation amplitude in the LHCP antenna to the direct maximum correlation amplitude shows measurement sensitivity to soil type, land use, and incidence angle. Also, first altimetric retrievals using phase-delay techniques show very promising results over calm waters. Current work is ongoing in order to fit the observed polarimetric measurements with innovative bistatic scattering models capable of taking into account complex geometries and land use configurations.
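A sketch of the peak-ratio observable mentioned above, using synthetic correlation waveforms; all amplitudes, lags, and look counts are made up. The reflected LHCP correlation peak is ratioed to the direct RHCP peak for each incoherent look.

```python
# Minimal sketch (synthetic waveforms): ratio of the reflected (LHCP) correlation
# peak to the direct (RHCP) correlation peak, averaged over several incoherent looks.
import numpy as np

rng = np.random.default_rng(4)
n_looks, n_lags = 200, 64
direct = np.abs(rng.normal(10.0, 0.5, (n_looks, n_lags)))      # stand-in direct waveforms
direct[:, 30] += 40.0                                            # direct correlation peak
reflected = np.abs(rng.normal(2.0, 0.5, (n_looks, n_lags)))     # stand-in reflected waveforms
reflected[:, 33] += 12.0                                         # weaker, delayed surface-reflection peak

ratio = reflected.max(axis=1) / direct.max(axis=1)               # per-look peak amplitude ratio
print(f"mean reflected/direct peak ratio = {ratio.mean():.3f} "
      f"({10 * np.log10(ratio.mean() ** 2):.1f} dB in power)")
```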
McAllister, Patrick; Zheng, Huiru; Bond, Raymond; Moorhead, Anne
2018-04-01
Obesity is increasing worldwide and can cause many chronic conditions such as type-2 diabetes, heart disease, sleep apnea, and some cancers. Monitoring dietary intake through food logging is a key method to maintain a healthy lifestyle to prevent and manage obesity. Computer vision methods have been applied to food logging to automate image classification for monitoring dietary intake. In this work we applied pretrained ResNet-152 and GoogleNet convolutional neural networks (CNNs), initially trained using the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset with the MatConvNet package, to extract features from food image datasets: Food-5K, Food-11, RawFooT-DB, and Food-101. Deep features were extracted from the CNNs and used to train machine learning classifiers including artificial neural network (ANN), support vector machine (SVM), Random Forest, and Naive Bayes. Results show that ResNet-152 deep features with an SVM with RBF kernel can accurately detect food items with 99.4% accuracy on the Food-5K validation food image dataset, and with 98.8% accuracy on the Food-5K evaluation dataset using ANN, SVM-RBF, and Random Forest classifiers. Trained with ResNet-152 features, an ANN can achieve 91.34% and 99.28% accuracy when applied to the Food-11 and RawFooT-DB food image datasets, respectively, and an SVM with RBF kernel can achieve 64.98% with the Food-101 image dataset. From this research it is clear that deep CNN features can be used efficiently for diverse food item image classification. The work presented in this research shows that pretrained ResNet-152 features provide sufficient generalisation power when applied to a range of food image classification tasks. Copyright © 2018 Elsevier Ltd. All rights reserved.
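A minimal sketch of the feature-extraction-plus-SVM pipeline described above, here using torchvision rather than the MatConvNet package used in the paper: a pretrained ResNet-152 with its classifier removed yields 2048-dimensional features that feed an RBF-kernel SVM. The dataset loader is a placeholder, and the exact pretrained/weights argument depends on the installed torchvision version.

```python
# Minimal sketch of the pipeline above: extract deep features from a pretrained
# ResNet-152 (via torchvision, not the paper's MatConvNet) and train an RBF-kernel SVM.
# The food-image loader is a placeholder; the weights argument is version dependent
# (older torchvision releases use pretrained=True instead).
import torch
import torchvision
from torchvision import transforms
from sklearn.svm import SVC

model = torchvision.models.resnet152(weights="DEFAULT")   # ImageNet-pretrained backbone
model.fc = torch.nn.Identity()                             # drop the classifier -> 2048-d features
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def extract_features(pil_images):
    """Deep features for a list of PIL images."""
    batch = torch.stack([preprocess(im) for im in pil_images])
    with torch.no_grad():
        return model(batch).numpy()

# train_images / train_labels are assumed to come from a food image dataset loader:
# X_train = extract_features(train_images)
# clf = SVC(kernel="rbf").fit(X_train, train_labels)
# accuracy = clf.score(extract_features(test_images), test_labels)
```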
Distributed adaptive diagnosis of sensor faults using structural response data
NASA Astrophysics Data System (ADS)
Dragos, Kosmas; Smarsly, Kay
2016-10-01
The reliability and consistency of wireless structural health monitoring (SHM) systems can be compromised by sensor faults, leading to miscalibrations, corrupted data, or even data loss. Several research approaches towards fault diagnosis, referred to as ‘analytical redundancy’, have been proposed that analyze the correlations between different sensor outputs. In wireless SHM, most analytical redundancy approaches require centralized data storage on a server for data analysis, while other approaches exploit the on-board computing capabilities of wireless sensor nodes, analyzing the raw sensor data directly on board. However, using raw sensor data poses an operational constraint due to the limited power resources of wireless sensor nodes. In this paper, a new distributed autonomous approach towards sensor fault diagnosis based on processed structural response data is presented. The inherent correlations among Fourier amplitudes of acceleration response data, at peaks corresponding to the eigenfrequencies of the structure, are used for diagnosis of abnormal sensor outputs at a given structural condition. Representing an entirely data-driven analytical redundancy approach that does not require any a priori knowledge of the monitored structure or of the SHM system, artificial neural networks (ANN) are embedded into the sensor nodes enabling cooperative fault diagnosis in a fully decentralized manner. The distributed analytical redundancy approach is implemented into a wireless SHM system and validated in laboratory experiments, demonstrating the ability of wireless sensor nodes to self-diagnose sensor faults accurately and efficiently with minimal data traffic. Besides enabling distributed autonomous fault diagnosis, the embedded ANNs are able to adapt to the actual condition of the structure, thus ensuring accurate and efficient fault diagnosis even in case of structural changes.
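A simplified, synthetic-data sketch of the analytical-redundancy idea above: Fourier amplitudes at assumed resonance peaks from two reference sensors are used by a small neural network to predict the monitored sensor's peak amplitudes, and a large residual flags a fault. The frequencies, network size, fault type, and threshold are all assumptions, not details of the deployed system.

```python
# Minimal sketch (synthetic data, simplified): predict one sensor's Fourier peak
# amplitudes from neighbouring sensors with a small ANN; a large residual flags a fault.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
fs, n = 100.0, 2048                       # sampling rate [Hz] and samples per record (assumed)
eigenfreqs = [2.0, 5.5, 9.0]              # assumed structural eigenfrequencies [Hz]
freqs = np.fft.rfftfreq(n, 1.0 / fs)
peak_bins = [int(np.argmin(np.abs(freqs - f))) for f in eigenfreqs]
t = np.arange(n) / fs

def peak_amplitudes(acc):
    return np.abs(np.fft.rfft(acc))[peak_bins]

def record(level, gain=1.0):
    modes = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip([1.0, 0.6, 0.3], eigenfreqs))
    return gain * level * modes + rng.normal(0, 0.1, n)

def sample(gain=1.0):
    level = rng.uniform(0.5, 1.5)                       # shared excitation level across sensors
    refs = np.concatenate([peak_amplitudes(record(level)) for _ in range(2)])
    return refs, peak_amplitudes(record(level, gain))

train = [sample() for _ in range(200)]
X, y = np.array([s[0] for s in train]), np.array([s[1] for s in train])
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X, y)

refs, target = sample(gain=0.2)                          # monitored sensor with a sensitivity fault
residual = np.linalg.norm(net.predict(refs.reshape(1, -1))[0] - target)
print(f"fault residual = {residual:.2f} (flag when above an assumed threshold)")
```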
NanoXCT: a novel technique to probe the internal architecture of pharmaceutical particles.
Wong, Jennifer; D'Sa, Dexter; Foley, Matthew; Chan, John Gar Yan; Chan, Hak-Kim
2014-11-01
To demonstrate the novel application of nano X-ray computed tomography (NanoXCT) for visualizing and quantifying the internal structures of pharmaceutical particles. An Xradia NanoXCT-100, which produces ultra high-resolution and non-destructive imaging that can be reconstructed in three dimensions (3D), was used to characterize several pharmaceutical particles. Depending on the particle size of the sample, NanoXCT was operated in Zernike Phase Contrast (ZPC) mode using either: 1) large field of view (LFOV), which has a two-dimensional (2D) spatial resolution of 172 nm; or 2) high resolution (HRES) that has a resolution of 43.7 nm. Various pharmaceutical particles with different physicochemical properties were investigated, including raw (2-hydroxypropyl)-beta-cyclodextrin (HβCD), poly (lactic-co-glycolic) acid (PLGA) microparticles, and spray-dried particles that included smooth and nanomatrix bovine serum albumin (BSA), lipid-based carriers, and mannitol. Both raw HβCD and PLGA microparticles had a network of voids, whereas spray-dried smooth BSA and mannitol generally had a single void. Lipid-based carriers and nanomatrix BSA particles resulted in low quality images due to high noise-to-signal ratio. The quantitative capabilities of NanoXCT were also demonstrated where spray-dried mannitol was found to have an average void volume of 0.117 ± 0.247 μm³ and average void-to-material percentage of 3.5%. The single PLGA particle had values of 1993 μm³ and 59.3%, respectively. This study reports the first series of non-destructive 3D visualizations of inhalable pharmaceutical particles. Overall, NanoXCT presents a powerful tool to dissect and observe the interior of pharmaceutical particles, including those of a respirable size.
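The void metrics quoted above reduce to simple voxel counting once the tomogram is segmented. The sketch below uses a made-up binary volume and voxel size purely to show the computation, not the study's data.

```python
# Minimal sketch: void volume and void-to-material percentage from a segmented 3D
# tomogram. The toy volume and voxel size are hypothetical.
import numpy as np

voxel_um = 0.172                                         # isotropic voxel edge length [um] (assumed)
particle = np.ones((100, 100, 100), dtype=bool)          # True = material inside the particle (toy volume)
particle[40:60, 40:60, 40:60] = False                    # a synthetic internal void

void_voxels = np.count_nonzero(~particle)
material_voxels = np.count_nonzero(particle)
void_volume_um3 = void_voxels * voxel_um ** 3
void_to_material_pct = 100.0 * void_voxels / material_voxels
print(f"void volume = {void_volume_um3:.2f} um^3, void/material = {void_to_material_pct:.1f}%")
```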
Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter
2015-01-20
While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.
An integrated software system for geometric correction of LANDSAT MSS imagery
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Esilva, A. J. F. M.; Camara-Neto, G.; Serra, P. R. M.; Desousa, R. C. M.; Mitsuo, Fernando Augusta, II
1984-01-01
A system for geometrically correcting LANDSAT MSS imagery includes all phases of processing, from receiving a raw computer compatible tape (CCT) to the generation of a corrected CCT (or UTM mosaic). The system comprises modules for: (1) control of the processing flow; (2) calculation of satellite ephemeris and attitude parameters; (3) generation of uncorrected files from raw CCT data; (4) creation, management and maintenance of a ground control point library; (5) determination of the image correction equations, using attitude and ephemeris parameters and existing ground control points; (6) generation of the corrected LANDSAT file, using the equations determined beforehand; (7) union of LANDSAT scenes to produce a UTM mosaic; and (8) generation of output tape, in super-structure format.
Automatic intrinsic cardiac and respiratory gating from cone-beam CT scans of the thorax region
NASA Astrophysics Data System (ADS)
Hahn, Andreas; Sauppe, Sebastian; Lell, Michael; Kachelrieß, Marc
2016-03-01
We present a new algorithm that allows for raw data-based automated cardiac and respiratory intrinsic gating in cone-beam CT scans. It can be summarized in three steps. First, a median filter is applied to an initially reconstructed volume. The forward projection of this volume contains less motion information and is subtracted from the original projections. This results in new raw data that contain only the moving anatomy, and not static anatomy such as bones that would otherwise impede acquisition of the cardiac or respiratory signal. All further steps are applied to these modified raw data. Second, the raw data are cropped to a region of interest (ROI). The ROI in the raw data is determined by the forward projection of a binary volume of interest (VOI) that includes the diaphragm for respiratory gating and most of the edge of the heart for cardiac gating. Third, the mean gray value in this ROI is calculated for every projection and the respiratory/cardiac signal is acquired using a bandpass filter. Steps two and three are carried out simultaneously for 64 or 1440 overlapping VOIs inside the body for the respiratory or cardiac signal, respectively. The signals acquired from each ROI are compared and the most consistent one is chosen as the desired cardiac or respiratory motion signal. Consistency is assessed by the standard deviation of the time between two maxima. The robustness and efficiency of the method are evaluated using simulated and measured patient data by computing the standard deviation of the mean signal difference between the ground truth and the intrinsic signal.
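A minimal sketch of the three-step signal extraction described above, written in Python with NumPy/SciPy. The forward_project and reconstruct callables, the ROI mask, and the filter band are placeholders standing in for a real cone-beam CT pipeline; they are assumptions for illustration, not the authors' implementation.

import numpy as np
from scipy.signal import butter, filtfilt, medfilt

def intrinsic_gating_signal(projections, forward_project, reconstruct,
                            roi_mask, band_hz, frames_per_second):
    # Step 1: suppress static anatomy. Median-filter an initial reconstruction,
    # forward-project it, and subtract it from the measured projections.
    volume = reconstruct(projections)
    static_volume = medfilt(volume, kernel_size=3)
    residual = projections - forward_project(static_volume)

    # Step 2: restrict each projection to the ROI given by the forward projection
    # of a binary VOI (diaphragm for respiration, heart edge for cardiac gating).
    roi_values = residual[:, roi_mask]            # (n_projections, n_roi_pixels)

    # Step 3: mean gray value per projection, then band-pass to isolate motion.
    mean_signal = roi_values.mean(axis=1)
    b, a = butter(3, list(band_hz), btype="band", fs=frames_per_second)
    return filtfilt(b, a, mean_signal - mean_signal.mean())

def most_consistent(signals):
    # Pick the candidate whose peak-to-peak intervals vary least (smallest
    # standard deviation of the time between two maxima).
    def interval_std(s):
        peaks = np.where((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]))[0] + 1
        return np.std(np.diff(peaks)) if len(peaks) > 2 else np.inf
    return min(signals, key=interval_std)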
A Study on Cost Allocation in Nuclear Power Coupled with Desalination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, ManKi; Kim, SeungSu; Moon, KeeHwan
As for a single-purpose desalination plant, there is no particular difficulty in computing the unit cost of the water, which is obtained by dividing the annual total costs by the output of fresh water. When it comes to a dual-purpose plant, cost allocation is needed between the two products. No cost allocation is needed in some cases where two alternatives producing the same water and electricity output are to be compared; in these cases, consideration of the total cost is sufficient. This study assumes MED (Multi-Effect Distillation) technology is adopted when nuclear power is coupled with desalination. The total production cost of the two commodities in a dual-purpose plant can easily be obtained by using costing methods, if the necessary raw data are available. However, it is not easy to calculate a separate cost for each product, because high-pressure steam plant costs cannot be allocated to one or the other without adopting arbitrary methods. An investigation of the power credit method is carried out, focusing on the allocation of the combined benefits of dual production of electricity and water. The illustrative calculation is taken from the Preliminary Economic Feasibility Study of Nuclear Desalination in Madura Island, Indonesia. That study has been performed by BATAN (National Nuclear Energy Agency) and KAERI (Korea Atomic Energy Research Institute), with support from the IAEA (International Atomic Energy Agency), starting in 2002, in order to assess the preliminary economic feasibility of providing the Madurese with sufficient power and potable water for the public and to support industrialization and tourism in the Madura Region. The SMART reactor coupled with MED is considered to be an option to produce electricity and potable water. This study indicates that the correct recognition of the combined benefits attributable to dual production is important when assessing the economics of desalination coupled with nuclear power. (authors)
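For orientation, a back-of-the-envelope sketch of the power credit method discussed above: the electricity co-product is credited at the cost of a reference single-purpose power plant, and the remainder of the dual-purpose plant's annual cost is assigned to the water. All function names and numbers are illustrative, not values from the BATAN/KAERI study.

def water_cost_power_credit(total_annual_cost_dual,       # $/yr, dual-purpose plant
                            electricity_sent_out_kwh,     # kWh/yr
                            reference_power_cost_per_kwh, # $/kWh, single-purpose plant
                            water_output_m3):             # m3/yr
    # Credit the electricity at the cost of an equivalent single-purpose plant,
    # then allocate what remains to the desalted water.
    power_credit = electricity_sent_out_kwh * reference_power_cost_per_kwh
    return (total_annual_cost_dual - power_credit) / water_output_m3

# Hypothetical inputs: $120M/yr total cost, 2 TWh/yr sent out, 4.5 cents/kWh
# reference power cost, 40 million m3/yr of water -> about $0.75/m3.
print(water_cost_power_credit(120e6, 2.0e9, 0.045, 40e6))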
MIRO Continuum Calibration for Asteroid Mode
NASA Technical Reports Server (NTRS)
Lee, Seungwon
2011-01-01
MIRO (Microwave Instrument for the Rosetta Orbiter) is a lightweight, uncooled, dual-frequency heterodyne radiometer. MIRO encountered asteroid Steins in 2008, and during the flyby, MIRO used the Asteroid Mode to measure the emission spectrum of Steins. The Asteroid Mode is one of the seven modes of MIRO operation, and is designed to increase the length of time that a spectral line is in the MIRO pass-band during a flyby of an object. This software is used to calibrate the continuum measurement of Steins' emission power during the asteroid flyby. The MIRO raw measurement data need to be calibrated in order to obtain physically meaningful data. This software calibrates the MIRO raw measurements in digital units to the brightness temperature in kelvin. The software uses two calibration sequences that are included in the Asteroid Mode. One sequence is at the beginning of the mode, and the other at the end. The first six frames contain the measurement of a cold calibration target, while the last six frames measure a warm calibration target. The targets have known temperatures and are used to provide reference power and gain, which can be used to convert MIRO measurements into brightness temperature. The software was developed to calibrate MIRO continuum measurements from the Asteroid Mode. The software determines the relationship between the raw digital unit measured by MIRO and the equivalent brightness temperature by analyzing data from the calibration frames. The derived relationship is applied to non-calibration frames, which are the measurements of an object of interest such as asteroids and other planetary objects that MIRO encounters during its operation. The software characterizes the gain fluctuations statistically and determines which method to use to estimate the gain between calibration frames. For example, if the fluctuation is lower than a statistically significant level, the averaging method is used to estimate the gain between the calibration frames. If the fluctuation is found to be statistically significant, a linear interpolation of gain and reference power is used to estimate the gain between the calibration frames.
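A minimal sketch of the two-point calibration logic described above: the cold and warm reference frames give a gain (counts per kelvin) and offset, and the gain applied to the science frames is either the average of the two estimates or a linear interpolation between them, depending on how strongly the gain fluctuates. Variable names and the fluctuation test are illustrative, not the flight software.

import numpy as np

def calibrate_continuum(raw_counts, cal_start, cal_end, t_cold, t_warm,
                        drift_threshold=0.01):
    # raw_counts: 1-D array of continuum counts for all frames in the mode.
    # cal_start / cal_end: (cold_counts, warm_counts) measured in the first and
    # last six frames; t_cold / t_warm: known target temperatures in kelvin.
    def gain_offset(cold, warm):
        gain = (warm - cold) / (t_warm - t_cold)       # counts per kelvin
        offset = cold - gain * t_cold                  # counts at 0 K
        return gain, offset

    g0, o0 = gain_offset(*cal_start)
    g1, o1 = gain_offset(*cal_end)
    n = len(raw_counts)

    if abs(g1 - g0) / abs(g0) < drift_threshold:
        # Fluctuation not significant: use the averaged gain and offset.
        gain, offset = np.full(n, (g0 + g1) / 2), np.full(n, (o0 + o1) / 2)
    else:
        # Significant drift: interpolate gain and reference level linearly in time.
        gain, offset = np.linspace(g0, g1, n), np.linspace(o0, o1, n)

    return (raw_counts - offset) / gain                # brightness temperature, K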
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, C.M.; DeWall, R.A.; Ljubicic, B.R.
1994-03-01
Yugoslavia's interest in lignite-water fuel (LWF) stems from its involvement in an unusual power project at Kovin in northern Serbia. In the early 1980s, Electric Power of Serbia (EPS) proposed constructing a 600-MW power plant that would be fueled by lignite found in deposits along and under the Danube River. Trial underwater mining at Kovin proved that the dredging operation is feasible. The dredging method produces a coal slurry containing 85% to 90% water. Plans included draining the water from the coal, drying it, and then burning it in the pulverized coal plant. In looking for alternative ways to utilize the "wet coal" in a more efficient and economical way, a consortium of Yugoslavian companies agreed to assess the conversion of dredged lignite into an LWF using hot-water-drying (HWD) technology. HWD is a high-temperature, nonevaporative drying technique carried out under high pressure in water that permanently alters the structure of low-rank coals. Changes effected by the drying process include irreversible removal of moisture, micropore sealing by tar, and enhancement of heating value by removal of oxygen, and thus enhancement of the coal's ability to form a slurry with water. Physical cleaning results indicated a 51 wt % reduction in ash content with a 76 wt % yield for the lignite. In addition, physical cleaning produced a cleaned slurry that had a higher attainable solids loading than a raw uncleaned coal slurry. Combustion studies were then performed on the raw and physically cleaned samples, with the results indicating that both samples were very reactive, making them excellent candidates for HWD. Bench-scale results showed that HWD increased the energy densities of the two raw lignite samples by approximately 63% and 81%. An order-of-magnitude cost estimate was conducted to evaluate the HWD and pipeline transport of Kovin LWF to domestic and export European markets. Results are described.
Building a base map with AutoCAD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flarity, S.J.
1989-12-01
The fundamental step in the exploration process is building a base map. Consequently, any serious computer exploration program should be capable of providing base maps. Data used in constructing base maps are available from commercial sources such as Tobin and Petroleum Information. These data sets include line and well data, the line data being latitude-longitude vectors and the well data being identifying text information for wells and their locations. AutoCAD is a commercial program useful in building base maps. Its features include infinite zoom and pan capability, layering, block definition, text dialog boxes, and a command language, AutoLisp. AutoLisp provides more power by allowing the geologist to modify the way the program works. Three AutoLisp routines presented here allow geologists to construct a geologic base map from raw Tobin data. The first program, WELLS.LSP, sets up the map environment for the subsequent programs, WELLADD.LSP and LINEADD.LSP. Welladd.lisp reads the Tobin data and spots the well symbols and the identifying information. Lineadd.lsp performs the same task on line and textual information contained within the data set.
Software engineering the mixed model for genome-wide association studies on large samples.
Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J
2009-11-01
Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
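A minimal sketch of the kind of mixed-model scan described above, under the common shortcut of estimating the variance components once and reusing them for every marker; the kinship matrix, phenotypes, covariates and genotypes are placeholder arrays, and the variance components are assumed to be supplied rather than estimated here.

import numpy as np

def mixed_model_scan(y, X, G, K, sigma_g2, sigma_e2):
    # y: phenotypes (n,); X: fixed covariates (n, p); G: marker genotypes (n, m);
    # K: kinship (n, n); sigma_g2 / sigma_e2: pre-estimated variance components.
    V = sigma_g2 * K + sigma_e2 * np.eye(len(y))
    L = np.linalg.cholesky(V)
    # Whitening by L**-1 turns generalized least squares into ordinary least squares.
    yw, Xw = np.linalg.solve(L, y), np.linalg.solve(L, X)
    t_stats = []
    for j in range(G.shape[1]):
        A = np.column_stack([Xw, np.linalg.solve(L, G[:, j])])
        beta, res, *_ = np.linalg.lstsq(A, yw, rcond=None)
        dof = len(y) - A.shape[1]
        s2 = res[0] / dof if res.size else np.nan
        se = np.sqrt(s2 * np.linalg.inv(A.T @ A)[-1, -1])
        t_stats.append(beta[-1] / se)                  # test statistic for the marker
    return np.array(t_stats)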
Factors shaping the evolution of electronic documentation systems
NASA Technical Reports Server (NTRS)
Dede, Christopher J.; Sullivan, Tim R.; Scace, Jacque R.
1990-01-01
The main goal is to prepare the space station technical and managerial structure for likely changes in the creation, capture, transfer, and utilization of knowledge. By anticipating advances, the design of Space Station Project (SSP) information systems can be tailored to facilitate a progression of increasingly sophisticated strategies as the space station evolves. Future generations of advanced information systems will use increases in power to deliver environmentally meaningful, contextually targeted, interconnected data (knowledge). The concept of a Knowledge Base Management System is emerging when the problem is focused on how information systems can perform such a conversion of raw data. Such a system would include traditional management functions for large space databases. Added artificial intelligence features might encompass co-existing knowledge representation schemes; effective control structures for deductive, plausible, and inductive reasoning; means for knowledge acquisition, refinement, and validation; explanation facilities; and dynamic human intervention. The major areas covered include: alternative knowledge representation approaches; advanced user interface capabilities; computer-supported cooperative work; the evolution of information system hardware; standardization, compatibility, and connectivity; and organizational impacts of information intensive environments.
CARD 2017: expansion and model-centric curation of the comprehensive antibiotic resistance database
Jia, Baofeng; Raphenya, Amogelang R.; Alcock, Brian; Waglechner, Nicholas; Guo, Peiyao; Tsang, Kara K.; Lago, Briony A.; Dave, Biren M.; Pereira, Sheldon; Sharma, Arjun N.; Doshi, Sachin; Courtot, Mélanie; Lo, Raymond; Williams, Laura E.; Frye, Jonathan G.; Elsayegh, Tariq; Sardar, Daim; Westman, Erin L.; Pawlowski, Andrew C.; Johnson, Timothy A.; Brinkman, Fiona S.L.; Wright, Gerard D.; McArthur, Andrew G.
2017-01-01
The Comprehensive Antibiotic Resistance Database (CARD; http://arpcard.mcmaster.ca) is a manually curated resource containing high quality reference data on the molecular basis of antimicrobial resistance (AMR), with an emphasis on the genes, proteins and mutations involved in AMR. CARD is ontologically structured, model centric, and spans the breadth of AMR drug classes and resistance mechanisms, including intrinsic, mutation-driven and acquired resistance. It is built upon the Antibiotic Resistance Ontology (ARO), a custom built, interconnected and hierarchical controlled vocabulary allowing advanced data sharing and organization. Its design allows the development of novel genome analysis tools, such as the Resistance Gene Identifier (RGI) for resistome prediction from raw genome sequence. Recent improvements include extensive curation of additional reference sequences and mutations, development of a unique Model Ontology and accompanying AMR detection models to power sequence analysis, new visualization tools, and expansion of the RGI for detection of emergent AMR threats. CARD curation is updated monthly based on an interplay of manual literature curation, computational text mining, and genome analysis. PMID:27789705
Curvelet-based compressive sensing for InSAR raw data
NASA Astrophysics Data System (ADS)
Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David
2015-10-01
The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications collected by the airborne BRADAR (Brazilian SAR system operating in X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. For this framework a real-time capability is desirable, in which the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, sparse unknown signals can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms; the data volume representing the original signal can therefore be significantly reduced. To achieve a sparse representation of the SAR signal, a curvelet transform was applied. Curvelets constitute a directional frame that allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 x 2048 samples in range and azimuth was available in X-band with 2 m resolution. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband. An iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was then adjusted to recover the curvelet coefficients and then the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed; because interferometry applications require greater reconstruction accuracy, phase parameters such as the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects. All results were analyzed in terms of sparsity to verify that the compression and recovery quality are appropriate for InSAR applications, thereby demonstrating the feasibility of compressive sensing for this purpose.
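A minimal sketch of the iterative soft-thresholding (IST) recovery loop used in this kind of CS reconstruction. The measurement operator, its adjoint, and the sparsifying transform pair are passed in as callables (curvelets in the paper; any frame here), and the step size, threshold and iteration count are illustrative.

import numpy as np

def ist_recover(y, A, At, T, Tinv, lam, step, n_iter=200):
    # y: compressive measurements; A / At: measurement operator and its adjoint;
    # T / Tinv: sparsifying transform and its inverse; lam: soft threshold.
    x = At(y)                                          # initial back-projection
    for _ in range(n_iter):
        # Gradient step on the data-fidelity term ||y - A(x)||^2.
        x = x + step * At(y - A(x))
        # Shrink the transform coefficients to promote sparsity.
        c = T(x)
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
        x = Tinv(c)
    return x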
"Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2009-01-01
Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
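For orientation, a minimal sketch of the kind of computation such power tables encode, here for a balanced two-arm cluster-randomized design with J clusters in total, n units per cluster, intraclass correlation rho and standardized effect delta. A normal approximation replaces the exact noncentral t, so the numbers are illustrative rather than table-exact.

from scipy.stats import norm

def power_two_level(delta, J, n, rho, alpha=0.05):
    # The design effect 1 + (n - 1) * rho inflates the variance of cluster means.
    lam = delta * ((J * n / 4.0) / (1.0 + (n - 1) * rho)) ** 0.5   # noncentrality
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    return 1.0 - norm.cdf(z_crit - lam)

# Example: 40 clusters of 20, rho = 0.10, effect size 0.30 -> power around 0.7.
print(round(power_two_level(delta=0.30, J=40, n=20, rho=0.10), 3))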
Localization with Sparse Acoustic Sensor Network Using UAVs as Information-Seeking Data Mules
2013-05-01
technique to differentiate among several sources. 2.2. AoA Estimation. AoA Models. The kth of N_AOA AoA sensors produces an angular measurement modeled...squares sense: θ̂ = arg min_φ Σ_{i=1}^{3} (τ̂_{i0} − e_φᵀ r_i)²  (9). The minimization was done by gridding the one-dimensional angular space and finding the optimum...Latitude E5500 laptop running FreeBSD and custom Java applications to process and store the raw audio signals. Power Source: The laptop was powered for an
Heat exchanger for fuel cell power plant reformer
Misage, Robert; Scheffler, Glenn W.; Setzer, Herbert J.; Margiott, Paul R.; Parenti, Jr., Edmund K.
1988-01-01
A heat exchanger uses the heat from processed fuel gas from a reformer for a fuel cell to superheat steam, to preheat raw fuel prior to entering the reformer and to heat a water-steam coolant mixture from the fuel cells. The processed fuel gas temperature is thus lowered to a level useful in the fuel cell reaction. The four temperature adjustments are accomplished in a single heat exchanger with only three heat transfer cores. The heat exchanger is preheated by circulating coolant and purge steam from the power section during startup of the latter.
Coexistence Possibility of Biomass Industries
NASA Astrophysics Data System (ADS)
Jingchun, Sun; Junhu, Hou
This research aims to shed light on the mechanism of agricultural biomass material competition between the power generation and straw pulp industries and its impact on their coexistence. A two-stage game model is established to analyze the competition, including factors such as unit transportation cost and the profit spaces of the firms. The participants in the competition are a biomass supplier, a power plant and a straw pulp plant. From the industrial economics perspective, our analysis shows that raw material competition will lead to a low possibility of coexistence for the two industries based on agricultural residues in a circular collection area.
Diesel Fuel Alternatives for Engines in Civil Works Prime Movers.
1984-09-01
US Army Corps of Engineers, Interim Report E-200, September 1981. Figure 14: Comparison on a power basis of the amount of SOx compounds in the exhaust of a single-cylinder 1360-cc diesel engine operating on raw coal-No. 2 fuel oil slurries.
Wave Energy Prize - 1/20th Testing - CalWave Power Technologies
Scharmen, Wesley
2016-09-09
Data from the 1/20th scale testing completed for the Wave Energy Prize by the CalWave Power Technologies team, including the 1/20th scale test plan, raw test data, video, photos, and data analysis results. The top level objective of the 1/20th scale device testing is to obtain the necessary measurements required for determining Average Climate Capture Width per Characteristic Capital Expenditure (ACE) and the Hydrodynamic Performance Quality (HPQ), key metrics for determining the Wave Energy Prize (WEP) winners.
WIND Toolkit Power Data Site Index
Draxl, Caroline; Mathias-Hodge, Bri
2016-10-19
This spreadsheet contains per-site metadata for the WIND Toolkit sites and serves as an index for the raw data hosted on Globus connect (nrel#globus:/globusro/met_data). Aside from the metadata, per site average power and capacity factor are given. This data was prepared by 3TIER under contract by NREL and is public domain. Authoritative documentation on the creation of the underlying dataset is at: Final Report on the Creation of the Wind Integration National Dataset (WIND) Toolkit and API: http://www.nrel.gov/docs/fy16osti/66189.pdf
Proposal for grid computing for nuclear applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.
2014-02-12
The use of computer clusters for the computational sciences, including computational physics, is vital because it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoiber, Marcus H.; Brown, James B.
This software implements the first base caller for nanopore data that calls bases directly from raw data. The basecRAWller algorithm has two major advantages over current nanopore base calling software: (1) streaming base calling and (2) base calling from information-rich raw signal. The ability to perform truly streaming base calling as signal is received from the sequencer can be very powerful, as this is one of the major advantages of this technology compared to other sequencing technologies. As such, enabling as much streaming potential as possible will be increasingly important as this technology continues to become more widely applied in the biosciences. All other base callers currently employ the Viterbi algorithm, which requires the whole sequence in order to complete the base calling procedure and thus precludes a natural streaming base calling procedure. The other major advantage of the basecRAWller algorithm is the prediction of bases from raw signal, which contains much richer information than the segmented chunks that current algorithms employ. This leads to the potential for much more accurate base calls, which would make this technology much more valuable to all of the growing user base for this technology.
Effect of marination in gravy on the radio frequency and microwave processing properties of beef.
Basaran-Akgul, Nese; Rasco, Barbara A
2015-02-01
Dielectric properties (the dielectric constant (ε') and the dielectric loss factor (ε″)) and the penetration depth of raw eye of round beef Semitendinosus muscle, raw beef marinated in gravy, raw beef cooked in gravy, and gravy alone were determined as a function of temperature (20-130 °C) and frequency (27-1,800 MHz). Both ε' and ε″ values increased as the temperature increased at low frequencies (27 and 40 MHz). At high frequencies (915 and 1,800 MHz), ε' showed a 50% decrease while ε″ increased nearly threefold with increasing temperature in the range from 20 to 130 °C. ε' increased gradually while ε″ increased fivefold when the temperature increased from 20 to 130 °C. Both ε' and ε″ of all samples decreased with increasing frequency. Marinating the beef in gravy dramatically increased the ε″ values, particularly at the lower frequencies. The power penetration depth of all samples decreased with increasing temperature and frequency. These results are expected to provide useful data for modeling dielectric heating processes of marinated muscle food.
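The power penetration depth referred to above is commonly computed from the dielectric constant and loss factor with the standard expression below; the sketch is illustrative and the sample values are hypothetical, not the measured beef data.

import math

C = 299_792_458.0   # speed of light, m/s

def penetration_depth(freq_hz, eps_prime, eps_loss):
    # Depth at which transmitted microwave power falls to 1/e of its surface value:
    # dp = c / (2*pi*f*sqrt(2*eps')) * [sqrt(1 + (eps''/eps')^2) - 1]^(-1/2)
    root = math.sqrt(1.0 + (eps_loss / eps_prime) ** 2) - 1.0
    return C / (2.0 * math.pi * freq_hz * math.sqrt(2.0 * eps_prime * root))

# Hypothetical values at 915 MHz (eps' = 50, eps'' = 20) give roughly 0.02 m.
print(penetration_depth(915e6, 50.0, 20.0))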
Yam, Mun-Li; Abdul Hafid, Sitti Rahma; Cheng, Hwee-Ming; Nesaretnam, Kalanithi
2009-09-01
Tocotrienols are powerful chain-breaking antioxidants. Moreover, they are now known to exhibit various non-antioxidant properties such as anti-cancer, neuroprotective and hypocholesterolemic functions. This study was undertaken to investigate the anti-inflammatory effects of tocotrienol-rich fraction (TRF) and individual tocotrienol isoforms, namely delta-, gamma-, and alpha-tocotrienol, on lipopolysaccharide-stimulated RAW264.7 macrophages. The widely studied vitamin E form, alpha-tocopherol, was used as a comparison. Stimulation of RAW264.7 with lipopolysaccharide induced the release of various inflammatory markers. At 10 μg/ml, TRF and all tocotrienol isoforms significantly inhibited the production of interleukin-6 and nitric oxide. However, only alpha-tocotrienol demonstrated a significant effect in lowering tumor necrosis factor-alpha production. In addition, TRF and all tocotrienol isoforms except gamma-tocotrienol reduced prostaglandin E(2) release. This was accompanied by the down-regulation of cyclooxygenase-2 gene expression by all vitamin E forms except alpha-tocopherol. Collectively, the data suggested that tocotrienols are better anti-inflammatory agents than alpha-tocopherol and that the most effective form is delta-tocotrienol.
Leslie, Kate; Sleigh, Jamie; Paech, Michael J; Voss, Logan; Lim, Chiew Woon; Sleigh, Callum
2009-09-01
Dream recall is reportedly more common after propofol than after volatile anesthesia, but this may be due to delayed emergence or more amnesia after longer-acting volatiles. The electroencephalographic signs of dreaming during anesthesia and the differences between propofol and desflurane also are unknown. The authors therefore compared dream recall after propofol- or desflurane-maintained anesthesia and analyzed electroencephalographic patterns in dreamers and nondreamers and in propofol and desflurane patients for similarities to rapid eye movement and non-rapid eye movement sleep. Three hundred patients presenting for noncardiac surgery were randomized to receive propofol- or desflurane-maintained anesthesia. The raw electroencephalogram was recorded from induction until patients were interviewed about dreaming when they first became oriented postoperatively. Using spectral and ordinal methods, the authors quantified the amount of sleep spindle-like activity and high-frequency power in the electroencephalogram. The incidence of dream recall was similar for propofol (27%) and desflurane (28%) patients. Times to interview were similar (median 20 [range 4-114] vs. 17 [7-86] min; P = 0.1029), but bispectral index values at interview were lower (85 [69-98] vs. 92 [40-98]; P < 0.0001) in propofol than in desflurane patients. During surgery, the raw electroencephalogram of propofol patients showed more and faster spindle activity than that of desflurane patients (P < 0.001). The raw electroencephalogram of dreamers showed fewer spindles and more high-frequency power than that of nondreamers in the 5 min before interview (P < 0.05). Anesthetic-related dreaming seems to occur just before awakening and is associated with a rapid eye movement-like electroencephalographic pattern.
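A minimal sketch of quantifying spindle-band and high-frequency power from a raw EEG segment with Welch's method; the band edges, window length and normalization are illustrative choices, not the authors' exact spectral and ordinal analysis.

import numpy as np
from scipy.signal import welch

def band_powers(eeg, fs, spindle_band=(11.0, 16.0), high_band=(20.0, 45.0)):
    # eeg: 1-D raw EEG segment; fs: sampling rate in Hz.
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))    # 4-second windows
    def rel_power(band):
        lo, hi = band
        sel = (freqs >= lo) & (freqs < hi)
        tot = (freqs >= 0.5) & (freqs < 45.0)
        return np.trapz(psd[sel], freqs[sel]) / np.trapz(psd[tot], freqs[tot])
    return rel_power(spindle_band), rel_power(high_band)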
A novel power converter for photovoltaic applications
NASA Astrophysics Data System (ADS)
Yuvarajan, S.; Yu, Dachuan; Xu, Shanguang
A simple and economical power conditioner to convert the power available from solar panels into 60 Hz ac voltage is described. The raw dc voltage from the solar panels is converted to a regulated dc voltage using a boost converter and a large capacitor and the dc output is then converted to 60 Hz ac using a bridge inverter. The ratio between the load current and the short-circuit current of a PV panel at maximum power point is nearly constant for different insolation (light) levels and this property is utilized in designing a simple maximum power point tracking (MPPT) controller. The controller includes a novel arrangement for sensing the short-circuit current without disturbing the operation of the PV panel and implementing MPPT. The switching losses in the inverter are reduced by using snubbers. The results obtained on an experimental converter are presented.
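A minimal sketch of the fractional short-circuit-current MPPT rule the converter exploits: the short-circuit current is sampled periodically, the current command is a fixed fraction k of it, and the boost converter duty cycle is nudged toward that command. The fraction, step size and sign convention are illustrative and depend on the actual hardware.

def mppt_fractional_isc(read_short_circuit_current, read_panel_current,
                        duty, k=0.92, step=0.005, d_min=0.05, d_max=0.95):
    # One control iteration; k ~ I_mp / I_sc is nearly constant over insolation.
    i_ref = k * read_short_circuit_current()    # target current at the MPP
    i_pv = read_panel_current()
    # Nudge the duty cycle so the panel current tracks i_ref. The sign of the
    # adjustment depends on the converter topology and operating region.
    if i_pv < i_ref:
        duty += step
    elif i_pv > i_ref:
        duty -= step
    return min(max(duty, d_min), d_max)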
Wireless Acoustic Measurement System
NASA Technical Reports Server (NTRS)
Anderson, Paul D.; Dorland, Wade D.; Jolly, Ronald L.
2007-01-01
A prototype wireless acoustic measurement system (WAMS) is one of two main subsystems of the Acoustic Prediction/Measurement Tool, which comprises software, acoustic instrumentation, and electronic hardware combined to afford integrated capabilities for predicting and measuring noise emitted by rocket and jet engines. The other main subsystem is described in the article on page 8. The WAMS includes analog acoustic measurement instrumentation and analog and digital electronic circuitry combined with computer wireless local-area networking to enable (1) measurement of sound-pressure levels at multiple locations in the sound field of an engine under test and (2) recording and processing of the measurement data. At each field location, the measurements are taken by a portable unit, denoted a field station. There are ten field stations, each of which can take two channels of measurements. Each field station is equipped with two instrumentation microphones, a micro-ATX computer, a wireless network adapter, an environmental enclosure, a directional radio antenna, and a battery power supply. The environmental enclosure shields the computer from weather and from extreme acoustically induced vibrations. The power supply is based on a marine-service lead-acid storage battery that has enough capacity to support operation for as long as 10 hours. A desktop computer serves as a control server for the WAMS. The server is connected to a wireless router for communication with the field stations via a wireless local-area network that complies with wireless-network standard 802.11b of the Institute of Electrical and Electronics Engineers. The router and the wireless network adapters are controlled by use of Linux-compatible driver software. The server runs custom Linux software for synchronizing the recording of measurement data in the field stations. The software includes a module that provides an intuitive graphical user interface through which an operator at the control server can control the operations of the field stations for calibration and for recording of measurement data. A test engineer positions and activates the WAMS. The WAMS automatically establishes the wireless network. Next, the engineer performs pretest calibrations. Then the engineer executes the test and measurement procedures. After the test, the raw measurement files are copied and transferred, through the wireless network, to a hard disk in the control server. Subsequently, the data are processed into 1/3-octave spectrograms.
Wireless Acoustic Measurement System
NASA Technical Reports Server (NTRS)
Anderson, Paul D.; Dorland, Wade D.
2005-01-01
A prototype wireless acoustic measurement system (WAMS) is one of two main subsystems of the Acoustic Prediction/Measurement Tool, which comprises software, acoustic instrumentation, and electronic hardware combined to afford integrated capabilities for predicting and measuring noise emitted by rocket and jet engines. The other main subsystem is described in "Predicting Rocket or Jet Noise in Real Time" (SSC-00215-1), which appears elsewhere in this issue of NASA Tech Briefs. The WAMS includes analog acoustic measurement instrumentation and analog and digital electronic circuitry combined with computer wireless local-area networking to enable (1) measurement of sound-pressure levels at multiple locations in the sound field of an engine under test and (2) recording and processing of the measurement data. At each field location, the measurements are taken by a portable unit, denoted a field station. There are ten field stations, each of which can take two channels of measurements. Each field station is equipped with two instrumentation microphones, a micro-ATX computer, a wireless network adapter, an environmental enclosure, a directional radio antenna, and a battery power supply. The environmental enclosure shields the computer from weather and from extreme acoustically induced vibrations. The power supply is based on a marine-service lead-acid storage battery that has enough capacity to support operation for as long as 10 hours. A desktop computer serves as a control server for the WAMS. The server is connected to a wireless router for communication with the field stations via a wireless local-area network that complies with wireless-network standard 802.11b of the Institute of Electrical and Electronics Engineers. The router and the wireless network adapters are controlled by use of Linux-compatible driver software. The server runs custom Linux software for synchronizing the recording of measurement data in the field stations. The software includes a module that provides an intuitive graphical user interface through which an operator at the control server can control the operations of the field stations for calibration and for recording of measurement data. A test engineer positions and activates the WAMS. The WAMS automatically establishes the wireless network. Next, the engineer performs pretest calibrations. Then the engineer executes the test and measurement procedures. After the test, the raw measurement files are copied and transferred, through the wireless network, to a hard disk in the control server. Subsequently, the data are processed into 1/3-octave spectrograms.
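A minimal sketch of the post-processing step mentioned above, reducing a recorded pressure time series to 1/3-octave band levels by summing narrowband FFT power into bands with edges at fc * 2^(+/-1/6). The reference pressure and nominal band centers follow common acoustics practice, but the details are illustrative rather than the WAMS processing code.

import numpy as np

P_REF = 20e-6    # reference pressure, Pa

def third_octave_levels(p, fs):
    # p: sound pressure time series in Pa; fs: sample rate in Hz.
    centers = 1000.0 * 2.0 ** (np.arange(-16, 14) / 3.0)   # ~31.5 Hz to 20 kHz
    spec = np.fft.rfft(p * np.hanning(len(p)))
    freqs = np.fft.rfftfreq(len(p), d=1.0 / fs)
    psd = (np.abs(spec) ** 2) / (fs * len(p))              # rough one-sided PSD
    psd[1:-1] *= 2.0
    levels = []
    for fc in centers:
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)
        band = (freqs >= lo) & (freqs < hi)
        power = np.trapz(psd[band], freqs[band]) if band.any() else 0.0
        levels.append(10 * np.log10(power / P_REF ** 2 + 1e-300))  # dB re 20 uPa
    return centers, np.array(levels)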
INCREASING EVIDENCE FOR HEMISPHERICAL POWER ASYMMETRY IN THE FIVE-YEAR WMAP DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoftuft, J.; Eriksen, H. K.; Hansen, F. K.
Motivated by the recent results of Hansen et al. concerning a noticeable hemispherical power asymmetry in the Wilkinson Microwave Anisotropy Probe (WMAP) data on small angular scales, we revisit the dipole-modulated signal model introduced by Gordon et al. This model assumes that the true cosmic microwave background signal consists of a Gaussian isotropic random field modulated by a dipole, and is characterized by an overall modulation amplitude, A, and a preferred direction, p-hat. Previous analyses of this model have been restricted to very low resolution (i.e., 3.6° pixels, a smoothing scale of 9° FWHM, and l ≲ 40) due to computational cost. In this paper, we double the angular resolution (i.e., 1.8° pixels and 4.5° FWHM smoothing scale), and compute the full corresponding posterior distribution for the five-year WMAP data. The results from our analysis are the following: the best-fit modulation amplitude for l ≤ 64 and the ILC data with the WMAP KQ85 sky cut is A = 0.072 ± 0.022, nonzero at 3.3σ, and the preferred direction points toward Galactic coordinates (l, b) = (224°, −22°) ± 24°. The corresponding results for l ≲ 40 from earlier analyses were A = 0.11 ± 0.04 and (l, b) = (225°, −27°). The statistical significance of a nonzero amplitude thus increases from 2.8σ to 3.3σ when increasing l_max from 40 to 64, and all results are consistent to within 1σ. Similarly, the Bayesian log-evidence difference with respect to the isotropic model increases from Δln E = 1.8 to Δln E = 2.6, ranking as 'strong evidence' on the Jeffreys' scale. The raw best-fit log-likelihood difference increases from Δln L = 6.1 to Δln L = 7.3. Similar, and often slightly stronger, results are found for other data combinations. Thus, we find that the evidence for a dipole power distribution in the WMAP data increases with l in the five-year WMAP data set, in agreement with the reports of Hansen et al.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Najafi, M; El Kaffas, A; Han, B
Purpose: The Clarity Autoscan ultrasound monitoring system allows acquisition of raw radiofrequency (RF) ultrasound data prior to and during radiotherapy. This enables the computation of 3D Quantitative Ultrasound (QUS) tissue parametric maps from these data. We aim to evaluate whether QUS parameters undergo changes with radiotherapy and could thus potentially be used as early predictors and/or markers of treatment response in prostate cancer patients. Methods: In-vivo evaluation was performed under an IRB protocol to allow data collection in prostate patients treated with VMAT, whereby the prostate was imaged through the acoustic window of the perineum. QUS spectroscopy analysis was carried out by computing a tissue power spectrum normalized to the power spectrum obtained from a quartz reference to remove system transfer function effects. An ROI was selected within the 3D image volume of the prostate. Because longitudinal registration was optimal, the same features could be used to select ROIs at roughly the same location in images acquired on different days. Parametric maps were generated within the rectangular ROIs with window sizes that were approximately 8 times the wavelength of the ultrasound. The mid-band fit (MBF), spectral slope (SS) and spectral intercept (SI) QUS parameters were computed for each window within the ROI and displayed as parametric maps. Quantitative parameters were obtained by averaging each of the spectral parameters over the whole ROI. Results: Data were acquired over 21 treatment fractions. Preliminary results show changes in the parametric maps. MBF values decreased from −33.9 dB to −38.7 dB from pre-treatment to the last day of treatment. The spectral slope increased from −1.1 a.u. to −0.5 a.u., and the spectral intercept decreased from −28.2 dB to −36.3 dB over the 21-fraction treatment regimen. Conclusion: QUS parametric maps change over the course of treatment, which warrants further investigation of their potential use for treatment planning and predicting treatment outcomes. Research was supported by Elekta.
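A minimal sketch of the spectral parameterization described above: the power spectrum of an RF window is normalized by the reference (quartz) spectrum, converted to dB, and a straight line is fit over the analysis bandwidth; the slope, intercept and value at the band center give SS, SI and MBF. Bandwidth and variable names are illustrative.

import numpy as np

def qus_parameters(rf_window, ref_window, fs, band_hz):
    # rf_window, ref_window: 1-D RF segments from tissue and the quartz reference;
    # fs: sampling rate in Hz; band_hz: (f_lo, f_hi) analysis bandwidth.
    freqs = np.fft.rfftfreq(len(rf_window), d=1.0 / fs)
    tissue = np.abs(np.fft.rfft(rf_window)) ** 2
    reference = np.abs(np.fft.rfft(ref_window)) ** 2
    norm_db = 10.0 * np.log10((tissue + 1e-30) / (reference + 1e-30))

    lo, hi = band_hz
    sel = (freqs >= lo) & (freqs <= hi)
    f_mhz = freqs[sel] / 1e6
    slope, intercept = np.polyfit(f_mhz, norm_db[sel], 1)   # SS (dB/MHz), SI (dB)
    midband_fit = slope * f_mhz.mean() + intercept          # MBF (dB) at band center
    return midband_fit, slope, intercept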
Experience with wear-resistant materials at the Homer City Coal Cleaning Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, W.R.
1984-10-01
The Homer City preparation plant is a multi-stream, dual-circuit facility with a capacity of 1200 ton/hour raw feed. It serves 3 units of a neighbouring power station. Experience with a number of wear and corrosion resistant materials is described. It is emphasised that the successes and failures reported may be site-specific.
Shiota, Kenji; Nakamura, Takafumi; Takaoka, Masaki; Aminuddin, Siti Fatimah; Oshita, Kazuyuki; Fujimori, Takashi
2017-11-01
Environmentally sound treatments are required to dispose of municipal solid waste incineration fly ash (MSWIFA) contaminated with radioactive cesium (Cs) from the Fukushima Daiichi nuclear power plant accident in Japan. This study focuses on the stabilization of Cs using an alkali-activated MSWIFA and pyrophyllite-based system. Three composite solid products were synthesized after mixtures of raw materials (dehydrated pyrophyllite, MSWIFA, 14 mol/L aqueous sodium hydroxide, and sodium silicate solution) were cured at 105 °C for 24 h. Three types of MSWIFAs were prepared as raw fly ash, raw fly ash with 0.1% CsCl, and raw fly ash with 40% CsCl to understand the stabilization mechanism of Cs. Cs stabilization in two solid products was successful, with less than 6.9% leaching observed in two types of tests, and was partly successful for the solid product with the highest concentration of Cs. X-ray diffraction showed that all of the solid products produced several crystalline phases, and that pollucite was formed in the highest-Cs-concentration product. X-ray absorption fine structure analysis and scanning electron microscopy with X-ray analysis suggested that most Cs species formed pollucite in the two solid products from MSWIFA with added CsCl. This system provides a technique for the direct stabilization of Cs in MSWIFA. Copyright © 2017 Elsevier Ltd. All rights reserved.
The Power of Many: Nanosatellites For Cost Effective Global Weather Data
NASA Astrophysics Data System (ADS)
Greenberg, A.; Platzer, P.
2015-12-01
While weather processing technology through modeling and simulations has continued to advance, the amount of raw data available for analysis has dwindled. Most raw weather data is collected from satellites that are past their intended decommission date, and the likelihood of a catastrophic failure and diminishing reliability increases with each passing day. A United States government report released this year recognized the potential risk that this creates, citing a few alternatives to our aging satellite technology to at least maintain the level of raw weather data we currently have available. This report also highlighted nanosatellites as one of the most promising solutions, due in no small part to their standard form factor, translating into increased launch capabilities and better resiliency with fewer points of failure, rapidly advancing technology and low capital expenditure. Taking advantage of rapid advancements in sensor technology, these nanosatellites are replaced every two years or less and de-orbit quickly. Each new generation carries an improved payload and offers more network-wide resiliency. A constellation of just ten GPS-RO enabled nanosatellites taking measurements from every point on Earth, coupled with a globally distributed network of ground stations, can provide five times more radio occultation data than the combined efforts of current weather satellites. By the end of this year, Spire Global, Inc. will launch the world's first network of commercial weather satellites using GPS-RO for raw data collection.
ERIC Educational Resources Information Center
Kolodny, Oren; Lotem, Arnon; Edelman, Shimon
2015-01-01
We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given…
Verification of the WFAS Lightning Efficiency Map
Paul Sopko; Don Latham; Isaac Grenfell
2007-01-01
A Lightning Ignition Efficiency map was added to the suite of daily maps offered by the Wildland Fire Assessment System (WFAS) in 1999. This map computes a lightning probability of ignition (POI) based on the estimated fuel type, fuel depth, and 100-hour fuel moisture interpolated from the Remote Automated Weather Station (RAWS) network. An attempt to verify the...
1986-07-28
computation the lack of availability of goods, with the aid of an econometric model of the 4-person wage earner household (FRG consumption...substantially reducing the specific consumption of energy, raw materials, and intermediate products per unit of national income. Cooperation conventions...of income, availability of goods, etc. Without a doubt, such differences impair the indicative value of consumer parities. However, the differences
Power Systems Life Cycle Analysis Tool (Power L-CAT).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andruski, Joel; Drennen, Thomas E.
2011-01-01
The Power Systems L-CAT is a high-level dynamic model that calculates levelized production costs and tracks environmental performance for a range of electricity generation technologies: natural gas combined cycle (using either imported (LNGCC) or domestic natural gas (NGCC)), integrated gasification combined cycle (IGCC), supercritical pulverized coal (SCPC), existing pulverized coal (EXPC), nuclear, and wind. All of the fossil fuel technologies also include an option for including carbon capture and sequestration technologies (CCS). The model allows for quick sensitivity analysis on key technical and financial assumptions, such as: capital, O&M, and fuel costs; interest rates; construction time; heat rates; taxes; depreciation; and capacity factors. The fossil fuel options are based on detailed life cycle analysis reports conducted by the National Energy Technology Laboratory (NETL). For each of these technologies, NETL's detailed LCAs include consideration of five stages associated with energy production: raw material acquisition (RMA), raw material transport (RMT), energy conversion facility (ECF), product transportation and distribution (PT&D), and end user electricity consumption. The goal of the NETL studies is to compare existing and future fossil fuel technology options using a cradle-to-grave analysis. The NETL reports consider constant dollar levelized cost of delivered electricity, total plant costs, greenhouse gas emissions, criteria air pollutants, mercury (Hg) and ammonia (NH3) emissions, water withdrawal and consumption, and land use (acreage).
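A minimal sketch of the levelized-cost arithmetic a tool like this performs for a single technology, using a capital recovery factor to spread the capital cost over the plant lifetime; every input below is a placeholder, not an NETL value.

def lcoe_per_mwh(overnight_cost_per_kw, fixed_om_per_kw_yr, var_om_per_mwh,
                 fuel_cost_per_mmbtu, heat_rate_btu_per_kwh,
                 capacity_factor, discount_rate, lifetime_yr):
    r, n = discount_rate, lifetime_yr
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)           # capital recovery factor
    mwh_per_kw_yr = 8760.0 * capacity_factor / 1000.0     # energy per kW of capacity
    capital = overnight_cost_per_kw * crf / mwh_per_kw_yr
    fixed_om = fixed_om_per_kw_yr / mwh_per_kw_yr
    fuel = fuel_cost_per_mmbtu * heat_rate_btu_per_kwh / 1e6 * 1000.0  # $/MWh
    return capital + fixed_om + var_om_per_mwh + fuel

# Hypothetical gas-plant-like inputs -> roughly $44/MWh.
print(round(lcoe_per_mwh(1000, 15, 3.5, 4.0, 7000, 0.85, 0.07, 30), 1))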
On the Power Spectrum of Motor Unit Action Potential Trains Synchronized With Mechanical Vibration.
Romano, Maria; Fratini, Antonio; Gargiulo, Gaetano D; Cesarelli, Mario; Iuppariello, Luigi; Bifulco, Paolo
2018-03-01
This study provides a definitive analysis of the spectrum of a motor unit action potential train (MUAPT) elicited by mechanical vibratory stimulation via a detailed and concise mathematical formulation. Experimental studies demonstrated that MUAPs are not exactly synchronized with the vibratory stimulus but show a variable latency jitter, whose effects have not been investigated yet. The synchronized action potential train was represented as a quasi-periodic sequence of a given MU waveform. The latency jitter of action potentials was modeled as a Gaussian stochastic process, in accordance with previous experimental studies. A mathematical expression for the power spectrum of a synchronized MUAPT has been derived. The spectrum comprises a significant continuous component and discrete components at the vibratory frequency and its harmonics. Their relevance is correlated with the level of synchronization: the weaker the synchronization, the more relevant the continuous spectrum. Electromyography (EMG) rectification enhances the discrete components. The derived equations have general validity and describe well the power spectrum of actual EMG recordings during vibratory stimulation. Results are obtained by appropriately setting the level of synchronization and vibration frequency. This paper definitively clarifies the nature of the changes in the spectrum of raw EMG recordings from muscles undergoing vibratory stimulation. Results confirm the need for motion artifact filtering of raw EMG recordings during stimulation and strongly suggest avoiding EMG rectification, which significantly alters the spectrum characteristics.
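A minimal numerical illustration of the result described above: a train of identical MUAP waveforms locked to the vibration frequency but carrying Gaussian latency jitter is simulated, and its Welch spectrum shows discrete lines at the stimulus harmonics on top of a continuous component that grows as the jitter increases. The waveform shape and all parameters are illustrative.

import numpy as np
from scipy.signal import welch

fs, f_vib, duration = 2000.0, 30.0, 20.0       # Hz, Hz, s
sigma_jitter = 2e-3                             # s, std of the latency jitter

t = np.arange(int(fs * duration)) / fs
tau = np.arange(-0.005, 0.005, 1 / fs)
muap = -tau * np.exp(-(tau / 0.002) ** 2)       # one biphasic MUAP waveform

train = np.zeros_like(t)
for k in np.arange(0, duration, 1 / f_vib):
    center = k + np.random.normal(0.0, sigma_jitter)
    idx = int(round(center * fs))
    if 0 <= idx < len(train) - len(muap):
        train[idx:idx + len(muap)] += muap

freqs, psd = welch(train, fs=fs, nperseg=4096)
# psd shows peaks at 30 Hz and its harmonics plus a continuous floor whose
# relative weight increases with sigma_jitter.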
A satellite-based radar wind sensor
NASA Technical Reports Server (NTRS)
Xin, Weizhuang
1991-01-01
The objective is to investigate the application of Doppler radar systems for global wind measurement. A model of the satellite-based radar wind sounder (RAWS) is discussed, and many critical problems in the design process, such as the antenna scan pattern, tracking the Doppler shift caused by satellite motion, and backscattering of radar signals from different types of clouds, are discussed along with their computer simulations. In addition, algorithms for measuring the mean frequency of radar echoes, such as the Fast Fourier Transform (FFT) estimator, the covariance estimator, and the estimators based on autoregressive models, are discussed. Monte Carlo computer simulations were used to compare the performance of these algorithms. Anti-alias methods are discussed for the FFT and the autoregressive methods. Several algorithms for reducing radar ambiguity were studied, such as random phase coding methods and staggered pulse repetition frequency (PRF) methods. Computer simulations showed that these methods are not applicable to the RAWS because of the broad spectral widths of the radar echoes from clouds. A waveform modulation method using the concept of spread spectrum and correlation detection was developed to solve the radar ambiguity. Radar ambiguity functions were used to analyze the effective signal-to-noise ratios for the waveform modulation method. The results showed that, with a suitable bandwidth product and modulation of the waveform, this method can achieve the desired maximum range and maximum frequency of the radar system.
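A minimal sketch of the covariance (pulse-pair) mean-frequency estimator mentioned above, which avoids computing an explicit spectrum: the phase of the lag-one autocorrelation of the complex echo samples gives the mean Doppler shift. Variable names are illustrative.

import numpy as np

def pulse_pair_mean_frequency(z, prt):
    # z: complex echo samples from one range gate (one sample per pulse);
    # prt: pulse repetition time in seconds. The estimate is unambiguous
    # within +/- 1 / (2 * prt).
    r1 = np.mean(z[1:] * np.conj(z[:-1]))       # lag-one autocorrelation estimate
    return np.angle(r1) / (2.0 * np.pi * prt)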
Ingham, Steven C; Fanslau, Melody A; Burnham, Greg M; Ingham, Barbara H; Norback, John P; Schaffner, Donald W
2007-06-01
A computer-based tool (available at: www.wisc.edu/foodsafety/meatresearch) was developed for predicting pathogen growth in raw pork, beef, and poultry meat. The tool, THERM (temperature history evaluation for raw meats), predicts the growth of pathogens in pork and beef (Escherichia coli O157:H7, Salmonella serovars, and Staphylococcus aureus) and on poultry (Salmonella serovars and S. aureus) during short-term temperature abuse. The model was developed as follows: 25-g samples of raw ground pork, beef, and turkey were inoculated with a five-strain cocktail of the target pathogen(s) and held at isothermal temperatures from 10 to 43.3 degrees C. Log CFU per sample data were obtained for each pathogen and used to determine lag-phase duration (LPD) and growth rate (GR) by DMFit software. The LPD and GR were used to develop the THERM predictive tool, into which chronological time and temperature data for raw meat processing and storage are entered. The THERM tool then predicts a delta log CFU value for the desired pathogen-product combination. The accuracy of THERM was tested in 20 different inoculation experiments that involved multiple products (coarse-ground beef, skinless chicken breast meat, turkey scapula meat, and ground turkey) and temperature-abuse scenarios. With the time-temperature data from each experiment, THERM accurately predicted the pathogen growth and no growth (with growth defined as delta log CFU > 0.3) in 67, 85, and 95% of the experiments with E. coli O157:H7, Salmonella serovars, and S. aureus, respectively, and yielded fail-safe predictions in the remaining experiments. We conclude that THERM is a useful tool for qualitatively predicting pathogen behavior (growth and no growth) in raw meats. Potential applications include evaluating process deviations and critical limits under the HACCP (hazard analysis critical control point) system.
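A minimal sketch of how a chronological time-temperature history can be turned into a predicted delta log CFU with lag-and-growth logic of the kind the tool uses: the lag phase is consumed in proportion to the time spent at each temperature, and growth accrues afterwards at the temperature-dependent rate. The LPD and GR callables are placeholders, not the fitted THERM parameters, and a real tool would handle no-growth temperatures and other refinements.

def predict_delta_log_cfu(history, lpd_hours, growth_rate_log_per_hour):
    # history: list of (hours_at_temp, temp_C) steps in chronological order;
    # lpd_hours(T), growth_rate_log_per_hour(T): callables fit from isothermal data.
    lag_consumed = 0.0      # fraction of the lag phase used up so far
    delta_log = 0.0
    for hours, temp in history:
        lpd = lpd_hours(temp)
        if lag_consumed < 1.0 and lpd > 0:
            lag_time = min(hours, (1.0 - lag_consumed) * lpd)
            lag_consumed += lag_time / lpd
            hours -= lag_time
        if hours > 0:
            delta_log += growth_rate_log_per_hour(temp) * hours
    return delta_log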
Computer Power: Part 1: Distribution of Power (and Communications).
ERIC Educational Resources Information Center
Price, Bennett J.
1988-01-01
Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
47 CFR 15.102 - CPU boards and power supplies used in personal computers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...
Data Recording Room in the 10-by 10-Foot Supersonic Wind Tunnel
1973-04-21
The test data recording equipment located in the office building of the 10-by 10-Foot Supersonic Wind Tunnel at the NASA Lewis Research Center. The data system was the state of the art when the facility began operating in 1955 and was upgraded over time. NASA engineers used solenoid valves to measure pressures from different locations within the test section. Up to 48 measurements could be fed into a single transducer. The 10-by 10 data recorders could handle up to 200 data channels at once. The Central Automatic Digital Data Encoder (CADDE) converted the direct-current raw data from the test section into digital format on magnetic tape. The digital information was sent to the Lewis Central Computer Facility for additional processing. It could also be displayed in the control room via strip charts or oscillographs. The 16- by 56-foot ERA 1103 UNIVAC mainframe computer processed most of the digital data. The paper tape with the raw data was fed into the ERA 1103, which performed the needed calculations. The information was then sent back to the control room. There was a lag of several minutes before the computed information was available, but it was dramatically faster than the hand calculations performed by the female computers. The 10- by 10-foot tunnel, which had its official opening in May 1956, was built under the Congressional Unitary Plan Act, which coordinated wind tunnel construction at the NACA, Air Force, industry, and universities. The 10- by 10 was the largest of the three NACA tunnels built under the act.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, H.P.
1980-03-01
Performance tests using an 11 kW single cylinder diesel engine were made to determine the effects of three different micronized coal-fuel oil slurries being considered as alternative fuels. Slurries containing 20, 32, and 40%-wt micronized raw coal in No. 2 fuel oil were used. Results are presented indicating the changes in the concentrations of SOx and NOx in the exhaust, exhaust opacity, power and efficiency, and in wear rates relative to operation on fuel oil No. 2. The engine was operated for 10 h at full load and 1400 rpm on all fuels except the 40%-wt slurry. This test was discontinued because of extremely poor performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, H.P.
1980-03-01
Performance tests using an 11 kW single cylinder diesel engine were made to determine the effects of three different micronized coal-fuel oil slurries being considered as alternative fuels. Slurries containing 20, 32, and 40 percent by weight micronized raw coal in No. 2 fuel oil were used. Results are presented indicating the changes in the concentrations of SOx and NOx in the exhaust, exhaust opacity, power and efficiency, and in wear rates relative to operation on fuel oil No. 2. The engine was operated for 10 hrs at full load and 1400 rpm on all fuels except the 40% by weight slurry. This test was discontinued because of extremely poor performance.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
..., ``Configuration Management Plans for Digital Computer Software used in Safety Systems of Nuclear Power Plants... Digital Computer Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory..., Reviews, and Audits for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This...
Smart photodetector arrays for error control in page-oriented optical memory
NASA Astrophysics Data System (ADS)
Schaffer, Maureen Elizabeth
1998-12-01
Page-oriented optical memories (POMs) have been proposed to meet high speed, high capacity storage requirements for input/output intensive computer applications. This technology offers the capability for storage and retrieval of optical data in two-dimensional pages resulting in high throughput data rates. Since currently measured raw bit error rates for these systems fall several orders of magnitude short of industry requirements for binary data storage, powerful error control codes must be adopted. These codes must be designed to take advantage of the two-dimensional memory output. In addition, POMs require an optoelectronic interface to transfer the optical data pages to one or more electronic host systems. Conventional charge coupled device (CCD) arrays can receive optical data in parallel, but the relatively slow serial electronic output of these devices creates a system bottleneck thereby eliminating the POM advantage of high transfer rates. Also, CCD arrays are "unintelligent" interfaces in that they offer little data processing capabilities. The optical data page can be received by two-dimensional arrays of "smart" photo-detector elements that replace conventional CCD arrays. These smart photodetector arrays (SPAs) can perform fast parallel data decoding and error control, thereby providing an efficient optoelectronic interface between the memory and the electronic computer. This approach optimizes the computer memory system by combining the massive parallelism and high speed of optics with the diverse functionality, low cost, and local interconnection efficiency of electronics. In this dissertation we examine the design of smart photodetector arrays for use as the optoelectronic interface for page-oriented optical memory. We review options and technologies for SPA fabrication, develop SPA requirements, and determine SPA scalability constraints with respect to pixel complexity, electrical power dissipation, and optical power limits. Next, we examine data modulation and error correction coding for the purpose of error control in the POM system. These techniques are adapted, where possible, for 2D data and evaluated as to their suitability for a SPA implementation in terms of BER, code rate, decoder time and pixel complexity. Our analysis shows that differential data modulation combined with relatively simple block codes known as array codes provide a powerful means to achieve the desired data transfer rates while reducing error rates to industry requirements. Finally, we demonstrate the first smart photodetector array designed to perform parallel error correction on an entire page of data and satisfy the sustained data rates of page-oriented optical memories. Our implementation integrates a monolithic PN photodiode array and differential input receiver for optoelectronic signal conversion with a cluster error correction code using 0.35-μm CMOS. This approach provides high sensitivity, low electrical power dissipation, and fast parallel correction of 2 x 2-bit cluster errors in an 8 x 8 bit code block to achieve corrected output data rates scalable to 102 Gbps in the current technology increasing to 1.88 Tbps in 0.1-μm CMOS.
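As a point of orientation only, the following sketch illustrates the 2D-parity idea underlying array codes, correcting a single flipped bit in an 8 x 8 block; the dissertation's actual code corrects 2 x 2-bit cluster errors and is more elaborate, so every name and detail here is an assumption for illustration.

```python
import numpy as np

def encode_2d_parity(block):
    """Append a parity column and a parity row to an 8x8 binary block."""
    block = np.asarray(block, dtype=int) % 2
    with_col = np.hstack([block, block.sum(axis=1, keepdims=True) % 2])
    return np.vstack([with_col, with_col.sum(axis=0, keepdims=True) % 2])

def correct_single_error(codeword):
    """Locate and flip a single erroneous bit using the failing row/column parities."""
    cw = codeword.copy()
    row_syndrome = cw.sum(axis=1) % 2   # 1 where a row parity fails
    col_syndrome = cw.sum(axis=0) % 2   # 1 where a column parity fails
    rows, cols = np.flatnonzero(row_syndrome), np.flatnonzero(col_syndrome)
    if len(rows) == 1 and len(cols) == 1:
        cw[rows[0], cols[0]] ^= 1       # flip the single offending bit
    return cw

data = np.random.randint(0, 2, (8, 8))
cw = encode_2d_parity(data)
cw_err = cw.copy(); cw_err[3, 5] ^= 1   # inject one bit error
assert np.array_equal(correct_single_error(cw_err), cw)
```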
Performance of MCNP4A on seven computing platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendricks, J.S.; Brockhoff, R.C.
1994-12-31
The performance of seven computer platforms has been evaluated with the MCNP4A Monte Carlo radiation transport code. For the first time we report timing results using MCNP4A and its new test set and libraries. Comparisons are made on platforms not available to us in previous MCNP timing studies. By using MCNP4A and its 325-problem test set, a widely-used and readily-available physics production code is used; the timing comparison is not limited to a single "typical" problem, demonstrating the problem dependence of timing results; the results are reproducible at the more than 100 installations around the world using MCNP; comparison of performance of other computer platforms to the ones tested in this study is possible because we present raw data rather than normalized results; and a measure of the increase in performance of computer hardware and software over the past two years is possible. The computer platforms reported are the Cray-YMP 8/64, IBM RS/6000-560, Sun Sparc10, Sun Sparc2, HP/9000-735, 4 processor 100 MHz Silicon Graphics ONYX, and Gateway 2000 model 4DX2-66V PC. In 1991 a timing study of MCNP4, the predecessor to MCNP4A, was conducted using ENDF/B-V cross-section libraries, which are export protected. The new study is based upon the new MCNP 25-problem test set which utilizes internationally available data. MCNP4A, its test problems and the test data library are available from the Radiation Shielding and Information Center in Oak Ridge, Tennessee, or from the NEA Data Bank in Saclay, France. Anyone with the same workstation and compiler can get the same test problem sets, the same library files, and the same MCNP4A code from RSIC or NEA and replicate our results. And, because we report raw data, comparison of the performance of other compute platforms and compilers can be made.
High-throughput neuroimaging-genetics computational infrastructure
Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D.; Franco, Joseph; Toga, Arthur W.
2014-01-01
Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the necessary software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics and associations, which are not readily visible by human exploration of the raw dataset. Result interpretation includes scientific visualization, community validation of findings, and reproducibility of findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web-services. These pipeline workflows are represented as portable XML objects which transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer's and Parkinson's data, we provide several examples of translational applications using this infrastructure. PMID:24795619
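Purely as an illustration of the "portable XML workflow object" idea, the sketch below builds a tiny two-step workflow description in Python; the element and attribute names, tool paths, and file names are hypothetical and do not reproduce the actual LONI Pipeline schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical element and attribute names for illustration only;
# the real Pipeline XML schema differs.
workflow = ET.Element("pipeline", name="skull-strip-and-segment")
step1 = ET.SubElement(workflow, "module", name="SkullStrip",
                      executable="/usr/local/bin/bet")   # hypothetical tool path
ET.SubElement(step1, "input",  parameter="t1_image", value="subject001_T1.nii.gz")
ET.SubElement(step1, "output", parameter="brain",    value="subject001_brain.nii.gz")
step2 = ET.SubElement(workflow, "module", name="TissueSegmentation",
                      executable="/usr/local/bin/fast")  # hypothetical tool path
ET.SubElement(step2, "input", parameter="brain", value="subject001_brain.nii.gz")

# The portable description that would be shipped from a client to a remote server.
print(ET.tostring(workflow, encoding="unicode"))
```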
Sampling ARG of multiple populations under complex configurations of subdivision and admixture.
Carrieri, Anna Paola; Utro, Filippo; Parida, Laxmi
2016-04-01
Simulating complex evolution scenarios of multiple populations is an important task for answering many basic questions relating to population genomics. Apart from the population samples, the underlying Ancestral Recombinations Graph (ARG) is an additional important means in hypothesis checking and reconstruction studies. Furthermore, complex simulations require a plethora of interdependent parameters, making even the scenario specification highly non-trivial. We present an algorithm, SimRA, that simulates a generic multiple-population evolution model with admixture. It is based on random graphs that dramatically improve on the time and space requirements of the classical single-population algorithm. Using the underlying random-graphs model, we also derive closed forms of the expected values of the ARG characteristics, i.e., height of the graph, number of recombinations, number of mutations, and population diversity, in terms of its defining parameters. This is crucial in aiding the user to specify meaningful parameters for complex scenario simulations, not through trial and error based on raw compute power but through intelligent parameter estimation. To the best of our knowledge this is the first time closed-form expressions have been computed for the ARG properties. We show that the expected values closely match the empirical values through simulations. Finally, we demonstrate that SimRA produces the ARG in compact forms without compromising any accuracy. We demonstrate the compactness and accuracy through extensive experiments. SimRA (Simulation based on Random graph Algorithms) source, executable, user manual and sample input-output sets are available for downloading at: https://github.com/ComputationalGenomics/SimRA. Contact: parida@us.ibm.com. Supplementary data are available at Bioinformatics online.
A programmable computational image sensor for high-speed vision
NASA Astrophysics Data System (ADS)
Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian
2013-08-01
In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array, and a RISC core. The pixel-parallel PE is responsible for transferring, storing and processing raw image data in a SIMD fashion with its own programming language. The RPs form a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a large amount of computation in a few instruction cycles and therefore satisfy low- and middle-level high-speed image processing requirements. The RISC core controls the whole system operation and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect our major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.
1975-08-11
Desulfurization of flue gases from electric power plants. Arthur J. Coyle, Walter E. Chapin, John B. Day, John T. Herridge, Victor Levin, James... [Garbled front-matter contents listing; recoverable topics include high-temperature gas-turbine engines for automotive applications, fuel cells, lasers for communications and materials processing, a performance relationship for a regenerative gas-turbine engine, relative raw materials cost, and a proposed ERDA/AAPS ceramic materials milestone chart.]
Analysis of Covariance: Is It the Appropriate Model to Study Change?
ERIC Educational Resources Information Center
Marston, Paul T., Borich, Gary D.
The four main approaches to measuring treatment effects in schools (raw gain, residual gain, covariance, and true scores) were compared. A simulation study showed that true score analysis produced a large number of Type-I errors. When corrected for this error, this method showed the least power of the four. This outcome was clearly the result of the…
40 CFR 91.419 - Raw emission sampling calculations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... a test [g/kW-hr]. Wi = Average mass flow rate (WHC, WCO, WNOx) of an emission from the test engine during mode i, [g/hr]. fi = Weighting factors for each mode according to § 91.410(a) Pi = Average power... brake-specific fuel consumption in grams of fuel per kilowatt-hour (g/kW-hr). Fi = Fuel mass flow rate...
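For orientation, the sketch below computes a weighted brake-specific emission from modal data, assuming the standard weighted-modal form (weighted emission mass flow divided by weighted power); the excerpt above only lists symbol definitions, so the exact equations should be taken from 40 CFR 91.419 itself, and the numbers used here are hypothetical.

```python
def brake_specific_emission(W, P, f):
    """W: emission mass flow per mode [g/hr]; P: power per mode [kW]; f: mode weighting factors.
    Returns the weighted emission in g/kW-hr, assuming the standard weighted-modal form."""
    weighted_mass = sum(w * fi for w, fi in zip(W, f))    # g/hr, weighted over modes
    weighted_power = sum(p * fi for p, fi in zip(P, f))   # kW, weighted over modes
    return weighted_mass / weighted_power                 # g/kW-hr

# Example with made-up modal data (hypothetical numbers, three test modes):
print(brake_specific_emission(W=[120.0, 80.0, 30.0], P=[11.0, 7.5, 2.0], f=[0.5, 0.3, 0.2]))
```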
40 CFR 91.419 - Raw emission sampling calculations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... a test [g/kW-hr]. Wi = Average mass flow rate (WHC, WCO, WNOx) of an emission from the test engine during mode i, [g/hr]. fi = Weighting factors for each mode according to § 91.410(a) Pi = Average power... brake-specific fuel consumption in grams of fuel per kilowatt-hour (g/kW-hr). Fi = Fuel mass flow rate...
40 CFR 91.419 - Raw emission sampling calculations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... a test [g/kW-hr]. Wi = Average mass flow rate (WHC, WCO, WNOx) of an emission from the test engine during mode i, [g/hr]. fi = Weighting factors for each mode according to § 91.410(a) Pi = Average power... brake-specific fuel consumption in grams of fuel per kilowatt-hour (g/kW-hr). Fi = Fuel mass flow rate...
40 CFR 91.419 - Raw emission sampling calculations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... a test [g/kW-hr]. Wi = Average mass flow rate (WHC, WCO, WNOx) of an emission from the test engine during mode i, [g/hr]. fi = Weighting factors for each mode according to § 91.410(a) Pi = Average power... brake-specific fuel consumption in grams of fuel per kilowatt-hour (g/kW-hr). Fi = Fuel mass flow rate...
40 CFR 91.419 - Raw emission sampling calculations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... a test [g/kW-hr]. Wi = Average mass flow rate (WHC, WCO, WNOx) of an emission from the test engine during mode i, [g/hr]. fi = Weighting factors for each mode according to § 91.410(a) Pi = Average power... brake-specific fuel consumption in grams of fuel per kilowatt-hour (g/kW-hr). Fi = Fuel mass flow rate...
Laser treatment of white China surface
NASA Astrophysics Data System (ADS)
Osvay, K.; Képíró, I.; Berkesi, O.
2006-04-01
The surface of gloss-fired porcelain with and without a raw glaze coating was irradiated with a CO2 laser working at 10.6 μm, a choice that resulted from spectroscopic studies of suspensions made of China. The shine of the untreated sample was defined as the distribution of micro-droplets on the surface. The surface alterations due to laser heating were classified by the diameter of the completely melted surface, the ring of the surface at the threshold of melting, and the size of microscopic cracks. The diameter of the laser-treated area was in the range of 3 mm, while the incident laser power and the duration of laser heating were varied between 1 and 10 W and 1-8 min, respectively. The different stages of surface modification were attributed primarily to the irradiating laser power and proved to be rather insensitive to the duration of the treatment. We have found a range of parameters under which the white China surface coated with raw glaze and then melted by the laser exhibited characteristics very similar to the untreated porcelain. This technique seems promising for laser-assisted repair of small surface defects on unique China samples after the firing process.
Mechanism of enhanced performance on a hybrid direct carbon fuel cell using sawdust biofuels
NASA Astrophysics Data System (ADS)
Li, Shuangbin; Jiang, Cairong; Liu, Juan; Tao, Haoliang; Meng, Xie; Connor, Paul; Hui, Jianing; Wang, Shaorong; Ma, Jianjun; Irvine, John T. S.
2018-04-01
Biomass is expected to play a significant role in power generation in the near future. With the rise of carbon fuel cells, hybrid direct carbon fuel cells (HDCFCs) show intrinsic advantages for the generation of clean energy with higher efficiency. In this study, two types of biomass prepared from raw sawdust by physical sieving and by pyrolysis are investigated on an anode-supported HDCFC. The structural and thermal analysis indicates that raw sawdust has a well-formed cellulose I phase with very low ash. Electrochemical performance for sieved and pyrolyzed sawdust combined with various weight ratios of carbonate is compared under N2 and CO2 purge gas. The results show that the power output of sieved sawdust, 789 mW cm-2, is superior to that of pyrolyzed sawdust under flowing CO2, as well as under flowing N2. The anode reaction mechanism behind the discrepancy between the two fuels is explained, and emphasis is also placed on the modified oxygen-reduction cycle mechanism by which catalytic Li2CO3 and K2CO3 salts promote cell performance.
Afshari, Kasra; Samavati, Vahid; Shahidi, Seyed-Ahmad
2015-03-01
The effects of ultrasonic power, extraction time, extraction temperature, and the water-to-raw-material ratio on the extraction yield of crude polysaccharide from the leaf of Hibiscus rosa-sinensis (HRLP) were optimized using response surface methodology (RSM) with a Box-Behnken design (BBD). The experimental data obtained were fitted to a second-order polynomial equation using multiple regression analysis and were also analyzed by appropriate statistical methods (ANOVA). Analysis of the results showed that the linear and quadratic terms of these four variables had significant effects. The optimal conditions for the highest extraction yield of HRLP were: ultrasonic power, 93.59 W; extraction time, 25.71 min; extraction temperature, 93.18°C; and water-to-raw-material ratio, 24.3 mL/g. Under these conditions, the experimental yield was 9.66±0.18%, which is in close agreement with the value of 9.526% predicted by the model. The results demonstrated that HRLP had strong scavenging activities in vitro on DPPH and hydroxyl radicals.
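As a rough sketch of the response-surface idea (not the authors' analysis, which used a four-variable Box-Behnken design), the following fits a second-order polynomial to hypothetical two-variable data and locates the fitted optimum on a grid; all numbers are invented.

```python
import numpy as np

# Hypothetical data: yield measured at combinations of ultrasonic power (W) and time (min).
X1 = np.array([60, 60, 90, 90, 75, 75, 75])        # power
X2 = np.array([15, 30, 15, 30, 22.5, 22.5, 22.5])  # time
y = np.array([7.1, 7.9, 8.8, 8.5, 9.4, 9.5, 9.3])

# Design matrix for a second-order (quadratic) polynomial with an interaction term.
A = np.column_stack([np.ones_like(X1), X1, X2, X1**2, X2**2, X1 * X2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict over a grid and report the conditions giving the highest fitted yield.
p, t = np.meshgrid(np.linspace(60, 100, 81), np.linspace(10, 35, 51))
yhat = coef[0] + coef[1]*p + coef[2]*t + coef[3]*p**2 + coef[4]*t**2 + coef[5]*p*t
i = np.unravel_index(np.argmax(yhat), yhat.shape)
print(f"fitted optimum near power={p[i]:.1f} W, time={t[i]:.1f} min, yield={yhat[i]:.2f} %")
```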
STUB - a manufacturing system for producing rough dimension cuttings from low-grade lumber
Edwin L. Lucas; Charles J. Gatchell
1976-01-01
A rough mill manufacturing system for producing high-value furniture parts from low-value raw material is described. Called STUB (Short Temporarily Upgraded Boards), the system is designed to convert low-grade hardwood lumber into rough dimension parts. Computer simulation trials showed that more than one-third of the volume of parts produced from No. 2 Common oak...
Code of Federal Regulations, 2012 CFR
2012-04-01
..., create a material distortion of income. If the Commissioner determines that the taxpayer's grouping is...-programmable, interactive cathode ray tube computer terminals that vary in price. These terminals all interact... section 954(d)(1)(A)) of a bulk pharmaceutical in Puerto Rico from raw materials. S sold the bulk...
Code of Federal Regulations, 2013 CFR
2013-04-01
..., create a material distortion of income. If the Commissioner determines that the taxpayer's grouping is...-programmable, interactive cathode ray tube computer terminals that vary in price. These terminals all interact... section 954(d)(1)(A)) of a bulk pharmaceutical in Puerto Rico from raw materials. S sold the bulk...
Code of Federal Regulations, 2014 CFR
2014-04-01
..., create a material distortion of income. If the Commissioner determines that the taxpayer's grouping is...-programmable, interactive cathode ray tube computer terminals that vary in price. These terminals all interact... section 954(d)(1)(A)) of a bulk pharmaceutical in Puerto Rico from raw materials. S sold the bulk...
Code of Federal Regulations, 2011 CFR
2011-04-01
..., create a material distortion of income. If the Commissioner determines that the taxpayer's grouping is...-programmable, interactive cathode ray tube computer terminals that vary in price. These terminals all interact... section 954(d)(1)(A)) of a bulk pharmaceutical in Puerto Rico from raw materials. S sold the bulk...
3-Year-Olds' Perseveration on the DCCS Explained: A Meta-Analysis
ERIC Educational Resources Information Center
Landry, Oriane; Al-Taie, Shems; Franklin, Ari
2017-01-01
The Dimensional Change Card Sort (DCCS) task is a widely used measure of preschoolers' executive function. We combined data for 3,290 3-year-olds from 37 unique studies reporting 130 experimental conditions. Using raw pass/fail counts, we computed the pass rates and chi-squared value for each against chance (50/50) performance. We grouped data…
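A minimal sketch of the kind of test described, comparing one condition's raw pass/fail counts against chance (50/50) with a chi-squared statistic; the counts below are hypothetical.

```python
from scipy.stats import chisquare

# e.g. in one hypothetical condition, 40 of 100 three-year-olds pass the post-switch phase
passed, total = 40, 100
stat, p = chisquare([passed, total - passed], f_exp=[total / 2, total / 2])
print(f"pass rate = {passed/total:.2f}, chi-squared = {stat:.2f}, p = {p:.3f}")
```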
AVE/VAS 4: 25-mb sounding data
NASA Technical Reports Server (NTRS)
Sienkiewicz, M. E.
1983-01-01
The rawinsonde sounding program is described, and tabulated data at 25 mb intervals for the 24 stations and 14 special stations participating in the experiment are presented. Soundings were taken at 3 hr intervals. An additional sounding was taken at the normal synoptic observation time. Some soundings were computed from raw ordinate data, while others were interpolated from significant level data.
Representation of Serendipitous Scientific Data
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
A computer program defines and implements an innovative kind of data structure that can be used for representing information derived from serendipitous discoveries made via collection of scientific data on long exploratory spacecraft missions. Data structures capable of collecting any kind of data can easily be implemented in advance, but the task of designing a fixed and efficient data structure suitable for processing raw data into useful information and taking advantage of serendipitous scientific discovery is becoming increasingly difficult as missions go deeper into space. The present software eases the task by enabling definition of arbitrarily complex data structures that can adapt at run time as raw data are transformed into other types of information. This software runs on a variety of computers, and can be distributed in either source code or binary code form. It must be run in conjunction with any one of a number of Lisp compilers that are available commercially or as shareware. It has no specific memory requirements and depends upon the other software with which it is used. This program is implemented as a library that is called by, and becomes folded into, the other software with which it is used.
Doppler lidar signal and turbulence study
NASA Technical Reports Server (NTRS)
Frost, W.; Huang, K. H.; Fitzjarrald, D. F.
1983-01-01
Comparison of the second moments of the Doppler lidar signal with aircraft and tower measured parameters is being carried out. Lidar binary data tapes were successfully converted to ASCII code on the VAX 11/780. These data were used to develop the computer programs for analyzing data from the Marshall Space Flight Center field test. Raw lidar amplitude along the first 50 forward and backward beams of Run No. 2 was plotted. Plotting techniques for the same beams, except with the amplitude thresholded and range corrected, were developed. Plotting routines for the corresponding lidar width of the first 50 forward and backward beams were also established. The relationship between raw lidar amplitude and lidar width was examined. The lidar width is roughly constant for lidar amplitudes less than 120 dB. A field test with the NASA/MSFC ground based Doppler lidar, the instrumented NASA B-57B gust gradient aircraft, and the NASA/MSFC eight tower array was carried out. The data tape for the lidar was received and read. The aircraft data and tower data are being digitized and converted to engineering units. Velocities computed sequentially along each of the lidar beams beginning at 16:40:00, May 12, 1983 were plotted for Run No. 1.
Halftoning processing on a JPEG-compressed image
NASA Astrophysics Data System (ADS)
Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent
2003-12-01
Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; then, the result of the processing is finally re-compressed for further transfer or storage. The change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide format printing industry, this problem becomes an important issue: e.g. a 1 m2 input color image, scanned at 600 dpi, exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning-by-screening operation to JPEG-compressed images. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low-quality image, is also described; it allows the image to be de-noised and its contours enhanced.
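As a simplified illustration of compressed-domain processing (not the paper's screening algorithm), the sketch below applies a frequency-dependent gain to the 2-D DCT coefficients of an 8 x 8 block, in the spirit of the pre-sharpening operation mentioned above; the gain profile and block contents are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)  # one 8x8 image block (stand-in for JPEG data)

coeffs = dctn(block, norm="ortho")                   # forward 2-D DCT, as used by JPEG

# Boost higher-frequency coefficients to sharpen, leaving the DC term untouched.
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
gain = 1.0 + 0.05 * (u + v)                          # hypothetical frequency-dependent gain
gain[0, 0] = 1.0

sharpened = idctn(coeffs * gain, norm="ortho")       # back to pixels only to inspect the result
print(np.round(sharpened[:2, :2], 1))
```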
NASA Astrophysics Data System (ADS)
Varandas, António J. C.
2018-04-01
Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.
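For concreteness, one commonly used two-point extrapolation form assumes E_X = E_CBS + A/X^3 and solves for E_CBS from two raw correlation energies; this is a generic sketch, not necessarily any of the specific schemes reviewed here, and the energies below are hypothetical.

```python
def cbs_two_point(e_small, x_small, e_large, x_large, power=3):
    """Two-point extrapolation assuming E_X = E_CBS + A / X**power for cardinal number X."""
    a = (e_small - e_large) / (x_small**-power - x_large**-power)
    return e_large - a * x_large**-power

# Hypothetical CCSD(T) correlation energies (hartree) with cc-pVTZ (X=3) and cc-pVQZ (X=4):
print(cbs_two_point(-0.3650, 3, -0.3751, 4))
```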
Bounds on the power of proofs and advice in general physical theories.
Lee, Ciarán M; Hoban, Matty J
2016-06-01
Quantum theory presents us with the tools for computational and communication advantages over classical theory. One approach to uncovering the source of these advantages is to determine how computation and communication power vary as quantum theory is replaced by other operationally defined theories from a broad framework of such theories. Such investigations may reveal some of the key physical features required for powerful computation and communication. In this paper, we investigate how simple physical principles bound the power of two different computational paradigms which combine computation and communication in a non-trivial fashion: computation with advice and interactive proof systems. We show that the existence of non-trivial dynamics in a theory implies a bound on the power of computation with advice. Moreover, we provide an explicit example of a theory with no non-trivial dynamics in which the power of computation with advice is unbounded. Finally, we show that the power of simple interactive proof systems in theories where local measurements suffice for tomography is non-trivially bounded. This result provides a proof that [Formula: see text] is contained in [Formula: see text], which does not make use of any uniquely quantum structure-such as the fact that observables correspond to self-adjoint operators-and thus may be of independent interest.
Global Value Chain and Manufacturing Analysis on Geothermal Power Plant Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akar, Sertac; Augustine, Chad; Kurup, Parthiv
In this study, we have undertaken a robust analysis of the global supply chain and manufacturing costs for components of Organic Rankine Cycle (ORC) Turboexpander and steam turbines used in geothermal power plants. We collected a range of market data influencing manufacturing from various data sources and determined the main international manufacturers in the industry. The data includes the manufacturing cost model to identify requirements for equipment, facilities, raw materials, and labor. We analyzed three different cases; 1) 1 MW geothermal ORC Turboexpander 2) 5 MW ORC Turboexpander 3) 20 MW geothermal Steam Turbine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruwart, T M; Eldel, A
2000-01-01
The primary objectives of this project were to evaluate the performance of the SGI CXFS File System in a Storage Area Network (SAN) and compare/contrast it to the performance of a locally attached XFS file system on the same computer and storage subsystems. The University of Minnesota participants were asked to verify that the performance of the SAN/CXFS configuration did not fall below 85% of the performance of the XFS local configuration. There were two basic hardware test configurations constructed from the following equipment: Two Onyx 2 computer systems each with two Qlogic-based Fibre Channel/XIO Host Bus Adapter (HBA); Onemore » 8-Port Brocade Silkworm 2400 Fibre Channel Switch; and Four Ciprico RF7000 RAID Disk Arrays populated Seagate Barracuda 50GB disk drives. The Operating System on each of the ONYX 2 computer systems was IRIX 6.5.6. The first hardware configuration consisted of directly connecting the Ciprico arrays to the Qlogic controllers without the Brocade switch. The purpose for this configuration was to establish baseline performance data on the Qlogic controllers / Ciprico disk raw subsystem. This baseline performance data would then be used to demonstrate any performance differences arising from the addition of the Brocade Fibre Channel Switch. Furthermore, the performance of the Qlogic controllers could be compared to that of the older, Adaptec-based XIO dual-channel Fibre Channel adapters previously used on these systems. It should be noted that only raw device tests were performed on this configuration. No file system testing was performed on this configuration. The second hardware configuration introduced the Brocade Fibre Channel Switch. Two FC ports from each of the ONYX2 computer systems were attached to four ports of the switch and the four Ciprico arrays were attached to the remaining four. Raw disk subsystem tests were performed on the SAN configuration in order to demonstrate the performance differences between the direct-connect and the switched configurations. After this testing was completed, the Ciprico arrays were formatted with an XFS file system and performance numbers were gathered to establish a File System Performance Baseline. Finally, the disks were formatted with CXFS and further tests were run to demonstrate the performance of the CXFS file system. A summary of the results of these tests is given.« less
NASA Technical Reports Server (NTRS)
Trejo, Leonard J.; Shensa, Mark J.; Remington, Roger W. (Technical Monitor)
1998-01-01
This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance.
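A minimal sketch of the modeling idea, assuming synthetic data: decompose each single-trial ERP with a discrete wavelet transform, keep only the highest-power coefficients, and regress a performance score on them. The wavelet choice, coefficient count, and all data here are assumptions, not the report's actual settings.

```python
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
erps = rng.standard_normal((60, 256))   # 60 hypothetical single-trial ERPs, 256 samples each
performance = rng.standard_normal(60)   # hypothetical composite detection score per trial

def dwt_features(signal, wavelet="db4", level=4, keep=16):
    """Concatenate DWT coefficients and zero out all but the largest-magnitude ones."""
    coeffs = np.concatenate(pywt.wavedec(signal, wavelet, level=level))
    order = np.argsort(np.abs(coeffs))[::-1]
    mask = np.zeros_like(coeffs)
    mask[order[:keep]] = 1.0
    return coeffs * mask

X = np.array([dwt_features(e) for e in erps])
model = LinearRegression().fit(X, performance)
print("training R^2:", model.score(X, performance))
```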
NASA Technical Reports Server (NTRS)
Trejo, L. J.; Shensa, M. J.
1999-01-01
This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance. Copyright 1999 Academic Press.
Martínez-Preciado, A H; Estrada-Girón, Y; González-Álvarez, A; Fernández, V V A; Macías, E R; Soltero, J F A
2014-09-01
Proximate, thermal, morphological and rheological properties of canned "negro Querétaro" bean pastes, as a function of fat content (0, 2 and 3%) and temperature (60, 70 and 85 °C), were evaluated. Raw and precooked bean pastes were characterized by scanning electron microscopy (SEM) and differential scanning calorimetry (DSC). Well-defined starch granules were observed in the raw bean pastes, whereas a gelatinized starch paste was observed for the canned bean pastes. The DSC analysis showed that the raw bean pastes had lower onset peak temperatures (79 °C, 79.1 °C) and gelatinization enthalpy (1.940 J/g), compared to the thermal characteristics of the precooked bean pastes (70.4 °C, 75.7 °C and 1.314 J/g, respectively). Moreover, the dynamic rheological results showed a gel-like behavior for the canned bean pastes, where the storage modulus (G') was frequency independent and was higher than the loss modulus (G″). The non-linear rheological results exhibited shear-thinning flow behavior, where the steady shear viscosity was temperature and fat content dependent. For canned bean pastes, the shear-viscosity data followed a power law equation, where the power law index (n) decreased when the temperature and the fat content increased. The temperature effect on the shear viscosity was described by an Arrhenius equation, where the activation energy (Ea) was in the range from 19.04 to 36.81 kJ/mol. This rheological behavior was caused by gelatinization of the starch during the cooking and sterilization processes, where starch-lipid and starch-protein complexes were formed.
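To make the two constitutive relations concrete, the sketch below evaluates a power-law apparent viscosity and an Arrhenius-type temperature dependence; the parameter values are hypothetical, chosen only to sit in the spirit of the reported ranges (n < 1, Ea of roughly 19-37 kJ/mol).

```python
import numpy as np

def power_law_viscosity(shear_rate, K, n):
    """Apparent viscosity of a shear-thinning paste: eta = K * shear_rate**(n - 1)."""
    return K * shear_rate**(n - 1)

def arrhenius_viscosity(T_kelvin, A, Ea, R=8.314):
    """Temperature dependence of viscosity: eta = A * exp(Ea / (R * T))."""
    return A * np.exp(Ea / (R * T_kelvin))

# Hypothetical parameters for illustration:
print(power_law_viscosity(shear_rate=50.0, K=12.0, n=0.35))       # Pa·s at 50 1/s
print(arrhenius_viscosity(T_kelvin=333.15, A=1e-4, Ea=25_000))    # Pa·s at 60 °C
```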
A Discrete Global Grid System Programming Language Using MapReduce
NASA Astrophysics Data System (ADS)
Peterson, P.; Shatz, I.
2016-12-01
A discrete global grid system (DGGS) is a powerful mechanism for storing and integrating geospatial information. As a "pixelization" of the Earth, many image processing techniques lend themselves to the transformation of data values referenced to the DGGS cells. It has been shown that image algebra, as an example, and advanced algebra, like the Fast Fourier Transform, can be used on the DGGS tiling structure for geoprocessing and spatial analysis. MapReduce has been shown to provide advantages for processing and generating large data sets within distributed and parallel computing. The DGGS structure is ideally suited for big distributed Earth data. We proposed that basic expressions could be created to form the atoms of a generalized DGGS language using the MapReduce programming model. We created three very efficient expressions: Selectors (aka filter) - a selection function that generates a set of cells, cell collections, or geometries; Calculators (aka map) - a computational function (including quantization of raw measurements and data sources) that generates values in a DGGS cell; and Aggregators (aka reduce) - a function that generates spatial statistics from cell values within a cell. We found that these three basic MapReduce operations, along with a fourth function, the Iterator, for horizontal and vertical traversal of any DGGS structure, provided simple building blocks resulting in very efficient operations and processes that could be used with any DGGS. We provide examples and a demonstration of their effectiveness using the ISEA3H DGGS in the PYXIS Studio.
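The three expressions map naturally onto ordinary filter/map/reduce calls; the sketch below runs them over a handful of hypothetical cell records (the cell identifiers, resolutions, values, and unit conversion are all invented, and a real DGGS would index cells spatially rather than in a list).

```python
from functools import reduce

# Hypothetical cell records: (cell_id, resolution, value).
cells = [("A1", 9, 12.0), ("A2", 9, 15.5), ("B7", 9, None), ("C3", 9, 8.2)]

# Selector (filter): choose cells carrying a value at resolution 9.
selected = filter(lambda c: c[1] == 9 and c[2] is not None, cells)

# Calculator (map): quantize each raw measurement into the cell, e.g. convert units.
calculated = map(lambda c: (c[0], c[2] * 0.3048), selected)  # hypothetical ft -> m conversion

# Aggregator (reduce): produce a spatial statistic (here, the mean) over the selected cells.
total, count = reduce(lambda acc, c: (acc[0] + c[1], acc[1] + 1), calculated, (0.0, 0))
print("mean value over selected cells:", total / count)
```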
Bi-Fi: an embedded sensor/system architecture for REMOTE biological monitoring.
Farshchi, Shahin; Pesterev, Aleksey; Nuyujukian, Paul H; Mody, Istvan; Judy, Jack W
2007-11-01
Wireless-enabled processor modules intended for communicating low-frequency phenomena (e.g., temperature, humidity, and ambient light) have been enabled to acquire and transmit multiple biological signals in real time, which has been achieved by using computationally efficient data acquisition, filtering, and compression algorithms, and by interfacing the modules with biological interface hardware. The sensor modules can acquire and transmit raw biological signals at a rate of 32 kb/s, which is near the hardware limit of the modules. Furthermore, onboard signal processing enables one channel, sampled at a rate of 4000 samples/s at 12-bit resolution, to be compressed via adaptive differential pulse-code modulation (ADPCM) and transmitted in real time. In addition, the sensors can be configured to filter and transmit individual time-referenced "spike" waveforms, or to transmit the spike height and width, to alleviate network traffic and increase battery life. The system is capable of acquiring eight channels of analog signals as well as data via an asynchronous serial connection. A back-end server archives the biological data received via networked gateway sensors, and hosts them to a client application that enables users to browse recorded data. The system also acquires, filters, and transmits oxygen saturation and pulse rate via a commercial off-the-shelf interface board. The system architecture can be configured for performing real-time nonobtrusive biological monitoring of humans or rodents. This paper demonstrates that low-power, computation- and bandwidth-constrained wireless-enabled platforms can indeed be leveraged for wireless biosignal monitoring.
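The arithmetic implied by the figures above, plus a toy fixed-step delta coder standing in for the adaptive ADPCM actually used (the step size and sample values are invented):

```python
# One channel at 4000 samples/s x 12 bits/sample = 48 kb/s of raw data, which already
# exceeds the ~32 kb/s radio budget; 4-bit ADPCM would bring the same channel to 16 kb/s.
samples_per_s, bits_per_sample = 4000, 12
print("raw rate:", samples_per_s * bits_per_sample / 1000, "kb/s")
print("4-bit ADPCM rate:", samples_per_s * 4 / 1000, "kb/s")

def delta_encode(samples, step=8):
    """Toy fixed-step delta coder (a crude stand-in for adaptive ADPCM)."""
    out, prediction = [], 0
    for s in samples:
        code = max(-8, min(7, round((s - prediction) / step)))  # 4-bit signed code
        prediction += code * step
        out.append(code)
    return out

print(delta_encode([0, 20, 55, 70, 66, 40]))
```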
High Power Particle Beams and Pulsed Power for Industrial Applications
NASA Astrophysics Data System (ADS)
Bluhm, Hansjoachim; An, Wladimir; Engelko, Wladimir; Giese, Harald; Frey, Wolfgang; Heinzel, Annette; Hoppé, Peter; Mueller, Georg; Schultheiss, Christoph; Singer, Josef; Strässner, Ralf; Strauß, Dirk; Weisenburger, Alfons; Zimmermann, Fritz
2002-12-01
Several industrial-scale projects with economic and ecological potential are presently emerging from research and development in the fields of high-power particle beams and pulsed power in Europe. Material surface modifications with large-area pulsed electron beams are used to protect high-temperature gas turbine blades and steel structures in Pb/Bi-cooled accelerator-driven nuclear reactor systems against oxidation and corrosion, respectively. Channel spark electron beams are applied to deposit bio-compatible or bio-active layers on medical implants. Cell membranes are perforated with strong pulsed electric fields to extract nutritive substances or raw materials from the cells and to kill bacteria for sterilization of liquids. Electrodynamic fragmentation devices are being developed to reutilize concrete aggregates for the production of high-quality secondary concrete. All activities have a large potential to contribute to a more sustainable economy.
Energy Efficiency Challenges of 5G Small Cell Networks.
Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang
2017-05-01
The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth generation (5G) cellular networks. While massive multiple-input multiple-output (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to whether computation or transmission power is more important to the energy efficiency of 5G small cell networks. Thus, the main objective of this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of a 5G small cell BS can approach 800 W when massive MIMO (e.g., 128 antennas) is deployed to transmit high-volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks.
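For reference, the Landauer principle invoked above sets a lower bound of kT ln 2 joules per irreversibly erased bit; the short calculation below evaluates it at room temperature and for a hypothetical 1 Gb/s of processed data, which makes clear that the hundreds of watts quoted reflect circuit-level overheads rather than the fundamental bound.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # kelvin

landauer_joules_per_bit = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {landauer_joules_per_bit:.2e} J per bit erased")

# Lower bound on power for a hypothetical 1 Gb/s of irreversibly processed data:
print(f"bound for 1e9 bit/s: {landauer_joules_per_bit * 1e9:.2e} W")
```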
Energy Efficiency Challenges of 5G Small Cell Networks
Ge, Xiaohu; Yang, Jing; Gharavi, Hamid; Sun, Yang
2017-01-01
The deployment of a large number of small cells poses new challenges to energy efficiency, which has often been ignored in fifth generation (5G) cellular networks. While massive multiple-input multiple outputs (MIMO) will reduce the transmission power at the expense of higher computational cost, the question remains as to which computation or transmission power is more important in the energy efficiency of 5G small cell networks. Thus, the main objective in this paper is to investigate the computation power based on the Landauer principle. Simulation results reveal that more than 50% of the energy is consumed by the computation power at 5G small cell base stations (BSs). Moreover, the computation power of 5G small cell BS can approach 800 watt when the massive MIMO (e.g., 128 antennas) is deployed to transmit high volume traffic. This clearly indicates that computation power optimization can play a major role in the energy efficiency of small cell networks. PMID:28757670
Rapid Analysis of Mass Distribution of Radiation Shielding
NASA Technical Reports Server (NTRS)
Zapp, Edward
2007-01-01
Radiation Shielding Evaluation Toolset (RADSET) is a computer program that rapidly calculates the spatial distribution of mass of an arbitrary structure for use in ray-tracing analysis of the radiation-shielding properties of the structure. RADSET was written to be used in conjunction with unmodified commercial computer-aided design (CAD) software that provides access to data on the structure and generates selected three-dimensional-appearing views of the structure. RADSET obtains raw geometric, material, and mass data on the structure from the CAD software. From these data, RADSET calculates the distribution(s) of the masses of specific materials about any user-specified point(s). The results of these mass-distribution calculations are imported back into the CAD computing environment, wherein the radiation-shielding calculations are performed.
Affective assessment of computer users based on processing the pupil diameter signal.
Ren, Peng; Barreto, Armando; Gao, Ying; Adjouadi, Malek
2011-01-01
Detecting affective changes of computer users is a current challenge in human-computer interaction which is being addressed with the help of biomedical engineering concepts. This article presents a new approach to recognize the affective state ("relaxation" vs. "stress") of a computer user from analysis of his/her pupil diameter variations caused by sympathetic activation. Wavelet denoising and Kalman filtering methods are first used to remove abrupt changes in the raw Pupil Diameter (PD) signal. Then three features are extracted from the preprocessed PD signal for the affective state classification. Finally, a random tree classifier is implemented, achieving an accuracy of 86.78%. In these experiments the Eye Blink Frequency (EBF), is also recorded and used for affective state classification, but the results show that the PD is a more promising physiological signal for affective assessment.
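A minimal end-to-end sketch in the spirit of the described pipeline, using synthetic data: wavelet soft-threshold denoising of the pupil-diameter trace, three simple summary features, and a random-forest classifier standing in for the paper's random tree classifier. The wavelet, threshold rule, features, and data are all assumptions.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

def denoise(pd_signal, wavelet="db4", level=3):
    """Soft-threshold wavelet denoising of a raw pupil-diameter trace."""
    coeffs = pywt.wavedec(pd_signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(pd_signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(pd_signal)]

def features(trace):
    """Three simple summary features of the preprocessed trace (illustrative only)."""
    return [trace.mean(), trace.std(), trace.max() - trace.min()]

# Hypothetical labelled segments: 0 = relaxation, 1 = stress (stress shifts the mean).
X = np.array([features(denoise(rng.standard_normal(512) + label)) for label in (0, 1) * 20])
y = np.array([0, 1] * 20)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```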
NASA Technical Reports Server (NTRS)
Dunne, Matthew J.
2011-01-01
The development of computer software as a tool to generate visual displays has led to an overall expansion of automated computer generated images in the aerospace industry. These visual overlays are generated by combining raw data with pre-existing data on the object or objects being analyzed on the screen. The National Aeronautics and Space Administration (NASA) uses this computer software to generate on-screen overlays when a Visiting Vehicle (VV) is berthing with the International Space Station (ISS). In order for Mission Control Center personnel to be a contributing factor in the VV berthing process, computer software similar to that on the ISS must be readily available on the ground to be used for analysis. In addition, this software must perform engineering calculations and save data for further analysis.
Wenzel, J; Fuentes, L; Cabezas, A; Etchebehere, C
2017-06-01
An important pollutant produced during the cheese-making process is cheese whey, a liquid by-product with a high content of organic matter, composed mainly of lactose and proteins. Hydrogen can be produced from cheese whey by dark fermentation but organic matter is not completely removed, producing an effluent rich in volatile fatty acids. Here we demonstrate that this effluent can be further used to produce energy in microbial fuel cells. Moreover, current production was not feasible when using raw cheese whey directly to feed the microbial fuel cell. A maximal power density of 439 mW/m2 was obtained from the reactor effluent, which was 1000 times more than when using raw cheese whey as substrate. 16S rRNA gene amplicon sequencing showed that potential electroactive populations (Geobacter, Pseudomonas and Thauera) were enriched on anodes of MFCs fed with reactor effluent, while fermentative populations (Clostridium and Lactobacillus) were predominant on the MFC anode fed directly with raw cheese whey. This result was further demonstrated using culture techniques. A total of 45 strains were isolated belonging to 10 different genera, including known electrogenic populations like Geobacter (in the MFC with reactor effluent) and known fermentative populations like Lactobacillus (in the MFC with cheese whey). Our results show that microbial fuel cells are an attractive technology to gain extra energy from cheese whey as a second-stage process following raw cheese whey treatment by dark fermentation.
Energy Use and Power Levels in New Monitors and Personal Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay
2002-07-23
Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in and opportunities to reduce power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption (UEC). Current ENERGY STAR monitor and computer criteria do not specify off or on power, but our results suggest opportunities for saving energy in these modes. Also, significant differences between CRT and LCD technology, and between field-measured and manufacturer-reported power levels, reveal the need for standard methods and metrics for measuring and comparing monitor power consumption.
Kawata, Masaaki; Sato, Chikara
2007-06-01
In determining the three-dimensional (3D) structure of macromolecular assemblies in single particle analysis, a large representative dataset of two-dimensional (2D) average images drawn from a huge number of raw images is key to achieving high resolution. Because alignments prior to averaging are computationally intensive, currently available multireference alignment (MRA) software does not survey every possible alignment. This leads to misaligned images, creating blurred averages and reducing the quality of the final 3D reconstruction. We present a new method, in which multireference alignment is harmonized with classification (multireference multiple alignment: MRMA). This method enables a statistical comparison of multiple alignment peaks, reflecting the similarities between each raw image and a set of reference images. Among the selected alignment candidates for each raw image, misaligned images are statistically excluded, based on the principle that aligned raw images of similar projections have a dense distribution around the correctly aligned coordinates in image space. This newly developed method was examined for accuracy and speed using model image sets with various signal-to-noise ratios, and with electron microscope images of the Transient Receptor Potential C3 channel and the sodium channel. In every data set, the newly developed method outperformed conventional methods in robustness against noise and in speed, creating 2D average images of higher quality. This statistically harmonized alignment-classification combination should greatly improve the quality of single particle analysis.
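As a toy illustration of gathering multiple alignment candidates (translation only, no rotations, and synthetic images), each raw image is cross-correlated against every reference and the peak value and shift are kept per reference; the statistical comparison and exclusion step described above would then operate on these candidates.

```python
import numpy as np
from scipy.signal import fftconvolve

def alignment_candidates(raw, references):
    """Cross-correlate a raw image against each reference and return, per reference,
    the peak correlation value and the shift at which it occurs (translation only)."""
    candidates = []
    for ref in references:
        cc = fftconvolve(raw, ref[::-1, ::-1], mode="same")  # cross-correlation via FFT
        peak = np.unravel_index(np.argmax(cc), cc.shape)
        candidates.append((cc[peak], peak))
    return candidates

rng = np.random.default_rng(3)
refs = [rng.standard_normal((32, 32)) for _ in range(3)]
raw = np.roll(refs[1], shift=(2, -3), axis=(0, 1)) + 0.5 * rng.standard_normal((32, 32))
print(alignment_candidates(raw, refs))
```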
Total recognition discriminability in Huntington's and Alzheimer's disease.
Graves, Lisa V; Holden, Heather M; Delano-Wood, Lisa; Bondi, Mark W; Woods, Steven Paul; Corey-Bloom, Jody; Salmon, David P; Delis, Dean C; Gilbert, Paul E
2017-03-01
Both the original and second editions of the California Verbal Learning Test (CVLT) provide an index of total recognition discriminability (TRD) but respectively utilize nonparametric and parametric formulas to compute the index. However, the degree to which population differences in TRD may vary across applications of these nonparametric and parametric formulas has not been explored. We evaluated individuals with Huntington's disease (HD), individuals with Alzheimer's disease (AD), healthy middle-aged adults, and healthy older adults who were administered the CVLT-II. Yes/no recognition memory indices were generated, including raw nonparametric TRD scores (as used in CVLT-I) and raw and standardized parametric TRD scores (as used in CVLT-II), as well as false positive (FP) rates. Overall, the patient groups had significantly lower TRD scores than their comparison groups. The application of nonparametric and parametric formulas resulted in comparable effect sizes for all group comparisons on raw TRD scores. Relative to the HD group, the AD group showed comparable standardized parametric TRD scores (despite lower raw nonparametric and parametric TRD scores), whereas the previous CVLT literature has shown that standardized TRD scores are lower in AD than in HD. Possible explanations for the similarity in standardized parametric TRD scores in the HD and AD groups in the present study are discussed, with an emphasis on the importance of evaluating TRD scores in the context of other indices such as FP rates in an effort to fully capture recognition memory function using the CVLT-II.
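For orientation only, one standard parametric discriminability index from signal detection theory is d' = z(hit rate) - z(false-alarm rate); the CVLT-II's parametric TRD is related to this family of measures, but the published formula should be consulted, and the correction rule and counts below are assumptions.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Parametric discriminability (d'): z(hit rate) - z(false-alarm rate),
    with a 1/(2N) correction so perfect rates remain finite."""
    n_targets = hits + misses
    n_lures = false_alarms + correct_rejections
    hit_rate = min(max(hits / n_targets, 1 / (2 * n_targets)), 1 - 1 / (2 * n_targets))
    fa_rate = min(max(false_alarms / n_lures, 1 / (2 * n_lures)), 1 - 1 / (2 * n_lures))
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical yes/no recognition counts:
print(d_prime(hits=14, misses=2, false_alarms=5, correct_rejections=27))
```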
Development and testing of tip devices for horizontal axis wind turbines
NASA Technical Reports Server (NTRS)
Gyatt, G. W.; Lissaman, P. B. S.
1985-01-01
A theoretical and field experimental program has been carried out to investigate the use of tip devices on horizontal axis wind turbine rotors. The objective was to improve performance by the reduction of tip losses. While power output can always be increased by a simple radial tip extension, such a modification also results in an increased gale load, both because of the extra projected area and the longer moment arm. Tip devices have the potential to increase power output without such a structural penalty. A vortex lattice computer model was used to optimize three basic tip configuration types for a 25 kW stall-limited commercial wind turbine. The types were a change in tip planform, and single-element and double-element nonplanar tip extensions (winglets). A complete data acquisition system was developed which recorded three wind speed components, ambient pressure, temperature, and turbine output. The system operated unattended and could perform real-time processing of the data, displaying the measured power curve as data accumulated in either a bin-sort mode or a polynomial curve fit. Approximately 270 hr of performance data were collected over a three-month period. The sampling interval was 2.4 sec; thus over 400,000 raw data points were logged. Results for each of the three new tip devices, compared with the original tip, showed a small decrease (of the order of 1 kW) in power output over the measured range of wind speeds from cut-in at about 4 m/s to over 20 m/s, well into the stall-limiting region. Changes in orientation and angle-of-attack of the winglets were not made. For aircraft wing tip devices, favorable tip shapes have been reported, and it is likely that the tip devices tested in this program did not improve rotor performance because they were not optimally adjusted.
A secure distributed logistic regression protocol for the detection of rare adverse drug events
El Emam, Khaled; Samet, Saeed; Arbuckle, Luk; Tamblyn, Robyn; Earle, Craig; Kantarcioglu, Murat
2013-01-01
Background There is limited capacity to assess the comparative risks of medications after they enter the market. For rare adverse events, the pooling of data from multiple sources is necessary to have the power and sufficient population heterogeneity to detect differences in safety and effectiveness in genetic, ethnic and clinically defined subpopulations. However, combining datasets from different data custodians or jurisdictions to perform an analysis on the pooled data creates significant privacy concerns that would need to be addressed. Existing protocols for addressing these concerns can result in reduced analysis accuracy and can allow sensitive information to leak. Objective To develop a secure distributed multi-party computation protocol for logistic regression that provides strong privacy guarantees. Methods We developed a secure distributed logistic regression protocol using a single analysis center with multiple sites providing data. A theoretical security analysis demonstrates that the protocol is robust to plausible collusion attacks and does not allow the parties to gain new information from the data that are exchanged among them. The computational performance and accuracy of the protocol were evaluated on simulated datasets. Results The computational performance scales linearly as the dataset sizes increase. The addition of sites results in an exponential growth in computation time. However, for up to five sites, the time is still short and would not affect practical applications. The model parameters are the same as the results on pooled raw data analyzed in SAS, demonstrating high model accuracy. Conclusion The proposed protocol and prototype system would allow the development of logistic regression models in a secure manner without requiring the sharing of personal health information. This can alleviate one of the key barriers to the establishment of large-scale post-marketing surveillance programs. We extended the secure protocol to account for correlations among patients within sites through generalized estimating equations, and to accommodate other link functions by extending it to generalized linear models. PMID:22871397
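To illustrate the horizontally partitioned setting (though not the cryptographic protections of the secure protocol itself), the sketch below has each site share only aggregate gradient and Hessian contributions with an analysis center, which runs Newton-Raphson updates and recovers pooled-equivalent logistic-regression coefficients; all data are simulated.

```python
import numpy as np

def site_contributions(X, y, beta):
    """Aggregate statistics a site shares per iteration: gradient and Hessian pieces,
    never the raw patient-level rows."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    return grad, hess

rng = np.random.default_rng(4)
true_beta = np.array([0.5, -1.0, 2.0])
sites = []
for _ in range(3):  # three data custodians, each holding its own records
    X = np.column_stack([np.ones(200), rng.standard_normal((200, 2))])
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))
    sites.append((X, y))

beta = np.zeros(3)
for _ in range(8):  # Newton-Raphson at the analysis center using summed site aggregates
    grads, hessians = zip(*(site_contributions(X, y, beta) for X, y in sites))
    beta = beta + np.linalg.solve(sum(hessians), sum(grads))
print("pooled-equivalent coefficients:", beta)
```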
Beam and Plasma Physics Research
1990-06-01
Research covered high-power microwave (HPM) computations and theory and high-energy plasma computations and theory; the HPM computations concentrated on... Recoverable report-index entries include Task Area 2: High-Power RF Emission and Charged-Particle Beam Physics Computation, Modeling and Theory (Subtask 02-01...); Subtask 02-05, Vulnerability of Space Assets; Subtask 02-06, Microwave Computer Program Enhancements; and Subtask 02-07, High-Power Microwave Transvertron Design.
3-D Electromagnetic field analysis of wireless power transfer system using K computer
NASA Astrophysics Data System (ADS)
Kawase, Yoshihiro; Yamaguchi, Tadashi; Murashita, Masaya; Tsukada, Shota; Ota, Tomohiro; Yamamoto, Takeshi
2018-05-01
We analyze the electromagnetic field of a wireless power transfer system using the 3-D parallel finite element method on the K computer, a supercomputer in Japan. It is shown that the electromagnetic field of the wireless power transfer system can be analyzed in a practical time using parallel computation on the K computer; moreover, the accuracy of the loss calculation improves as the mesh division of the shield becomes finer.
Goldindec: A Novel Algorithm for Raman Spectrum Baseline Correction
Liu, Juntao; Sun, Jianyang; Huang, Xiuzhen; Li, Guojun; Liu, Binqiang
2016-01-01
Raman spectra have been widely used in biology, physics, and chemistry and have become an essential tool for the studies of macromolecules. Nevertheless, the raw Raman signal is often obscured by a broad background curve (or baseline) due to the intrinsic fluorescence of the organic molecules, which leads to unpredictable negative effects in quantitative analysis of Raman spectra. Therefore, it is essential to correct this baseline before analyzing raw Raman spectra. Polynomial fitting has proven to be the most convenient and simplest method and has high accuracy. In polynomial fitting, the cost function used and its parameters are crucial. This article proposes a novel iterative algorithm named Goldindec, freely available for noncommercial use as noted in text, with a new cost function that not only conquers the influence of great peaks but also solves the problem of low correction accuracy when there is a high peak number. Goldindec automatically generates parameters from the raw data rather than by empirical choice, as in previous methods. Comparisons with other algorithms on the benchmark data show that Goldindec has a higher accuracy and computational efficiency, and is hardly affected by great peaks, peak number, and wavenumber. PMID:26037638
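For readers unfamiliar with polynomial baseline fitting, the sketch below shows the generic iterative variant (clip points above the fit, then refit) that this family of methods builds on; the degree, iteration count, and function names are assumptions, and the actual Goldindec cost function and automatic parameter selection are not reproduced here.

```python
import numpy as np

def iterative_poly_baseline(wavenumbers, intensities, degree=5, n_iter=50):
    """Generic iterative polynomial baseline estimate for a Raman spectrum.
    On each pass, points lying above the current fit (i.e. Raman peaks) are
    replaced by the fit, so the next fit sinks toward the fluorescence background."""
    wavenumbers = np.asarray(wavenumbers, dtype=float)
    y = np.asarray(intensities, dtype=float).copy()
    for _ in range(n_iter):
        coeffs = np.polyfit(wavenumbers, y, degree)
        baseline = np.polyval(coeffs, wavenumbers)
        y = np.minimum(y, baseline)      # suppress peaks for the next fit
    return baseline

# corrected = intensities - iterative_poly_baseline(wn, intensities)
```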
Liu, Chao; Gu, Jinwei
2014-01-01
Classifying raw, unpainted materials--metal, plastic, ceramic, fabric, and so on--is an important yet challenging task for computer vision. Previous works measure subsets of surface spectral reflectance as features for classification. However, acquiring the full spectral reflectance is time consuming and error-prone. In this paper, we propose to use coded illumination to directly measure discriminative features for material classification. Optimal illumination patterns--which we call "discriminative illumination"--are learned from training samples, after projecting to which the spectral reflectance of different materials are maximally separated. This projection is automatically realized by the integration of incident light for surface reflection. While a single discriminative illumination is capable of linear, two-class classification, we show that multiple discriminative illuminations can be used for nonlinear and multiclass classification. We also show theoretically that the proposed method has higher signal-to-noise ratio than previous methods due to light multiplexing. Finally, we construct an LED-based multispectral dome and use the discriminative illumination method for classifying a variety of raw materials, including metal (aluminum, alloy, steel, stainless steel, brass, and copper), plastic, ceramic, fabric, and wood. Experimental results demonstrate its effectiveness.
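The learning step can be loosely pictured as finding a linear projection of spectral reflectance that maximally separates two material classes; a plain two-class Fisher discriminant, as sketched below, captures that idea, although the paper's discriminative illumination must additionally be realizable as a physical (non-negative) light pattern. All names here are hypothetical.

```python
import numpy as np

def fisher_direction(R_a, R_b):
    """Two-class Fisher discriminant over spectral reflectance samples.
    R_a, R_b: (n_samples, n_wavelengths) reflectance measurements per class.
    Returns the unit projection maximizing between-class over within-class scatter;
    a rescaled, non-negative variant of this vector could serve as an illumination weight."""
    mu_a, mu_b = R_a.mean(axis=0), R_b.mean(axis=0)
    Sw = np.cov(R_a, rowvar=False) + np.cov(R_b, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu_a - mu_b)
    return w / np.linalg.norm(w)
```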
NASA Astrophysics Data System (ADS)
Hewitt, Corey A.; Montgomery, David S.; Barbalace, Ryan L.; Carlson, Rowland D.; Carroll, David L.
2014-05-01
By appropriately selecting the carbon nanotube type and n-type dopant for the conduction layers in a multilayered carbon nanotube composite, the total device thermoelectric power output can be increased significantly. The particular materials chosen in this study were raw single walled carbon nanotubes for the p-type layers and polyethylenimine doped single walled carbon nanotubes for the n-type layers. The combination of these two conduction layers leads to a single thermocouple Seebeck coefficient of 96 ± 4 μVK-1, which is 6.3 times higher than that previously reported. This improved Seebeck coefficient leads to a total power output of 14.7 nW per thermocouple at the maximum temperature difference of 50 K, which is 44 times the power output per thermocouple for the previously reported results. Ultimately, these thermoelectric power output improvements help to increase the potential use of these lightweight, flexible, and durable organic multilayered carbon nanotube based thermoelectric modules in low powered electronics applications, where waste heat is available.
Computer program analyzes and monitors electrical power systems (POSIMO)
NASA Technical Reports Server (NTRS)
Jaeger, K.
1972-01-01
Requirements to monitor and/or simulate electric power distribution, power balance, and charge budget are discussed. Computer program to analyze power system and generate set of characteristic power system data is described. Application to status indicators to denote different exclusive conditions is presented.
Experimental Study of Thermophysical Properties of Peat Fuel
NASA Astrophysics Data System (ADS)
Mikhailov, A. S.; Piralishvili, Sh. A.; Stepanov, E. G.; Spesivtseva, N. S.
2017-03-01
A study has been made of thermophysical properties of peat pellets of higher-than-average reactivity due to the pretreatment of the raw material. A synchronous differential analysis of the produced pellets was performed to determine the gaseous products of their decomposition by the mass-spectroscopy method. The parameters of the mass loss rate, the heat-release function, the activation energy, the rate constant of the combustion reaction, and the volatile yield were compared to the properties of pellets compressed by the traditional method on a matrix pelletizer. It has been determined that as a result of the peat pretreatment, the yield of volatile components increases and the activation energy of the combustion reaction decreases by 17 and 30% respectively compared with the raw fuel. This determines its prospects for burning in an atomized state at coal-fired thermal electric power plants.
Noise reduction techniques for Bayer-matrix images
NASA Astrophysics Data System (ADS)
Kalevo, Ossi; Rantanen, Henry
2002-04-01
In this paper, some arrangements for applying Noise Reduction (NR) techniques to images captured by a single-sensor digital camera are studied. Usually, the NR filter processes full three-color-component image data. This requires that the raw Bayer-matrix image data, available from the image sensor, first be interpolated using a Color Filter Array Interpolation (CFAI) method. Another choice is to process the raw Bayer-matrix image data directly. The advantages and disadvantages of both processing orders, before (pre-) CFAI and after (post-) CFAI, are studied with linear, multistage median, multistage median hybrid, and median-rational filters. The comparison is based on the quality of the output image, the processing power requirements, and the amount of memory needed. A solution that improves the preservation of details when NR filtering is applied before the CFAI is also proposed.
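The pre-CFAI ordering can be pictured with a short sketch that median-filters each Bayer sub-lattice separately, so that only same-colour samples are mixed. This is a generic illustration assuming a 2x2 mosaic pattern, not one of the specific filters evaluated in the paper.

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_bayer_pre_cfai(raw):
    """Median-filter a 2x2-mosaic (e.g. RGGB/GRBG) Bayer image before CFA
    interpolation. Each of the four sub-lattices (two greens, red, blue) is
    filtered on its own subsampled grid so colours are never mixed."""
    out = raw.astype(float).copy()
    for dy in (0, 1):
        for dx in (0, 1):
            plane = out[dy::2, dx::2]
            out[dy::2, dx::2] = median_filter(plane, size=3)
    return out
```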
NASA Technical Reports Server (NTRS)
Schilling, D. L.; Oh, S. J.; Thau, F.
1975-01-01
Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.
Ghisolfi, Verônica; Diniz Chaves, Gisele de Lorena; Ribeiro Siman, Renato; Xavier, Lúcia Helena
2017-02-01
The structure of reverse logistics for waste electrical and electronic equipment (WEEE) is essential to minimize the impacts of their improper disposal. In this context, the Brazilian Solid Waste Policy (BSWP) was a regulatory milestone in Brazil, submitting WEEE to the mandatory implementation of reverse logistics systems, involving the integration of waste pickers on the shared responsibility for the life cycle of products. This article aims to measure the impact of such legal incentives and the bargaining power obtained by the volume of collected waste on the effective formalization of waste pickers. The proposed model evaluates the sustainability of supply chains in terms of the use of raw materials due to disposal fees, collection, recycling and return of some materials from desktops and laptops using system dynamics methodology. The results show that even in the absence of bargaining power, the formalization of waste pickers occurs due to legal incentives. It is important to ensure the waste pickers cooperatives access to a minimum amount, which requires a level of protection against unfair competition with companies. Regarding the optimal level of environmental policies, even though the formalization time is long, it is still not enough to guarantee the formalization of waste picker cooperatives, which is dependent on their bargaining power. Steel is the material with the largest decrease in acquisition rate of raw material. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Onizawa, Naoya; Tamakoshi, Akira; Hanyu, Takahiro
2017-08-01
In this paper, reinitialization-free nonvolatile computer systems are designed and evaluated for energy-harvesting Internet of things (IoT) applications. In energy-harvesting applications, as power supplies generated from renewable power sources cause frequent power failures, data processed need to be backed up when power failures occur. Unless data are safely backed up before power supplies diminish, reinitialization processes are required when power supplies are recovered, which results in low energy efficiencies and slow operations. Using nonvolatile devices in processors and memories can realize a faster backup than a conventional volatile computer system, leading to a higher energy efficiency. To evaluate the energy efficiency upon frequent power failures, typical computer systems including processors and memories are designed using 90 nm CMOS or CMOS/magnetic tunnel junction (MTJ) technologies. Nonvolatile ARM Cortex-M0 processors with 4 kB MRAMs are evaluated using a typical computing benchmark program, Dhrystone, which shows a few order-of-magnitude reductions in energy in comparison with a volatile processor with SRAM.
Balancing computation and communication power in power constrained clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piga, Leonardo; Paul, Indrani; Huang, Wei
Systems, apparatuses, and methods for balancing computation and communication power in power constrained environments. A data processing cluster with a plurality of compute nodes may perform parallel processing of a workload in a power constrained environment. Nodes that finish tasks early may be power-gated based on one or more conditions. In some scenarios, a node may predict a wait duration and go into a reduced power consumption state if the wait duration is predicted to be greater than a threshold. The power saved by power-gating one or more nodes may be reassigned for use by other nodes. A cluster agent may be configured to reassign the unused power to the active nodes to expedite workload processing.
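A toy sketch of the stated rule follows: predict a wait duration, gate nodes whose predicted wait exceeds a threshold, and hand the reclaimed power budget to the nodes still computing. The Node fields, threshold, and power figures are assumptions for illustration, not values taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Node:
    power_cap_w: float        # current per-node power budget (watts)
    finished_task: bool       # has this node finished its assigned task?
    predicted_wait_s: float   # predicted idle time, e.g. from a history-based predictor

def rebalance_power(nodes, wait_threshold_s=0.5, gated_power_w=5.0):
    """Gate nodes that finished early and are predicted to wait longer than a
    threshold, then reassign the reclaimed power budget to the active nodes."""
    reclaimed, active = 0.0, []
    for n in nodes:
        if n.finished_task and n.predicted_wait_s > wait_threshold_s:
            reclaimed += n.power_cap_w - gated_power_w
            n.power_cap_w = gated_power_w          # reduced power consumption state
        else:
            active.append(n)
    bonus = reclaimed / len(active) if active else 0.0
    for n in active:
        n.power_cap_w += bonus                     # expedite workload processing
    return nodes
```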
Uncertainty propagation from raw data to final results. [ALEX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, N.M.
1985-01-01
Reduction of data from raw numbers (counts per channel) to physically meaningful quantities (such as cross sections) is in itself a complicated procedure. Propagation of experimental uncertainties through that reduction process has sometimes been perceived as even more difficult, if not impossible. At the Oak Ridge Electron Linear Accelerator, a computer code ALEX has been developed to assist in the propagation process. The purpose of ALEX is to carefully and correctly propagate all experimental uncertainties through the entire reduction procedure, yielding the complete covariance matrix for the reduced data, while requiring little additional input from the experimentalist beyond that which is needed for the data reduction itself. The theoretical method used in ALEX is described, with emphasis on transmission measurements. Application to the natural iron and natural nickel measurements of D.C. Larson is shown.
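The underlying operation, propagating a raw-data covariance matrix through the reduction to obtain the covariance of the reduced quantities, follows the standard first-order rule C_out = J C_in J^T. The sketch below evaluates that rule with a numerical Jacobian under assumed names; it is not the ALEX code itself.

```python
import numpy as np

def propagate_covariance(reduce_fn, raw, raw_cov, eps=1e-6):
    """First-order propagation of raw-data uncertainties through a reduction.
    reduce_fn maps raw counts to reduced quantities (e.g. transmissions);
    the output covariance is J @ raw_cov @ J.T, with J the numerical Jacobian."""
    raw = np.asarray(raw, dtype=float)
    y0 = np.asarray(reduce_fn(raw), dtype=float)
    J = np.zeros((y0.size, raw.size))
    for j in range(raw.size):
        step = np.zeros_like(raw)
        step[j] = eps * max(1.0, abs(raw[j]))
        J[:, j] = (np.asarray(reduce_fn(raw + step)) - y0) / step[j]
    return J @ raw_cov @ J.T
```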
A Factor Graph Approach to Automated GO Annotation
Spetale, Flavio E.; Tapia, Elizabeth; Krsticevic, Flavia; Roda, Fernando; Bulacio, Pilar
2016-01-01
As volume of genomic data grows, computational methods become essential for providing a first glimpse onto gene annotations. Automated Gene Ontology (GO) annotation methods based on hierarchical ensemble classification techniques are particularly interesting when interpretability of annotation results is a main concern. In these methods, raw GO-term predictions computed by base binary classifiers are leveraged by checking the consistency of predefined GO relationships. Both formal leveraging strategies, with main focus on annotation precision, and heuristic alternatives, with main focus on scalability issues, have been described in literature. In this contribution, a factor graph approach to the hierarchical ensemble formulation of the automated GO annotation problem is presented. In this formal framework, a core factor graph is first built based on the GO structure and then enriched to take into account the noisy nature of GO-term predictions. Hence, starting from raw GO-term predictions, an iterative message passing algorithm between nodes of the factor graph is used to compute marginal probabilities of target GO-terms. Evaluations on Saccharomyces cerevisiae, Arabidopsis thaliana and Drosophila melanogaster protein sequences from the GO Molecular Function domain showed significant improvements over competing approaches, even when protein sequences were naively characterized by their physicochemical and secondary structure properties or when loose noisy annotation datasets were considered. Based on these promising results and using Arabidopsis thaliana annotation data, we extend our approach to the identification of most promising molecular function annotations for a set of proteins of unknown function in Solanum lycopersicum. PMID:26771463
Ocean Sciences meets Big Data Analytics
NASA Astrophysics Data System (ADS)
Hurwitz, B. L.; Choi, I.; Hartman, J.
2016-02-01
Hundreds of researchers worldwide have joined forces in the Tara Oceans Expedition to create an unprecedented planetary-scale dataset comprised of state-of-the-art next generation sequencing, microscopy, and physical/chemical metadata to explore ocean biodiversity. This summer the complete collection of data from the 2009-2013 Tara voyage was released. Yet, despite herculean efforts by the Tara Oceans Consortium to make raw data and computationally derived assemblies and gene catalogs available, most researchers are stymied by the sheer volume of the data. Specifically, the most tantalizing research questions lie in understanding the unifying principles that guide the distribution of organisms across the sea and affect climate and ecosystem function. To use the data in this capacity researchers must download, integrate, and analyze more than 7.2 trillion bases of metagenomic data and associated metadata from viruses, bacteria, archaea and small eukaryotes at their own data centers (~9 TB of raw data). Accessing large-scale data sets in this way impedes scientists from replicating and building on prior work. To this end, we are developing a data platform called the Ocean Cloud Commons (OCC) as part of the iMicrobe project. The OCC is built using an algorithm we developed to pre-compute massive comparative metagenomic analyses in a Hadoop big data framework. By maintaining data in a cloud commons researchers have access to scalable computation and real-time analytics to promote the integrated and broad use of planetary-scale datasets, such as Tara.
Translations on Eastern Europe Political, Sociological, and Military Affairs No. 1567
1978-07-21
the industrial development of Jordan by building some industrial capital investments units, for instance electric power plants, cement and ceramics...independence and to the industrialization of these countries and at the same time creates possibilities for expanding imports of economically important raw...construction of important industrial projects, agro-complexes, industrial and agricultural cooperation, the use of new technologies in industry and
Design of a Ku band Instrumentation Synthetic Aperture Radar System
2015-10-14
was 13 MHz, that the noise levels were minimal, and that the variable attenuator was able to raise and lower the power level of the signal. Once all... [Figure residue removed; the plot showed the absolute mean of the raw IQ pulses, with magnitude in dB on the vertical axis.]
System-wide power management control via clock distribution network
Coteus, Paul W.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Reed, Don D.
2015-05-19
An apparatus, method and computer program product for automatically controlling power dissipation of a parallel computing system that includes a plurality of processors. A computing device issues a command to the parallel computing system. A clock pulse-width modulator encodes the command in a system clock signal to be distributed to the plurality of processors. The plurality of processors in the parallel computing system receive the system clock signal including the encoded command, and adjusts power dissipation according to the encoded command.
Reducing power consumption while performing collective operations on a plurality of compute nodes
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2011-10-18
Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program DEKFIS (discrete extended Kalman filter/smoother), formulated for aircraft and helicopter state estimation and data consistency, is described. DEKFIS is set up to pre-process raw test data by removing biases, correcting scale factor errors and providing consistency with the aircraft inertial kinematic equations. The program implements an extended Kalman filter/smoother using the Friedland-Duffy formulation.
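DEKFIS itself implements an extended Kalman filter/smoother in the Friedland-Duffy formulation; as background, a single predict/update cycle of a plain discrete linear Kalman filter looks like the sketch below. The matrix names are the usual textbook ones, not DEKFIS variables, and the extended/smoothing machinery is omitted.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a discrete linear Kalman filter.
    x, P : prior state estimate and covariance
    z    : new measurement vector
    F, H : state-transition and measurement matrices
    Q, R : process- and measurement-noise covariances"""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```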
Alvarez, Guillermo Dufort Y; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo
2018-02-01
This work presents a wireless multichannel electroencephalogram (EEG) recording system featuring lossless and near-lossless compression of the digitized EEG signal. Two novel, low-complexity, efficient compression algorithms were developed and tested in a low-power platform. The algorithms were tested on six public EEG databases, comparing favorably with the best compression rates reported to date in the literature. In its lossless mode, the platform is capable of encoding and transmitting 59-channel EEG signals, sampled at 500 Hz and 16 bits per sample, at a current consumption of 337 µA per channel; this comes with a guarantee that the decompressed signal is identical to the sampled one. The near-lossless mode allows for significant energy savings and/or higher throughputs in exchange for a small guaranteed maximum per-sample distortion in the recovered signal. Finally, we address the tradeoff between computation cost and transmission savings by evaluating three alternatives: sending raw data, or encoding with one of two compression algorithms that differ in complexity and compression performance. We observe that the higher the throughput (number of channels and sampling rate), the larger the benefits obtained from compression.
On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery
Qi, Baogui; Zhuang, Yin; Chen, He; Chen, Liang
2018-01-01
With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited. PMID:29693585
A Fourier method for the analysis of exponential decay curves.
Provencher, S W
1976-01-01
A method based on the Fourier convolution theorem is developed for the analysis of data composed of random noise, plus an unknown constant "base line," plus a sum of (or an integral over a continuous spectrum of) exponential decay functions. The Fourier method's usual serious practical limitation of needing high accuracy data over a very wide range is eliminated by the introduction of convergence parameters and a Gaussian taper window. A computer program is described for the analysis of discrete spectra, where the data involves only a sum of exponentials. The program is completely automatic in that the only necessary inputs are the raw data (not necessarily in equal intervals of time); no potentially biased initial guesses concerning either the number or the values of the components are needed. The outputs include the number of components, the amplitudes and time constants together with their estimated errors, and a spectral plot of the solution. The limiting resolving power of the method is studied by analyzing a wide range of simulated two-, three-, and four-component data. The results seem to indicate that the method is applicable over a considerably wider range of conditions than nonlinear least squares or the method of moments.
NASA Astrophysics Data System (ADS)
Kolyaie, S.; Yaghooti, M.; Majidi, G.
2011-12-01
This paper is part of ongoing research to examine the capability of geostatistical analysis for mobile network coverage prediction, simulation and tuning. Mobile network coverage predictions are used to find network coverage gaps and areas with poor serviceability. They are essential data for engineering and management in order to make better decisions regarding rollout, planning and optimisation of mobile networks. The objective of this research is to evaluate different interpolation techniques in coverage prediction. In the method presented here, raw data collected from drive testing a sample of roads in the study area are analysed and various continuous surfaces are created using different interpolation methods. Two general interpolation methods are used in this paper with different variables: first, Inverse Distance Weighting (IDW) with various powers and numbers of neighbours, and second, ordinary kriging with Gaussian, spherical, circular and exponential semivariogram models with different numbers of neighbours. For the comparison of results, we have used check points coming from the same drive test data. Prediction values for the check points are extracted from each surface and the differences from the actual values are computed. The output of this research helps find an optimised and accurate model for coverage prediction.
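As a concrete reference for the first of the two interpolation families, a minimal IDW predictor with a configurable power and neighbour count might look like the sketch below; the function and parameter names are hypothetical and no kriging step is shown.

```python
import numpy as np

def idw_predict(xy_known, values, xy_query, power=2.0, k=8):
    """Inverse Distance Weighting prediction at query points.
    xy_known : (n, 2) drive-test sample coordinates
    values   : (n,) measured signal levels at those points
    xy_query : (m, 2) prediction locations"""
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    preds = np.empty(len(xy_query))
    for i, q in enumerate(np.asarray(xy_query, dtype=float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        idx = np.argsort(d)[:k]                 # k nearest drive-test samples
        d_k, v_k = d[idx], values[idx]
        if d_k[0] < 1e-12:                      # query coincides with a sample
            preds[i] = v_k[0]
            continue
        w = 1.0 / d_k**power
        preds[i] = np.sum(w * v_k) / np.sum(w)
    return preds
```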
Machine Learning Approaches in Cardiovascular Imaging.
Henglin, Mir; Stein, Gillian; Hushcha, Pavel V; Snoek, Jasper; Wiltschko, Alexander B; Cheng, Susan
2017-10-01
Cardiovascular imaging technologies continue to increase in their capacity to capture and store large quantities of data. Modern computational methods, developed in the field of machine learning, offer new approaches to leveraging the growing volume of imaging data available for analyses. Machine learning methods can now address data-related problems ranging from simple analytic queries of existing measurement data to the more complex challenges involved in analyzing raw images. To date, machine learning has been used in 2 broad and highly interconnected areas: automation of tasks that might otherwise be performed by a human and generation of clinically important new knowledge. Most cardiovascular imaging studies have focused on task-oriented problems, but more studies involving algorithms aimed at generating new clinical insights are emerging. Continued expansion in the size and dimensionality of cardiovascular imaging databases is driving strong interest in applying powerful deep learning methods, in particular, to analyze these data. Overall, the most effective approaches will require an investment in the resources needed to appropriately prepare such large data sets for analyses. Notwithstanding current technical and logistical challenges, machine learning and especially deep learning methods have much to offer and will substantially impact the future practice and science of cardiovascular imaging. © 2017 American Heart Association, Inc.
FATES: a flexible analysis toolkit for the exploration of single-particle mass spectrometer data
NASA Astrophysics Data System (ADS)
Sultana, Camille M.; Cornwell, Gavin C.; Rodriguez, Paul; Prather, Kimberly A.
2017-04-01
Single-particle mass spectrometer (SPMS) analysis of aerosols has become increasingly popular since its invention in the 1990s. Today many iterations of commercial and lab-built SPMSs are in use worldwide. However, supporting analysis toolkits for these powerful instruments are outdated, have limited functionality, or are versions that are not available to the scientific community at large. In an effort to advance this field and allow better communication and collaboration between scientists, we have developed FATES (Flexible Analysis Toolkit for the Exploration of SPMS data), a MATLAB toolkit easily extensible to an array of SPMS designs and data formats. FATES was developed to minimize the computational demands of working with large data sets while still allowing easy maintenance, modification, and utilization by novice programmers. FATES permits scientists to explore, without constraint, complex SPMS data with simple scripts in a language popular for scientific numerical analysis. In addition FATES contains an array of data visualization graphic user interfaces (GUIs) which can aid both novice and expert users in calibration of raw data; exploration of the dependence of mass spectral characteristics on size, time, and peak intensity; and investigations of clustered data sets.
Reefgenomics.Org - a repository for marine genomics data.
Liew, Yi Jin; Aranda, Manuel; Voolstra, Christian R
2016-01-01
Over the last decade, technological advancements have substantially decreased the cost and time of obtaining large amounts of sequencing data. Paired with the exponentially increased computing power, individual labs are now able to sequence genomes or transcriptomes to investigate biological questions of interest. This has led to a significant increase in available sequence data. Although the bulk of data published in articles are stored in public sequence databases, very often, only raw sequencing data are available; miscellaneous data such as assembled transcriptomes, genome annotations etc. are not easily obtainable through the same means. Here, we introduce our website (http://reefgenomics.org) that aims to centralize genomic and transcriptomic data from marine organisms. Besides providing convenient means to download sequences, we provide (where applicable) a genome browser to explore available genomic features, and a BLAST interface to search through the hosted sequences. Through the interface, multiple datasets can be queried simultaneously, allowing for the retrieval of matching sequences from organisms of interest. The minimalistic, no-frills interface reduces visual clutter, making it convenient for end-users to search and explore processed sequence data. DATABASE URL: http://reefgenomics.org. © The Author(s) 2016. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Steinfelds, Eric Victor
The topic of this thesis is the development of the Radioisotope Energy Conversion System (RECS) in a project that uses analytical, computation-assisted design and experimental research to investigate fluorescers and effective transducers with an appropriate energy-range choice for the energy conversion. It is desirable to increase the efficiency of converting the raw kinetic power available from the radioactive material within radioisotope power generators into electrical power. A major step in this direction is the development and use of Radioisotope Energy Conversion Systems to supplement and ideally replace Radioisotope Thermoelectric Generators (RTGs). It is possible to achieve electrical conversion efficiencies exceeding 25% for RECS power devices, compared with only 9 percent efficiency for RTGs. The theoretical basis, with existing materials, for the achievability of efficiencies above 25% is documented within this thesis. The fundamental RECS consists of a radioisotope radiative source (C1); a mediating fluorescent gas (C2) which readily absorbs energy from the beta particles (or alphas) and subsequently emits blue or UV photons; photovoltaic cells (C3) to convert the blue and UV photons into electrical energy [2]; and electrical circuitry (C4). The solid-state component (C3), due to its theoretically (and attainably) high efficiency, is a large step ahead of the RTG design concept. The radioisotope flux source produces the beta(-) or alpha particles. Geometrically, we presently prefer to have the ambient fluorescent gas surround the radioisotope flux source. The fluorescer is a gas such as krypton. The wide band-gap photovoltaic cells have gap energies slightly less than that of the UV photons produced by the fluorescing gas. Diamond and aluminum nitride are good potential choices for photovoltaic cell materials, as is explained herein. Of the material examples discussed, the highest electric power to mass ratio is found to be readily attainable with strontium-90 as the radiative source. Krypton-85 is indisputably the most efficient in RECS devices. In the conclusion in chapter VI, suggestions are given on acceptable ways of containing krypton-85 and providing sufficient shielding on deep-space probes destined to use krypton-85 powered 'batteries'.
Satellite Communications Technology Database. Part 2
NASA Technical Reports Server (NTRS)
2001-01-01
The Satellite Communications Technology Database is a compilation of data on state-of-the-art Ka-band technologies current as of January 2000. Most U.S. organizations have not published much of their Ka-band technology data, and so the great majority of this data is drawn largely from Japanese, European, and Canadian publications and Web sites. The data covers antennas, high power amplifiers, low noise amplifiers, MMIC devices, microwave/IF switch matrices, SAW devices, ASIC devices, power and data storage. The data herein is raw, and is often presented simply as the download of a table or figure from a site, showing specified technical characteristics, with no further explanation.
Design and deployment of an elastic network test-bed in IHEP data center based on SDN
NASA Astrophysics Data System (ADS)
Zeng, Shan; Qi, Fazhi; Chen, Gang
2017-10-01
High energy physics experiments produce huge amounts of raw data, but because the network resources are shared, there is no guarantee of the bandwidth available to each experiment, which may cause link congestion problems. On the other hand, with the development of cloud computing technologies, IHEP has established a cloud platform based on OpenStack which ensures the flexibility of the computing and storage resources, and more and more computing applications have been deployed on virtual machines created by OpenStack. However, under the traditional network architecture, network capacity cannot be acquired elastically, which becomes a bottleneck restricting the flexible use of cloud computing. In order to solve the above problems, we propose an elastic cloud data center network architecture based on SDN, and we also design a high performance controller cluster based on OpenDaylight. In the end, we present our current test results.
Enhancing point of care vigilance using computers.
St Jacques, Paul; Rothman, Brian
2011-09-01
Information technology has the potential to provide a tremendous step forward in perioperative patient safety. Through automated delivery of information through fixed and portable computer resources, clinicians may achieve improved situational awareness of the overall operation of the operating room suite and the state of individual patients in various stages of surgical care. Coupling the raw, but integrated, information with decision support and alerting algorithms enables clinicians to achieve high reliability in documentation compliance and response to care protocols. Future studies and outcomes analysis are needed to quantify the degree of benefit of these new components of perioperative information systems. Copyright © 2011 Elsevier Inc. All rights reserved.
Digital SAR processing using a fast polynomial transform
NASA Technical Reports Server (NTRS)
Butman, S.; Lipes, R.; Rubin, A.; Truong, T. K.
1981-01-01
A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network.
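The quantity being computed is a two-dimensional cyclic correlation of the raw echo data with a point-target impulse response. The sketch below evaluates that correlation with ordinary FFTs purely for illustration; the paper's contribution is to compute the same correlation with a fast polynomial transform instead.

```python
import numpy as np

def cyclic_correlate_2d(echo, point_target_response):
    """Two-dimensional cyclic correlation of raw echo data with the impulse
    response of a point target (both arrays must have the same shape).
    Uses the FFT correlation theorem; a fast polynomial transform evaluates
    the identical quantity with a different fast algorithm."""
    E = np.fft.fft2(echo)
    H = np.fft.fft2(point_target_response)
    return np.real(np.fft.ifft2(E * np.conj(H)))
```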
Applications of massively parallel computers in telemetry processing
NASA Technical Reports Server (NTRS)
El-Ghazawi, Tarek A.; Pritchard, Jim; Knoble, Gordon
1994-01-01
Telemetry processing refers to the reconstruction of full resolution raw instrumentation data with artifacts, of space and ground recording and transmission, removed. Being the first processing phase of satellite data, this process is also referred to as level-zero processing. This study is aimed at investigating the use of massively parallel computing technology in providing level-zero processing to spaceflights that adhere to the recommendations of the Consultative Committee on Space Data Systems (CCSDS). The workload characteristics, of level-zero processing, are used to identify processing requirements in high-performance computing systems. An example of level-zero functions on a SIMD MPP, such as the MasPar, is discussed. The requirements in this paper are based in part on the Earth Observing System (EOS) Data and Operation System (EDOS).
Modified precision-husky progrind H-3045 for chipping biomass
Dana Mitchell; Fernando Seixas; John Klepac
2008-01-01
A specific size of whole tree chip was needed to co-mill wood chips with coal. The specifications are stringent because chips must be mixed with coal, as opposed to a co-firing process. In co-firing, two raw products are conveyed separately to a boiler. In co-milling, such as at Alabama Power's Plant Gadsden, the chip and coal mix must pass through a series of...
ERIC Educational Resources Information Center
Sternberg, Kathleen J.; Baradaran, Laila P.; Abbott, Craig B.; Lamb, Michael E.; Guterman, Eva
2006-01-01
A mega-analytic study was designed to exploit the power of a large data set combining raw data from multiple studies (n=1870) to examine the effects of type of family violence, age, and gender on children's behavior problems assessed using the Child Behavior Checklist (CBCL). Our findings confirmed that children who experienced multiple forms of…
General overview of an integrated lunar oxygen production/brickmaking system
NASA Technical Reports Server (NTRS)
Altemir, D. A.
1993-01-01
On the moon, various processing systems would compete for the same resources, most notably power, raw materials, and perhaps human attention. Therefore, it may be advantageous for two or more processes to be combined such that the integrated system would require fewer resources than separate systems working independently. The synergistic marriage of two such processes--lunar oxygen production and the manufacture of bricks from sintered lunar regolith--is considered.
5. Photocopied August 1978. FRONT OF A HORRY ROTARY FURNACE, ...
5. Photocopied August 1978. FRONT OF A HORRY ROTARY FURNACE, SHOWING INTERIOR ELECTRODES. THE RAW MATERIALS FOR CALCIUM CARBIDE PRODUCTION--LIMESTONE AND COKE--WERE FED BY HOPPERS PLACED BETWEEN THESE ELECTRODES INTO THE ELECTRIC ARC. THE REMOVABLE PLATES ON THE EXTERNAL CIRCUMFERENCE OF THE HORRY FURNACE ARE SHOWN ON THE FIRST THREE FURNACES. (M) - Michigan Lake Superior Power Company, Portage Street, Sault Ste. Marie, Chippewa County, MI
Strategic Challenges of China-Africa New Partnership
2007-03-30
determined to pursue its national interests that include economic growth, to support a large population and enhance its global relevance. Its rapid...economic growth has increased its hunger for energy, raw materials, market and diplomatic relevance to sustain its power. A key region that China...to eradicate poverty and place their countries …on a part of sustained economic growth...” African leaders 2 evolved the New Partnership for
Argus, Christos K; Gill, Nicholas D; Keogh, Justin W L
2012-10-01
Levels of strength and power have been used to effectively discriminate between different levels of competition; however, there is limited literature in rugby union athletes. To assess the difference in strength and power between levels of competition, 112 rugby union players, including 43 professionals, 19 semiprofessionals, 32 academy level, and 18 high school level athletes, were assessed for bench press and box squat strength, and bench throw, and jump squat power. High school athletes were not assessed for jump squat power. Raw data along with data normalized to body mass with a derived power exponent were log transformed and analyzed. With the exception of box squat and bench press strength between professional and semiprofessional athletes, higher level athletes produced greater absolute and relative strength and power outputs than did lower level athletes (4-51%; small to very large effect sizes). Lower level athletes should strive to attain greater levels of strength and power in an attempt to reach or to be physically prepared for the next level of competition. Furthermore, the ability to produce high levels of power, rather than strength, may be a better determinate of playing ability between professional and semiprofessional athletes.
Zielińska, Ewelina; Baraniak, Barbara; Karaś, Monika
2017-09-02
This study investigated the effect of heat treatment of edible insects on antioxidant and anti-inflammatory activities of peptides obtained by in vitro gastrointestinal digestion and absorption process thereof. The antioxidant potential of edible insect hydrolysates was determined as free radical-scavenging activity, ion chelating activity, and reducing power, whereas the anti-inflammatory activity was expressed as lipoxygenase and cyclooxygenase-2 inhibitory activity. The highest antiradical activity against DPPH• (2,2-diphenyl-1-picrylhydrazyl radical) was noted for a peptide fraction from baked cricket Gryllodes sigillatus hydrolysate (IC50 value 10.9 µg/mL) and that against ABTS•+ (2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) radical) was the highest for raw mealworm Tenebrio molitor hydrolysate (inhibitory concentration (IC50) value 5.3 µg/mL). The peptides obtained from boiled locust Schistocerca gregaria hydrolysate showed the highest Fe2+ chelation ability (IC50 value 2.57 µg/mL); furthermore, the highest reducing power was observed for raw G. sigillatus hydrolysate (0.771). The peptide fraction from a protein preparation from the locust S. gregaria exhibited the most significant lipoxygenase and cyclooxygenase-2 inhibitory activity (IC50 values 3.13 µg/mL and 5.05 µg/mL, respectively).
Beyond policy analysis: the raw politics behind opposition to healthy public policy.
Raphael, Dennis
2015-06-01
Despite evidence that public policy that equitably distributes the prerequisites/social determinants of health (PrH/SDH) is a worthy goal, progress in achieving such healthy public policy (HPP) has been uneven. This has especially been the case in nations where the business sector dominates the making of public policy. In response, various models of the policy process have been developed to create what Kickbusch calls a health political science to correct this situation. In this article I examine an aspect of health political science that is frequently neglected: the raw politics of power and influence. Using Canada as an example, I argue that aspects of HPP related to the distribution of key PrH/SDH are embedded within issues of power, influence, and competing interests such that key sectors of society oppose and are successful in blocking such HPP. By identifying these opponents and understanding why and how they block HPP, these barriers can be surmounted. These efforts to identify opponents of HPP that provide an equitable distribution of the PrH/SDH will be especially necessary where a nation's political economy is dominated by the business and corporate sector. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Eskandari, Meghdad; Samavati, Vahid
2015-01-01
A Box-Behnken design (BBD) was used to evaluate the effects of ultrasonic power, extraction time, extraction temperature, and water to raw material ratio on extraction yield of alcohol-insoluble polysaccharide of Althaea rosea leaf (ARLP). Purification was carried out by dialysis method. Chemical analysis of ARLP revealed contained 12.69 ± 0.48% moisture, 79.33 ± 0.51% total sugar, 3.82 ± 0.21% protein, 11.25 ± 0.37% uronic acid and 3.77 ± 0.15% ash. The response surface methodology (RSM) showed that the significant quadratic regression equation with high R(2) (=0.9997) was successfully fitted for extraction yield of ARLP as function of independent variables. The overall optimum region was found to be at the combined level of ultrasonic power 91.85 W, extraction time 29.94 min, extraction temperature 89.78 °C, and the ratio of water to raw material 28.77 (mL/g). At this optimum point, extraction yield of ARLP was 19.47 ± 0.41%. No significant (p>0.05) difference was found between the actual and predicted (19.30 ± 0.075%) values. The results demonstrated that ARLP had strong scavenging activities on DPPH and hydroxyl radicals. Copyright © 2014 Elsevier B.V. All rights reserved.
Rapid Generation of Conceptual and Preliminary Design Aerodynamic Data by a Computer Aided Process
2000-06-01
methodologies, often peculiar requirements such as flexibility and robustness of blended with sensible 'guess-estimated' values. Due to peculiar requirements...from the 'raw' appropriate blending interpolation between the given data aerodynamic data is a process which certainly requires yields generally...like component patches are described by defining the evolution of a conic curve between two opposite boundary curves by means of blending functions
JPRS Report, Science & Technology Europe & Latin America
1997-10-16
It will fall to the developing countries to provide the raw materials, especially those that consume high amounts of energy. In the opinion of...D Explained (Robert Magnaval, Bruno Strigini; BIOFUTUR, May 87) 33 COMPUTERS European Investment Bank Prioritizes European High Tech (Jan...to the task of planning over time the programs and financing for all the activities approved. And on this matter those high financiers, the financial
Constructing and Classifying Email Networks from Raw Forensic Images
2016-09-01
data mining for sequence and pattern mining; in medical imaging for image segmentation; and in computer vision for object recognition" [28]. 2.3.1...machine learning and data mining suite that is written in Python. It provides a platform for experiment selection, recommendation systems, and...predictive modeling. The Orange library is a hierarchically-organized toolbox of data mining components. Data filtering and probability assessment are at the
Neuro-ergonomic Research for Online Assessment of Cognitive Workload
2011-10-01
computer interface (BCI) and medical diagnoses areas. In [65], Kullback-Leibler (KL) divergence was used in the classification of raw EEG signals. It...the features for each EEG channel recorded, and then compared the effectiveness of each feature using a Kruskal-Wallis test. Table 1 lists the...and the KL-distance 5-NN classifier), using different sets of activities. The feature vector and distance measures were tested in pairwise
A Knowledge Engineering Approach to Analysis and Evaluation of Construction Schedules
1990-02-01
software engineering discipline focusing on constructing KBSs. It is an incremental and cyclical process that requires the interaction of a domain expert(s...the U.S. Army Corps of Engineers; and (3) the project management software developer, represented by Pinnell Engineering, Inc. Since the primary...the programming skills necessary to convert the raw knowledge into a form a computer can understand. knowledge engineering: the software engineering
Fischer, Curt R.; Ruebel, Oliver; Bowen, Benjamin P.
2015-09-11
Mass spectrometry imaging (MSI) is used in an increasing number of biological applications. Typical MSI datasets contain unique, high-resolution mass spectra from tens of thousands of spatial locations, resulting in raw data sizes of tens of gigabytes per sample. In this paper, we review technical progress that is enabling new biological applications and that is driving an increase in the complexity and size of MSI data. Handling such data often requires specialized computational infrastructure, software, and expertise. OpenMSI, our recently described platform, makes it easy to explore and share MSI datasets via the web, even when larger than 50 GB. Here we describe the integration of OpenMSI with IPython notebooks for transparent, sharable, and replicable MSI research. An advantage of this approach is that users do not have to share raw data along with analyses; instead, data is retrieved via OpenMSI's web API. The IPython notebook interface provides a low-barrier entry point for data manipulation that is accessible for scientists without extensive computational training. Via these notebooks, analyses can be easily shared without requiring any data movement. We provide example notebooks for several common MSI analysis types including data normalization, plotting, clustering, and classification, and image registration.
Real-time compression of raw computed tomography data: technology, architecture, and benefits
NASA Astrophysics Data System (ADS)
Wegener, Albert; Chandra, Naveen; Ling, Yi; Senzig, Robert; Herfkens, Robert
2009-02-01
Compression of computed tomography (CT) projection samples reduces slip ring and disk drive costs. A low-complexity, CT-optimized compression algorithm called Prism CT™ achieves at least 1.59:1 and up to 2.75:1 lossless compression on twenty-six CT projection data sets. We compare the lossless compression performance of Prism CT to alternative lossless coders, including Lempel-Ziv, Golomb-Rice, and Huffman coders using representative CT data sets. Prism CT provides the best mean lossless compression ratio of 1.95:1 on the representative data set. Prism CT compression can be integrated into existing slip rings using a single FPGA. Prism CT decompression operates at 100 Msamp/sec using one core of a dual-core Xeon CPU. We describe a methodology to evaluate the effects of lossy compression on image quality to achieve even higher compression ratios. We conclude that lossless compression of raw CT signals provides significant cost savings and performance improvements for slip rings and disk drive subsystems in all CT machines. Lossy compression should be considered in future CT data acquisition subsystems because it provides even more system benefits above lossless compression while achieving transparent diagnostic image quality. This result is demonstrated on a limited dataset using appropriately selected compression ratios and an experienced radiologist.
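Of the baseline coders the study compares against, Golomb-Rice is representative of the low-complexity class suited to FPGA slip-ring integration. The sketch below shows only the textbook Golomb-Rice codeword construction for prediction residuals; it is not the Prism CT algorithm, and the parameter k and the first-difference predictor are assumptions.

```python
def zigzag(x):
    """Map a signed prediction residual to a non-negative integer."""
    return 2 * x if x >= 0 else -2 * x - 1

def rice_encode(value, k):
    """Golomb-Rice codeword (as a bit string) for a non-negative integer:
    unary quotient, a '0' terminator, then the k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

# Example: encode first-difference residuals of one projection line
# bits = "".join(rice_encode(zigzag(b - a), k=4) for a, b in zip(samples, samples[1:]))
```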
GPS synchronized power system phase angle measurements
NASA Astrophysics Data System (ADS)
Wilson, Robert E.; Sterlina, Patrick S.
1994-09-01
This paper discusses the use of Global Positioning System (GPS) synchronized equipment for the measurement and analysis of key power system quantities. Two GPS-synchronized phasor measurement units (PMUs) were installed before testing. The PMUs recorded the dynamic response of the power system phase angles when the northern California power grid was excited by the artificial short circuits. Power system planning engineers perform detailed computer-generated simulations of the dynamic response of the power system to naturally occurring short circuits. The computer simulations use models of transmission lines, transformers, circuit breakers, and other high-voltage components. This work will compare computer simulations of the same event with the field measurements.
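The key quantity a pair of GPS-synchronized PMUs delivers is the phase-angle difference between two time-aligned bus voltages. A minimal sketch of that computation on synthetic, GPS-time-aligned samples follows; the sample rate, window length, and signals are assumptions for illustration, not the instruments' actual output format.

```python
# Minimal sketch: estimate the phase-angle difference between two bus voltages
# from GPS-time-aligned waveform samples, the basic quantity a pair of PMUs
# provides. Sample rate, frequency and the synthetic signals are illustrative.
import numpy as np

fs, f0, n = 2880.0, 60.0, 2880          # samples/s, nominal frequency, 1 s window
t = np.arange(n) / fs

def phasor(samples, t, f0):
    """Single-frequency DFT correlation -> complex phasor at f0."""
    ref = np.exp(-2j * np.pi * f0 * t)
    return 2.0 * np.mean(samples * ref)

v_bus_a = np.cos(2 * np.pi * f0 * t + 0.10)          # synthetic bus A voltage
v_bus_b = np.cos(2 * np.pi * f0 * t - 0.25)          # synthetic bus B voltage
delta = np.angle(phasor(v_bus_a, t, f0) / phasor(v_bus_b, t, f0))
print(np.degrees(delta))                              # ~20.05 degrees (0.35 rad)
```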
Ultrasonically enhanced extraction of bioactive principles from Quillaja Saponaria Molina.
Gaete-Garretón, L; Vargas-Hernández, Yolanda; Cares-Pacheco, María G; Sainz, Javier; Alarcón, John
2011-07-01
A study of ultrasonic enhancement in the extraction of bioactive principles from Quillaja Saponaria Molina (Quillay) is presented. The effects influencing the extraction process were studied through a two-level factorial design. The factors considered in the experimental design were: granulometry, extraction time, acoustic power, raw matter/solvent ratio (concentration), and acoustic impedance. It was found that for aqueous extraction the main factors affecting the ultrasonically assisted process were granulometry, raw matter/solvent ratio, and extraction time. The extraction ratio was increased by the ultrasonic effect, and a reduction in extraction time was verified without any effect on product quality. In addition, the process can be carried out at lower temperatures than the conventional method. As the process developed uses chips from the branches of trees, and not only the bark, this research contributes to making saponin exploitation a sustainable industry. Copyright © 2010 Elsevier B.V. All rights reserved.
Raw materials: U.S. grows more vulnerable to third world cartels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wade, N.
1974-01-18
The success of the Arab oil embargo may be encouraging producers of other scarce raw materials to form a cartel against the United States. Pessimistic and optimistic opinions abound. Most third world countries need to sell as much as they can, and the political hues of most potential members make a cartel arrangement unlikely: Southern Rhodesia, South Africa, Turkey, and the USSR together corner the exportable market of chromium, for example, yet a coalition among them is improbable. The U.S. government has a powerful weapon against cartels in the form of a massive, billion-dollar stockpile of strategic minerals. The nonenergy mineral position of the USA nevertheless seems certain to weaken in the long haul. Technological advances, recycling, substitution, and changing lifestyles to pay for deferred social costs of past consumption and inequities in distribution seem to be in order. (MCW)
chipPCR: an R package to pre-process raw data of amplification curves.
Rödiger, Stefan; Burdukiewicz, Michał; Schierack, Peter
2015-09-01
Both the quantitative real-time polymerase chain reaction (qPCR) and quantitative isothermal amplification (qIA) are standard methods for nucleic acid quantification. Numerous real-time read-out technologies have been developed. Despite the continuous interest in amplification-based techniques, there are only a few tools for pre-processing of amplification data. However, a transparent tool for precise control of raw data is indispensable in several scenarios, for example, during the development of new instruments. chipPCR is an R package for the pre-processing and quality analysis of raw data of amplification curves. The package takes advantage of R's S4 object model and offers an extensible environment. chipPCR contains tools for raw data exploration: normalization, baselining, imputation of missing values, a powerful wrapper for amplification curve smoothing and a function to detect the start and end of an amplification curve. The capabilities of the software are enhanced by the implementation of algorithms unavailable in R, such as a 5-point stencil for derivative interpolation. Simulation tools, statistical tests, plots for data quality management, amplification efficiency/quantification cycle calculation, and datasets from qPCR and qIA experiments are part of the package. Core functionalities are integrated in GUIs (web-based and standalone shiny applications), thus streamlining analysis and report generation. http://cran.r-project.org/web/packages/chipPCR. Source code: https://github.com/michbur/chipPCR. stefan.roediger@b-tu.de Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
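chipPCR itself is an R package; the fragment below only sketches, in Python, two of the pre-processing steps the abstract names (baselining against the early-cycle signal and smoothing) on a synthetic amplification curve, and is not the package's API.

```python
# Sketch of two amplification-curve pre-processing steps described above
# (baselining and smoothing) on a synthetic curve; not the chipPCR API.
import numpy as np

def preprocess_curve(fluorescence, baseline_cycles=8, window=5):
    """Subtract a baseline estimated from the first cycles, then apply a moving average."""
    y = np.asarray(fluorescence, dtype=float)
    y = y - y[:baseline_cycles].mean()                     # baselining
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="same")             # simple smoother

cycles = np.arange(40)
raw = 0.02 * cycles + 1.0 / (1.0 + np.exp(-(cycles - 22) / 2.0))   # drift + sigmoid
raw += np.random.default_rng(0).normal(0, 0.01, raw.size)          # measurement noise
smoothed = preprocess_curve(raw)
takeoff = int(np.argmax(smoothed > 0.1 * smoothed.max()))          # crude start-of-amplification guess
```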
Two SPSS programs for interpreting multiple regression results.
Lorenzo-Seva, Urbano; Ferrando, Pere J; Chico, Eliseo
2010-02-01
When multiple regression is used in explanation-oriented designs, it is very important to determine both the usefulness of the predictor variables and their relative importance. Standardized regression coefficients are routinely provided by commercial programs. However, they generally function rather poorly as indicators of relative importance, especially in the presence of substantially correlated predictors. We provide two user-friendly SPSS programs that implement currently recommended techniques and recent developments for assessing the relevance of the predictors. The programs also allow the user to take into account the effects of measurement error. The first program, MIMR-Corr.sps, uses a correlation matrix as input, whereas the second program, MIMR-Raw.sps, uses the raw data and computes bootstrap confidence intervals of different statistics. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from http://brm.psychonomic-journals.org/content/supplemental.
SPSS and SAS programs for comparing Pearson correlations and OLS regression coefficients.
Weaver, Bruce; Wuensch, Karl L
2013-09-01
Several procedures that use summary data to test hypotheses about Pearson correlations and ordinary least squares regression coefficients have been described in various books and articles. To our knowledge, however, no single resource describes all of the most common tests. Furthermore, many of these tests have not yet been implemented in popular statistical software packages such as SPSS and SAS. In this article, we describe all of the most common tests and provide SPSS and SAS programs to perform them. When they are applicable, our code also computes 100 × (1 - α)% confidence intervals corresponding to the tests. For testing hypotheses about independent regression coefficients, we demonstrate one method that uses summary data and another that uses raw data (i.e., Potthoff analysis). When the raw data are available, the latter method is preferred, because use of summary data entails some loss of precision due to rounding.
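One of the summary-data tests referred to above, comparing two independent Pearson correlations, reduces to Fisher's r-to-z transformation. A sketch of that test follows; it mirrors the idea, not the authors' SPSS or SAS code.

```python
# Sketch of one of the summary-data tests discussed above: comparing two
# independent Pearson correlations with Fisher's r-to-z transformation.
from math import atanh, sqrt
from scipy.stats import norm

def compare_independent_r(r1, n1, r2, n2):
    """Two-sided z test for H0: rho1 == rho2 using summary data only."""
    z = (atanh(r1) - atanh(r2)) / sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return z, 2.0 * norm.sf(abs(z))

# Illustrative summary data (hypothetical sample correlations and sizes):
z, p = compare_independent_r(r1=0.55, n1=120, r2=0.35, n2=150)
```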
The raw disk I/O performance of Compaq StorageWorks RAID arrays under Tru64 UNIX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uselton, A C
2000-10-19
We report on the raw disk I/O performance of a set of Compaq StorageWorks RAID arrays connected to our cluster of Compaq ES40 computers via Fibre Channel. The best cumulative peak sustained data rate is 117 MB/s per node for reads and 77 MB/s per node for writes. This value occurs for a configuration in which a node has two Fibre Channel interfaces to a switch, which in turn has two connections to each of two Compaq StorageWorks RAID arrays. Each RAID array has two HSG80 RAID controllers controlling (together) two 5+P RAID chains. A 10% more space-efficient arrangement using a single 11+P RAID chain in place of the two 5+P chains is 25% slower for reads and 40% slower for writes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Donald F.; Schulz, Carl; Konijnenburg, Marco
High-resolution Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry imaging enables the spatial mapping and identification of biomolecules from complex surfaces. The need for long time-domain transients, and thus large raw file sizes, results in a large amount of raw data (“big data”) that must be processed efficiently and rapidly. This can be compounded by large-area imaging and/or high spatial resolution imaging. For FT-ICR, data processing and data reduction must not compromise the high mass resolution afforded by the mass spectrometer. The continuous mode “Mosaic Datacube” approach allows high mass resolution visualization (0.001 Da) of mass spectrometry imaging data, but requires additional processing as compared to feature-based processing. We describe the use of distributed computing for processing of FT-ICR MS imaging datasets with generation of continuous mode Mosaic Datacubes for high mass resolution visualization. An eight-fold improvement in processing time is demonstrated using a Dutch nationally available cloud service.
Murugaiyan, Jayaseelan; Eravci, Murat; Weise, Christoph; Roesler, Uwe
2017-06-01
Here, we provide the dataset associated with our research article 'Label-free quantitative proteomic analysis of harmless and pathogenic strains of infectious microalgae, Prototheca spp.' (Murugaiyan et al., 2017) [1]. This dataset describes liquid chromatography-mass spectrometry (LC-MS)-based protein identification and quantification of a non-infectious strain, Prototheca zopfii genotype 1, and two strains associated with severe and mild infections, respectively, P. zopfii genotype 2 and Prototheca blaschkeae. Protein identification and label-free quantification were carried out by analysing MS raw data using the MaxQuant-Andromeda software suite. The expression-level differences of the identified proteins among the strains were computed using Perseus software and the results were presented in [1]. This Data in Brief article provides the MaxQuant output file and raw data deposited in the PRIDE repository with the dataset identifier PXD005305.
Ioannidis, Vassilios; van Nimwegen, Erik; Stockinger, Heinz
2016-01-01
ISMARA ( ismara.unibas.ch) automatically infers the key regulators and regulatory interactions from high-throughput gene expression or chromatin state data. However, given the large sizes of current next generation sequencing (NGS) datasets, data uploading times are a major bottleneck. Additionally, for proprietary data, users may be uncomfortable with uploading entire raw datasets to an external server. Both these problems could be alleviated by providing a means by which users could pre-process their raw data locally, transferring only a small summary file to the ISMARA server. We developed a stand-alone client application that pre-processes large input files (RNA-seq or ChIP-seq data) on the user's computer for performing ISMARA analysis in a completely automated manner, including uploading of small processed summary files to the ISMARA server. This reduces file sizes by up to a factor of 1000, and upload times from many hours to mere seconds. The client application is available from ismara.unibas.ch/ISMARA/client. PMID:28232860
Numerical techniques for high-throughput reflectance interference biosensing
NASA Astrophysics Data System (ADS)
Sevenler, Derin; Ünlü, M. Selim
2016-06-01
We have developed a robust and rapid computational method for processing the raw spectral data collected from thin film optical interference biosensors. We have applied this method to Interference Reflectance Imaging Sensor (IRIS) measurements and observed a 10,000-fold improvement in processing time, unlocking a variety of clinical and scientific applications. Interference biosensors have advantages over similar technologies in certain applications, for example highly multiplexed measurements of molecular kinetics. However, processing raw IRIS data into useful measurements has been prohibitively time consuming for high-throughput studies. Here we describe the implementation of a lookup table (LUT) technique that provides accurate results in far less time than naive methods. We also discuss an additional benefit: the LUT method can be used with a wider range of interference layer thicknesses and experimental configurations that are incompatible with methods that require fitting the spectral response.
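A rough sketch of the lookup-table idea: precompute model spectra over a grid of candidate film thicknesses once, then assign each measured pixel spectrum to its nearest table entry instead of running an iterative fit per pixel. The reflectance model and grid below are stand-ins, not the IRIS optical model or processing code.

```python
# Sketch of the lookup-table (LUT) idea: precompute model reflectance spectra over
# a grid of candidate film thicknesses, then map each measured spectrum to the
# nearest table entry. The reflectance model below is a toy stand-in, not the
# IRIS optical model.
import numpy as np

wavelengths = np.linspace(450e-9, 650e-9, 64)            # metres
thicknesses = np.linspace(0.0, 200e-9, 2001)             # candidate film thicknesses

def model_spectrum(d, n_film=1.45):
    """Toy thin-film interference model used only to populate the table."""
    return 0.5 + 0.5 * np.cos(4 * np.pi * n_film * d / wavelengths)

lut = np.stack([model_spectrum(d) for d in thicknesses])          # (2001, 64) table

def thickness_from_spectrum(measured):
    """Return the table thickness whose spectrum is closest in the least-squares sense."""
    idx = np.argmin(np.sum((lut - measured) ** 2, axis=1))
    return thicknesses[idx]

# e.g. a noisy measurement of a 120 nm film maps back to ~120 nm:
noisy = model_spectrum(120e-9) + np.random.default_rng(1).normal(0, 0.01, wavelengths.size)
print(thickness_from_spectrum(noisy))
```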
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A. A.; Tauxe, L.; Constable, C.; Jarboe, N. A.
2011-12-01
The Magnetics Information Consortium (MagIC) provides an archive for the wealth of rock- and paleomagnetic data and interpretations from studies on natural and synthetic samples. As with many fields, most peer-reviewed paleo- and rock magnetic publications only include high level results. However, access to the raw data from which these results were derived is critical for compilation studies and when updating results based on new interpretation and analysis methods. MagIC provides a detailed metadata model with places for everything from raw measurements to their interpretations. Prior to MagIC, these raw data were extremely cumbersome to collect because they mostly existed in a lab's proprietary format on investigator's personal computers or undigitized in field notebooks. MagIC has developed a suite of offline and online tools to enable the paleomagnetic, rock magnetic, and affiliated scientific communities to easily contribute both their previously published data and data supporting an article undergoing peer-review, to retrieve well-annotated published interpretations and raw data, and to analyze and visualize large collections of published data online. Here we present the technology we chose (including VBA in Excel spreadsheets, Python libraries, FastCGI JSON webservices, Oracle procedures, and jQuery user interfaces) and how we implemented it in order to serve the scientific community as seamlessly as possible. These tools are now in use in labs worldwide, have helped archive many valuable legacy studies and datasets, and routinely enable new contributions to the MagIC Database (http://earthref.org/MAGIC/).
Olukemi, Bukola Eugenia; Asikhia, Ikuosho Charity; Akindahunsi, Akintunde Afolabi
2018-01-01
The present investigation was designed to evaluate the mineral element bio-accessibility and antioxidant indices of blanched Basella rubra at different phases of simulated in vitro digestion (oral, gastric, and intestinal). The phenolic composition of the processed vegetable was determined using a high-performance liquid chromatography (HPLC)-diode-array detection method. Mineral composition, total phenolic content (TPC), total flavonoid content (TFC), ferric reducing antioxidant power (FRAP), and total antioxidant activity (TAA) of the in vitro digested blanched and raw vegetable were also determined. HPLC analysis revealed the presence of some phenolic compounds, with higher levels (mg/g) of polyphenols in raw B. rubra (catechin, 1.12; p-coumaric acid, 6.17; caffeic acid, 2.05) compared with the blanched counterpart, with the exception of chlorogenic acid (2.84), which was higher in the blanched vegetable. The mineral content (mg/100 g) showed a higher value in enzyme-treated raw vegetable compared to the blanched counterparts, with few exceptions. The results revealed a higher level of some of the evaluated minerals at the intestinal phase of digestion (Zn, 6.36/5.31; Mg, 5.29/8.97; Ca, 2,307.69/1,565.38; Na, 5,128/4,128.21) for raw and blanched, respectively, with the exception of Fe, K, and P. The results of the antioxidant indices of in vitro digested B. rubra revealed a higher value at the intestinal phase of in vitro digestion, with raw vegetal matter ranking higher (TPC, 553.56 mg/g; TFC, 518.88 mg/g; FRAP, 8.15 mg/g; TAA, 5,043.16 μM Trolox equivalent/g) than the blanched counterpart. The studied vegetable contains important minerals and antioxidant molecules that would be readily available after passing through the gastrointestinal tract and could be harnessed as functional foods. PMID:29662844
Standardization and quality control parameters for Muktā Bhasma (calcined pearl)
Joshi, Namrata; Sharma, Khemchand; Peter, Hema; Dash, Manoj Kumar
2015-01-01
Background: Muktā Bhasma (MB) is a traditional Ayurvedic preparation for cough, breathlessness, and eye disorders; it is regarded as a powerful cardiac tonic and mood elevator and is known to promote strength, intellect, and semen production. Objectives: The present research work was conducted to generate a fingerprint for raw and processed MB for quality assessment and standardization using classical and other techniques. Setting and Design: Three samples of MB were prepared by purification (śodhana) of Muktā (pearl) followed by repeated calcinations (Māraṇa). The resultant product was subjected to organoleptic tests and Ayurvedic tests for quality control such as rekhāpūrṇatā, vāritaratva, and nirdhūmatva. Materials and Methods: For quality control, physicochemical parameters such as loss on drying, total ash value, acid-insoluble ash, specific gravity, and pH value, and other tests using techniques such as elemental analysis with energy-dispersive X-ray analysis (EDAX), structural study with powder X-ray diffraction, and particle size with scanning electron microscopy (SEM), were carried out on raw Muktā, Śodhita Muktā, and triplicate batches of MB. Results: The study showed that the raw material Muktā was calcium carbonate in the aragonite form, which on repeated calcinations was converted into the more stable calcite form. SEM studies revealed that in the raw and purified materials the particles were scattered and unevenly arranged in the range of 718.7–214.7 nm, while in the final product uniformly arranged, stable, rod-shaped, and rounded particles with more agglomerates were observed in the range of 279.2–79.93 nm. EDAX analysis revealed calcium as a major ingredient in MB (average 46.32%), which increased gradually over the stages of processing (raw 34.11%, Śodhita 37.5%). Conclusion: Quality control parameters have been quantified for fingerprinting of MB prepared using a particular method. PMID:26600667
Infrastructures for Distributed Computing: the case of BESIII
NASA Astrophysics Data System (ADS)
Pellegrino, J.
2018-05-01
BESIII is an electron-positron collision experiment hosted at BEPCII in Beijing and aimed at investigating tau-charm physics. BESIII has now been running for several years and has gathered more than 1 PB of raw data. In order to analyze these data and perform massive Monte Carlo simulations, a large amount of computing and storage resources is needed. The distributed computing system is based upon DIRAC and has been in production since 2012. It integrates computing and storage resources from different institutes and a variety of resource types such as cluster, grid, cloud, or volunteer computing. About 15 sites from the BESIII Collaboration from all over the world have joined this distributed computing infrastructure, giving a significant contribution to the IHEP computing facility. Nowadays cloud computing is playing a key role in the HEP computing field, due to its scalability and elasticity. Cloud infrastructures take advantage of several tools, such as VMDirac, to manage virtual machines through cloud managers according to the job requirements. With the virtually unlimited resources from commercial clouds, the computing capacity could scale accordingly in order to deal with any burst demands. General computing models have been discussed in the talk and are addressed herewith, with particular focus on the BESIII infrastructure. Moreover, new computing tools and upcoming infrastructures will be addressed.
Reconfigurable Computing for Computational Science: A New Focus in High Performance Computing
2006-11-01
in the past decade. Researchers are regularly employing the power of large computing systems and parallel processing to tackle larger and more ... complex problems in all of the physical sciences. For the past decade or so, most of this growth in computing power has been “free” with increased ... the scientific computing community as a means to continued growth in computing capability. This paper offers a glimpse of the hardware and
NASA Astrophysics Data System (ADS)
Nandigam, V.; Crosby, C. J.; Baru, C.; Arrowsmith, R.
2009-12-01
LIDAR is an excellent example of the new generation of powerful remote sensing data now available to Earth science researchers. Capable of producing digital elevation models (DEMs) more than an order of magnitude higher resolution than those currently available, LIDAR data allows earth scientists to study the processes that contribute to landscape evolution at resolutions not previously possible, yet essential for their appropriate representation. Along with these high-resolution datasets comes an increase in the volume and complexity of data that the user must efficiently manage and process in order for it to be scientifically useful. Although there are expensive commercial LIDAR software applications available, processing and analysis of these datasets are typically computationally inefficient on the conventional hardware and software that is currently available to most of the Earth science community. We have designed and implemented an Internet-based system, the OpenTopography Portal, that provides integrated access to high-resolution LIDAR data as well as web-based tools for processing of these datasets. By using remote data storage and high performance compute resources, the OpenTopography Portal attempts to simplify data access and standard LIDAR processing tasks for the Earth Science community. The OpenTopography Portal allows users to access massive amounts of raw point cloud LIDAR data as well as a suite of DEM generation tools to enable users to generate custom digital elevation models to best fit their science applications. The Cyberinfrastructure software tools for processing the data are freely available via the portal and conveniently integrated with the data selection in a single user-friendly interface. The ability to run these tools on powerful Cyberinfrastructure resources instead of their own labs provides a huge advantage in terms of performance and compute power. The system also encourages users to explore data processing methods and the variations in algorithm parameters since all of the processing is done remotely and numerous jobs can be submitted in sequence. The web-based software also eliminates the need for users to deal with the hassles and costs associated with software installation and licensing while providing adequate disk space for storage and personal job archival capability. Although currently limited to data access and DEM generation tasks, the OpenTopography system is modular in design and can be modified to accommodate new processing tools as they become available. We are currently exploring implementation of higher-level DEM analysis tasks in OpenTopography, since such processing is often computationally intensive and thus lends itself to utilization of cyberinfrastructure. Products derived from OpenTopography processing are available in a variety of formats ranging from simple Google Earth visualizations of LIDAR-derived hillshades to various GIS-compatible grid formats. To serve community users less interested in data processing, OpenTopography also hosts 1 km^2 digital elevation model tiles as well as Google Earth image overlays for a synoptic view of the data.
Computer Power. Part 2: Electrical Power Problems and Their Amelioration.
ERIC Educational Resources Information Center
Price, Bennett J.
1989-01-01
Describes electrical power problems that affect computer users, including spikes, sags, outages, noise, frequency variations, and static electricity. Ways in which these problems may be diagnosed and cured are discussed. Sidebars consider transformers; power distribution units; surge currents/linear and non-linear loads; and sizing the power…
Onboard Data Processors for Planetary Ice-Penetrating Sounding Radars
NASA Astrophysics Data System (ADS)
Tan, I. L.; Friesenhahn, R.; Gim, Y.; Wu, X.; Jordan, R.; Wang, C.; Clark, D.; Le, M.; Hand, K. P.; Plaut, J. J.
2011-12-01
Among the many concerns faced by outer planetary missions, science data storage and transmission hold special significance. Such missions must contend with limited onboard storage, brief data downlink windows, and low downlink bandwidths. A potential solution to these issues lies in employing onboard data processors (OBPs) to convert raw data into products that are smaller and closely capture relevant scientific phenomena. In this paper, we present the implementation of two OBP architectures for ice-penetrating sounding radars tasked with exploring Europa and Ganymede. Our first architecture utilizes an unfocused processing algorithm extended from the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS, Jordan et al. 2009). Compared to downlinking raw data, we are able to reduce data volume by approximately 100 times through OBP usage. To ensure the viability of our approach, we have implemented, simulated, and synthesized this architecture using both VHDL and Matlab models (with fixed-point and floating-point arithmetic) in conjunction with Modelsim. Creation of a VHDL model of our processor is the principal step in transitioning to actual digital hardware, whether in an FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit), and successful simulation and synthesis strongly indicate feasibility. In addition, we examined the tradeoffs faced in the OBP between fixed-point accuracy, resource consumption, and data product fidelity. Our second architecture is based upon a focused fast back projection (FBP) algorithm that requires a modest amount of computing power and on-board memory while yielding high along-track resolution and improved slope detection capability. We present an overview of the algorithm and details of our implementation, also in VHDL. With the appropriate tradeoffs, the use of OBPs can significantly reduce data downlink requirements without sacrificing data product fidelity. Through the development, simulation, and synthesis of two different OBP architectures, we have proven the feasibility and efficacy of an OBP for planetary ice-penetrating radars.
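The roughly 100-fold data-volume reduction quoted above comes from unfocused onboard processing; the generic mechanism can be illustrated by coherent presumming, that is, averaging blocks of consecutive complex echoes. The sketch below shows only that generic idea, not the MARSIS-derived flight algorithm.

```python
# Generic illustration of onboard data reduction by unfocused (coherent) presumming:
# averaging N consecutive complex range lines cuts the data volume by a factor of N
# while raising SNR for slowly varying subsurface echoes. Generic idea only, not
# the MARSIS-derived flight algorithm described above.
import numpy as np

def presum(range_lines, n_presum=100):
    """Average blocks of `n_presum` consecutive complex echoes (trailing remainder dropped)."""
    n_keep = (range_lines.shape[0] // n_presum) * n_presum
    blocks = range_lines[:n_keep].reshape(-1, n_presum, range_lines.shape[1])
    return blocks.mean(axis=1)

rng = np.random.default_rng(0)
raw = rng.normal(size=(10_000, 512)) + 1j * rng.normal(size=(10_000, 512))  # synthetic echoes
reduced = presum(raw, 100)        # shape (100, 512): ~100x fewer samples to downlink
```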
NASA Astrophysics Data System (ADS)
Crone, T. J.; Knuth, F.; Marburg, A.
2016-12-01
A broad array of Earth science problems can be investigated using high-definition video imagery from the seafloor, ranging from those that are geological and geophysical in nature to those that are biological and water-column related. A high-definition video camera was installed as part of the Ocean Observatories Initiative's core instrument suite on the Cabled Array, a real-time fiber optic data and power system that stretches from the Oregon Coast to Axial Seamount on the Juan de Fuca Ridge. This camera runs a 14-minute pan-tilt-zoom routine 8 times per day, focusing on locations of scientific interest on and near the Mushroom vent in the ASHES hydrothermal field inside the Axial caldera. The system produces 13 GB of lossless HD video every 3 hours, and at the time of this writing it has generated 2100 recordings totaling 28.5 TB since it began streaming data into the OOI archive in August of 2015. Because of the large size of this dataset, downloading the entirety of the video for long-timescale investigations is not practical. We are developing a set of user-side tools for downloading single frames and frame ranges from the OOI HD camera raw data archive to aid users interested in using these data for their research. We use these tools to download about one year's worth of partial frame sets to investigate several questions regarding the hydrothermal system at ASHES, including the variability of bacterial "floc" in the water column and changes in high-temperature fluid fluxes using optical flow techniques. We show that while these user-side tools can facilitate rudimentary scientific investigations using the HD camera data, a server-side computing environment that allows users to explore this dataset without downloading any raw video will be required for more advanced investigations to flourish.
Computational Power of Symmetry-Protected Topological Phases.
Stephen, David T; Wang, Dong-Sheng; Prakash, Abhishodh; Wei, Tzu-Chieh; Raussendorf, Robert
2017-07-07
We consider ground states of quantum spin chains with symmetry-protected topological (SPT) order as resources for measurement-based quantum computation (MBQC). We show that, for a wide range of SPT phases, the computational power of ground states is uniform throughout each phase. This computational power, defined as the Lie group of executable gates in MBQC, is determined by the same algebraic information that labels the SPT phase itself. We prove that these Lie groups always contain a full set of single-qubit gates, thereby affirming the long-standing conjecture that general SPT phases can serve as computationally useful phases of matter.
Emulating a million machines to investigate botnets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudish, Donald W.
2010-06-01
Researchers at Sandia National Laboratories in Livermore, California are creating what is in effect a vast digital petri dish able to hold one million operating systems at once in an effort to study the behavior of rogue programs known as botnets. Botnets are used extensively by malicious computer hackers to steal computing power from Internet-connected computers. The hackers harness the stolen resources into a scattered but powerful computer that can be used to send spam, execute phishing scams, or steal digital information. These remote-controlled 'distributed computers' are difficult to observe and track. Botnets may take over parts of tens of thousands or in some cases even millions of computers, making them among the world's most powerful computers for some applications.
Producing microbial polyhydroxyalkanoate (PHA) biopolyesters in a sustainable manner.
Koller, Martin; Maršálek, Lukáš; de Sousa Dias, Miguel Miranda; Braunegg, Gerhart
2017-07-25
Sustainable production of microbial polyhydroxyalkanoate (PHA) biopolyesters on a larger scale has to consider the "four magic e": economic, ethical, environmental, and engineering aspects. Moreover, sustainability of PHA production can be quantified by modern tools of Life Cycle Assessment. Economic issues are to a large extent affected by the applied production mode, downstream processing, and, most of all, by the selection of carbon-rich raw materials as feedstocks for PHA production by safe and naturally occurring wild type microorganisms. In order to comply with ethics, such raw materials should be used which do not interfere with human nutrition and animal feed supply chains, and which can be converted into accessible carbon feedstocks by simple methods of upstream processing. Examples were identified in carbon-rich waste materials from various industrial branches closely connected to food production. Therefore, the article shines a light on hetero-, mixo-, and autotrophic PHA production based on various industrial residues from different branches. Emphasis is devoted to the integration of PHA production based on selected raw materials into the holistic patterns of sustainability; this encompasses the choice of new, powerful microbial production strains, non-hazardous, environmentally benign methods for PHA recovery, and reutilization of waste streams from the PHA production process itself. Copyright © 2016 Elsevier B.V. All rights reserved.
Improved visibility of character conflicts in quasi-median networks with the EMPOP NETWORK software
Zimmermann, Bettina; Röck, Alexander W.; Dür, Arne; Parson, Walther
2014-01-01
Aim: To provide a valuable tool for graphical representation of mitochondrial DNA (mtDNA) data that enables visual emphasis on complex substructures within the network to highlight possible ambiguities and errors. Method: We applied the new NETWORK graphical user interface, available via EMPOP (European DNA Profiling Group Mitochondrial DNA Population Database; www.empop.org), by means of two mtDNA data sets that were submitted for quality control. Results: The quasi-median network torsi of the two data sets resulted in complex reticulations, suggesting ambiguous data. To check the corresponding raw data, accountable nodes and connecting branches of the network could be identified by highlighting induced subgraphs with concurrent dimming of their complements. This is achieved by accentuating the relevant substructures in the network: mouse clicking on a node displays a list of all mtDNA haplotypes included in that node; the selection of a branch specifies the mutation(s) connecting two nodes. These mutations should then be evaluated against the raw data. Conclusion: Inspection of the raw data confirmed the presence of phantom mutations due to suboptimal electrophoresis conditions and data misinterpretation. The network software proved to be a powerful tool to highlight problematic data and guide quality control of mtDNA data tables. PMID:24778097
Lam, Yu Shan; Okello, Edward J
2015-01-01
The objective of this study was to quantify a number of bioactive compounds and the antioxidant activity of the oyster mushroom, Pleurotus ostreatus, and characterize the effects of processing, such as blanching, on these outcomes. Dry matter content was 8%. Lovastatin was not detected in this study. A β-glucan content of 23.9% and a total polyphenol content of 487.12 mg gallic acid equivalent/100 g of dry matter were obtained in raw P. ostreatus. Antioxidant activities as evaluated by 1,1-diphenyl-2-picrylhydrazyl, Trolox equivalent antioxidant capacity, and ferric reducing antioxidant power (FRAP) assays in raw P. ostreatus were 14.46, 16.51, and 11.21 µmol/g, respectively. Blanching did not significantly affect β-glucan content but caused a significant decrease in dry matter content, polyphenol content, and antioxidant activities. Mushroom rolls produced from blanched mushrooms and the blanching water contained significantly higher amounts of β-glucan, total polyphenol content, and FRAP antioxidant activity compared to blanched mushrooms. In conclusion, P. ostreatus is a good source of β-glucan, dietary polyphenols, and antioxidants. Although the blanching process could affect these properties, re-addition of the blanching water during the production process of mushroom rolls could potentially recover these properties and is therefore recommended.
Uncertainty assessment in geodetic network adjustment by combining GUM and Monte-Carlo-simulations
NASA Astrophysics Data System (ADS)
Niemeier, Wolfgang; Tengen, Dieter
2017-06-01
In this article, first ideas are presented to extend the classical concept of geodetic network adjustment by introducing a new method for uncertainty assessment as a two-step analysis. In the first step, the raw data and possible influencing factors are analyzed using uncertainty modeling according to GUM (the Guide to the Expression of Uncertainty in Measurement). This approach is well established in metrology but rarely adopted within geodesy. The second step consists of Monte-Carlo simulations (MC simulations) for the complete processing chain from raw input data and pre-processing to adjustment computations and quality assessment. To perform these simulations, possible realizations of the raw data and the influencing factors are generated, using probability distributions for all variables and the established concept of pseudo-random number generators. The final result is a point cloud which represents the uncertainty of the estimated coordinates; a confidence region can be assigned to these point clouds as well. This concept may replace the common concept of variance propagation and the quality assessment of adjustment parameters by means of their covariance matrix. It allows a new way of uncertainty assessment in accordance with the GUM concept for uncertainty modelling and propagation. As a practical example, the local tie network at the Metsähovi Fundamental Station, Finland is used, where classical geodetic observations are combined with GNSS data.
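A minimal sketch of the second (Monte-Carlo) step is given below, assuming a toy one-dimensional network of two unknown heights tied by three observations; the GUM-style standard uncertainties and the adjustment are illustrative, not the Metsähovi processing chain.

```python
# Minimal sketch of the second step described above: propagate raw-observation
# uncertainty through a least-squares adjustment by Monte-Carlo simulation.
# The tiny 1-D "network" (two unknown heights tied by three observations) is a
# toy example, not the Metsähovi local-tie processing chain.
import numpy as np

A = np.array([[1.0, 0.0],        # obs 1: height of point 1
              [0.0, 1.0],        # obs 2: height of point 2
              [-1.0, 1.0]])      # obs 3: height difference point 2 minus point 1
l_true = np.array([10.000, 12.000, 2.000])
sigma = np.array([0.002, 0.002, 0.003])     # per-observation standard uncertainties (GUM step)

rng = np.random.default_rng(42)
estimates = []
for _ in range(20_000):                      # MC step: re-run the whole adjustment
    l_sim = l_true + rng.normal(0.0, sigma)  # one realization of the raw data
    # weighted least-squares adjustment (rows scaled by 1/sigma)
    x_hat, *_ = np.linalg.lstsq(A / sigma[:, None], l_sim / sigma, rcond=None)
    estimates.append(x_hat)

estimates = np.array(estimates)
print(estimates.mean(axis=0))                # adjusted heights
print(estimates.std(axis=0))                 # empirical coordinate uncertainties
```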
Power Efficient Hardware Architecture of SHA-1 Algorithm for Trusted Mobile Computing
NASA Astrophysics Data System (ADS)
Kim, Mooseop; Ryou, Jaecheol
The Trusted Mobile Platform (TMP) is developed and promoted by the Trusted Computing Group (TCG), an industry standards body working to enhance the security of the mobile computing environment. The built-in SHA-1 engine in TMP is one of the most important circuit blocks and contributes to the performance of the whole platform because it is used as a key primitive supporting platform integrity and command authentication. Mobile platforms have very stringent limitations with respect to available power, physical circuit area, and cost. Therefore, special architecture and design methods for a low power SHA-1 circuit are required. In this paper, we present a novel and efficient hardware architecture of a low power SHA-1 design for TMP. Our low power SHA-1 hardware can compute a 512-bit data block using less than 7,000 gates and has a power consumption of about 1.1 mA on a 0.25 μm CMOS process.
NASA Technical Reports Server (NTRS)
Mckee, James W.
1990-01-01
This volume (2 of 4) contains the specification, structured flow charts, and code listing for the protocol. The purpose of an autonomous power system on a spacecraft is to relieve humans from having to continuously monitor and control the generation, storage, and distribution of power in the craft. This implies that algorithms will have been developed to monitor and control the power system. The power system will contain computers on which the algorithms run. There should be one control computer system that makes the high-level decisions and sends commands to and receives data from the other distributed computers. This will require a communications network and an efficient protocol by which the computers will communicate. One of the major requirements on the protocol is that it be real time because of the need to control the power elements.
Power systems for production, construction, life support and operations in space
NASA Technical Reports Server (NTRS)
Sovie, Ronald J.
1988-01-01
As one looks to man's future in space it becomes obvious that unprecedented amounts of power are required for the exploration, colonization, and exploitation of space. Activities envisioned include interplanetary travel and LEO to GEO transport using electric propulsion, Earth and lunar observatories, advance space stations, free-flying manufacturing platforms, communications platforms, and eventually evolutionary lunar and Mars bases. These latter bases would start as camps with modest power requirements (kWes) and evolve to large bases as manufacturing, food production, and life support materials are developed from lunar raw materials. These latter activities require very robust power supplies (MWes). The advanced power system technologies being pursued by NASA to fulfill these future needs are described. Technologies discussed will include nuclear, photovoltaic, and solar dynamic space power systems, including energy storage, power conditioning, power transmission, and thermal management. The state-of-the-art and gains to be made by technology advancements will be discussed. Mission requirements for a variety of applications (LEO, GEO, lunar, and Martian) will be treated, and data for power systems ranging from a few kilowatts to megawatt power systems will be represented. In addition the space power technologies being initiated under NASA's new Civilian Space Technology Initiative (CSTI) and Space Leadership Planning Group Activities will be discussed.
Takabayashi, Takeshi; Mochizuki, Toshiaki; Otani, Norio; Nishiyama, Kei; Ishimatsu, Shinichi
2014-12-01
The prevalence of anisakiasis is rare in the United States and Europe compared with that in Japan, with few reports of its presentation in the emergency department (ED). This study describes the clinical, hematologic, and computed tomographic (CT) characteristics and the treatment of gastric and small intestinal anisakiasis patients in the ED. We retrospectively reviewed the data of 83 consecutive anisakiasis presentations in our ED between 2003 and 2012. Gastric anisakiasis was diagnosed endoscopically by identification of the Anisakis polypide. Small intestinal anisakiasis was diagnosed based on both hematologic (Anisakis antibody) and CT findings. Of the 83 cases, 39 had gastric anisakiasis and 44 had small intestinal anisakiasis based on our diagnostic criteria. Although all patients had abdominal pain, the gastric anisakiasis group developed symptoms significantly earlier (peaking within 6 hours) than the small intestinal anisakiasis group (peaking within 48 hours), and fewer patients with gastric anisakiasis needed admission for treatment (5% vs 57%, P<.01). All patients in the gastric group and 40 (91%) in the small intestinal anisakiasis group had a history of raw seafood ingestion. Computed tomographic findings revealed edematous wall thickening in all patients, and ascites and phlegmon of the mesenteric fat were more frequently observed in the small intestinal anisakiasis group. In the ED, early and accurate diagnosis of anisakiasis is important for treatment and for explaining the condition to the patient, and diagnosis can be facilitated by a history of raw seafood ingestion, evaluation of the time to symptom development, and the classic CT findings. Copyright © 2014 Elsevier Inc. All rights reserved.
Strategies for reducing large fMRI data sets for independent component analysis.
Wang, Ze; Wang, Jiongjiong; Calhoun, Vince; Rao, Hengyi; Detre, John A; Childress, Anna R
2006-06-01
In independent component analysis (ICA), principal component analysis (PCA) is generally used to reduce the raw data to a few principal components (PCs) through eigenvector decomposition (EVD) on the data covariance matrix. Although this works for spatial ICA (sICA) on moderately sized fMRI data, it is intractable for temporal ICA (tICA), since typical fMRI data have a high spatial dimension, resulting in an unmanageable data covariance matrix. To solve this problem, two practical data reduction methods are presented in this paper. The first solution is to calculate the PCs of tICA from the PCs of sICA. This approach works well for moderately sized fMRI data; however, it is highly computationally intensive, even intractable, when the number of scans increases. The second solution proposed is to perform PCA decomposition via a cascade recursive least squared (CRLS) network, which provides a uniform data reduction solution for both sICA and tICA. Without the need to calculate the covariance matrix, CRLS extracts PCs directly from the raw data, and the PC extraction can be terminated after computing an arbitrary number of PCs without the need to estimate the whole set of PCs. Moreover, when the whole data set becomes too large to be loaded into the machine memory, CRLS-PCA can save data retrieval time by reading the data once, while the conventional PCA requires numerous data retrieval steps for both covariance matrix calculation and PC extractions. Real fMRI data were used to evaluate the PC extraction precision, computational expense, and memory usage of the presented methods.
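The CRLS network itself is not a standard library routine; scikit-learn's IncrementalPCA illustrates the same practical goal of extracting a few leading PCs from data streamed in chunks, without forming the full covariance matrix or holding the whole data set in memory. The dimensions and random blocks below are placeholders.

```python
# Streaming PC extraction without forming the full covariance matrix, in the spirit
# of the data-reduction goal described above (but using scikit-learn's IncrementalPCA,
# not the CRLS network). Dimensions and random blocks are placeholders for fMRI data.
import numpy as np
from sklearn.decomposition import IncrementalPCA

n_voxels, n_scans, chunk = 50_000, 200, 5_000
ipca = IncrementalPCA(n_components=20)

rng = np.random.default_rng(0)
for start in range(0, n_voxels, chunk):           # stream voxel blocks (stand-in for disk reads)
    block = rng.normal(size=(chunk, n_scans))     # rows = voxels, columns = time points
    ipca.partial_fit(block)

# Project any block onto the 20 retained components for subsequent temporal ICA:
reduced = ipca.transform(rng.normal(size=(chunk, n_scans)))   # shape (5000, 20)
```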
Changing computing paradigms towards power efficiency
Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro
2014-01-01
Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. PMID:24842033
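A minimal sketch of the low-/high-precision combination for linear systems follows: factorize and solve in float32, then refine the answer with a few float64 residual-correction steps. This illustrates the general idea only, not the authors' tooling or power-profiling framework.

```python
# Sketch of combining low- and high-precision arithmetic for linear systems:
# factorize in float32 (cheap, lower energy), then correct the answer with a few
# float64 residual/refinement steps. Illustrative only; not the authors' tooling.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, iters=5):
    lu, piv = lu_factor(A.astype(np.float32))             # low-precision factorization
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):                                 # high-precision refinement
        r = b - A @ x                                      # residual in float64
        x += lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 500)) + 500 * np.eye(500)        # well-conditioned test matrix
b = rng.normal(size=500)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))       # residual near float64 round-off
```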
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. R. Belt
2006-10-01
HPPCALC 2.1 was developed to analyze the raw data from a PNGV Hybrid Pulse Power Characterization (HPPC) test and produce seven standard plots that consist of resistance, power, and available energy relationships. The purpose of the HPPC test is to extrapolate the total power capability within predetermined voltage limits of a prototype or full production cell, regardless of chemistry, with respect to the PNGV goals as outlined in the PNGV Testing Manual, Revision 3. The power capability gives the Electrochemical Energy Storage team the tools to compare different battery sizes and chemistries for possible use in a hybrid electric vehicle. The visual basic program HPPCALC 2.1 opens the comma-separated value file that is produced from a Maccor, Bitrode, or Energy Systems tester. It extracts the necessary information and performs the appropriate calculations. This information is arranged into seven graphs: Resistance versus Depth of Discharge, Power versus Depth of Discharge, Power versus Energy, Power versus Energy, Energy versus Power, Available Energy versus Power, Available Energy versus Power, and Power versus Depth of Discharge. These are the standard plots that are produced for each HPPC test. The primary metric for the HPPC test is the PNGV power, which is the power at which the available energy is equal to 300 Wh. The PNGV power is used to monitor the power degradation of the battery over the course of cycle or calendar life testing.
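The core HPPC quantities are simple ratios taken from each pulse: pulse resistance is the voltage change divided by the current change between the instant before the pulse and its end, and a power capability follows from that resistance and the voltage limit. The numbers and the simplified power formula below are illustrative assumptions, not the HPPCALC program.

```python
# Illustrative HPPC-style calculation: pulse resistance as delta-V / delta-I over
# one current pulse, and a simple voltage-limit power estimate from it. The values
# and the simplified formula are assumptions, not the HPPCALC program's code.
def pulse_resistance(v_before, v_end, i_before, i_end):
    """Ohmic plus polarization resistance over one current pulse (ohms)."""
    return abs((v_end - v_before) / (i_end - i_before))

# Hypothetical 10 s discharge pulse at one depth-of-discharge point:
r_discharge = pulse_resistance(v_before=3.800, v_end=3.650, i_before=0.0, i_end=-25.0)  # 6.0 mOhm

# Discharge pulse power capability against a hypothetical minimum voltage limit:
v_min = 3.0
p_discharge = v_min * (3.800 - v_min) / r_discharge   # simple voltage-limit style estimate, watts
```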
Towards Breaking the Histone Code – Bayesian Graphical Models for Histone Modifications
Mitra, Riten; Müller, Peter; Liang, Shoudan; Xu, Yanxun; Ji, Yuan
2013-01-01
Background: Histones are proteins around which DNA is wrapped to form small spherical structures called nucleosomes. Histone modifications (HMs) refer to the post-translational modifications to the histone tails. At a particular genomic locus, each of these HMs can either be present or absent, and the combinatory patterns of the presence or absence of multiple HMs, or the ‘histone codes,’ are believed to co-regulate important biological processes. We aim to use raw data on HM markers at different genomic loci to (1) decode the complex biological network of HMs in a single region and (2) demonstrate how the HM networks differ in different regulatory regions. We suggest that these differences in network attributes form a significant link between histones and genomic functions. Methods and Results: We develop a powerful graphical model under the Bayesian paradigm. Posterior inference is fully probabilistic, allowing us to compute the probabilities of distinct dependence patterns of the HMs using graphs. Furthermore, our model-based framework allows for easy but important extensions for inference on differential networks under various conditions, such as the different annotations of the genomic locations (e.g., promoters versus insulators). We applied these models to ChIP-Seq data based on CD4+ T lymphocytes. The results confirmed many existing findings and provided a unified tool to generate various promising hypotheses. Differential network analyses revealed new insights on the co-regulation of HMs of transcriptional activities in different genomic regions. Conclusions: The use of Bayesian graphical models and borrowing strength across different conditions provide high power to infer histone networks and their differences. PMID:23748248
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hewitt, Corey A.; Montgomery, David S.; Barbalace, Ryan L.
2014-05-14
By appropriately selecting the carbon nanotube type and n-type dopant for the conduction layers in a multilayered carbon nanotube composite, the total device thermoelectric power output can be increased significantly. The particular materials chosen in this study were raw single-walled carbon nanotubes for the p-type layers and polyethylenimine-doped single-walled carbon nanotubes for the n-type layers. The combination of these two conduction layers leads to a single thermocouple Seebeck coefficient of 96 ± 4 μV K⁻¹, which is 6.3 times higher than that previously reported. This improved Seebeck coefficient leads to a total power output of 14.7 nW per thermocouple at the maximum temperature difference of 50 K, which is 44 times the power output per thermocouple for the previously reported results. Ultimately, these thermoelectric power output improvements help to increase the potential use of these lightweight, flexible, and durable organic multilayered carbon nanotube based thermoelectric modules in low-powered electronics applications, where waste heat is available.
NASA Technical Reports Server (NTRS)
Sree, Dave
2015-01-01
Near-field acoustic power level analysis of the F31A31 open rotor model has been performed to determine its noise characteristics at simulated cruise flight conditions. The non-proprietary parts of the test data obtained from experiments in the 8x6 supersonic wind tunnel were provided by NASA Glenn Research Center. The tone and broadband components of total noise have been separated from the raw test data by using a new data analysis tool. Results in terms of sound pressure levels, acoustic power levels, and their variations with rotor speed, freestream Mach number, and input shaft power, with different blade-pitch setting angles at simulated cruise flight conditions, are presented and discussed. Empirical equations relating the model's acoustic power level and input shaft power have been developed. The near-field acoustic efficiency of the model at simulated cruise conditions is also determined. It is hoped that the results presented in this work will serve as a database for comparison and improvement of other open rotor blade designs and also for validating open rotor noise prediction codes.
The effect of microwave power on the production of biodiesel from nyamplung
NASA Astrophysics Data System (ADS)
Qadariyah, L.; Mujaddid, F.; Raka; Dhonny, S. B.; Mahfud, M.
2017-12-01
Today, energy needs in Indonesia still rely on fossil energy sources, whose availability worldwide is being steadily depleted. Therefore, alternative energy sources to petroleum must be developed; one of them is biodiesel. The use of microwaves as the energy source for biodiesel production can speed up the reaction time, so microwave heating is considered more efficient. Nyamplung seeds have an oil content of 71.4% (w/w), which gives them great potential as a raw material for biodiesel production. The aim of this research is to study the effect of microwave power on the production of biodiesel from nyamplung oil. Microwave power affects the density, viscosity, and yield of the product. With an alkali catalyst, the higher the power, the lower the density and viscosity of the resulting product, but the best yield is obtained at 300 W; above 300 W the trend reverses, so biodiesel production with the base catalyst is optimal at a power of 300 W.
A vibration powered wireless mote on the Forth Road Bridge
NASA Astrophysics Data System (ADS)
Jia, Yu; Yan, Jize; Feng, Tao; Du, Sijun; Fidler, Paul; Soga, Kenichi; Middleton, Campbell; Seshia, Ashwin A.
2015-12-01
The conventional resonant approaches to scavenging kinetic energy are typically confined to narrow, single-band frequencies. The vibration energy harvester device reported here combines both direct resonance and parametric resonance in order to enhance the power responsiveness towards more efficient harnessing of real-world ambient vibration. A packaged electromagnetic harvester designed to operate in both of these resonant regimes was tested in situ on the Forth Road Bridge. At the field site, the harvester, with an operational volume of ∼126 cm3, was capable of recovering in excess of 1 mW average raw AC power from the traffic-induced vibrations in the lateral bracing structures underneath the bridge deck. The harvester was integrated off-board with a power conditioning circuit and a wireless mote. Duty-cycled wireless transmissions from the vibration-powered mote were successfully sustained by the recovered ambient energy. This limited-duration field test provides the initial validation for realising vibration-powered wireless structural health monitoring systems in real-world infrastructure, where the vibration profile is both broadband and intermittent.
High power density yeast catalyzed microbial fuel cells
NASA Astrophysics Data System (ADS)
Ganguli, Rahul
Microbial fuel cells leverage whole-cell biocatalysis to convert the energy stored in energy-rich renewable biomolecules such as sugar directly to electrical energy at high efficiencies. Advantages of the process include ambient temperature operation, operation in natural streams such as wastewater without the need to clean electrodes, minimal balance-of-plant requirements compared to conventional fuel cells, and environmentally friendly operation. These make the technology very attractive as portable power sources and waste-to-energy converters. The principal problem facing the technology is the low power densities compared to other conventional portable power sources such as batteries and traditional fuel cells. In this work we examined the yeast-catalyzed microbial fuel cell and developed methods to increase the power density from such fuel cells. A combination of cyclic voltammetry and optical absorption measurements was used to establish significant adsorption of electron mediators by the microbes. Mediator adsorption was demonstrated to be an important limitation in achieving high power densities in yeast-catalyzed microbial fuel cells. Specifically, the power densities are low for as long as mediator adsorption continues to occur; once the mediator adsorption stops, the power densities increase. Rotating disk chronoamperometry was used to extract reaction rate information, and a simple kinetic expression was developed for the current observed in the anodic half-cell. Since the rate expression showed that the current was directly related to the microbe concentration close to the electrode, methods to increase the cell mass attached to the anode were investigated. Electrically biased electrodes were demonstrated to develop biofilm-like layers of Baker's yeast with a high concentration of cells directly connected to the electrode. The increased cell mass did increase the power density by a factor of two compared to a non-biofilm fuel cell, but the power density increase was shown to quickly saturate with the cell mass attached to the electrode. Based on recent modelling data suggesting that the electrode currents might be limited by the poor electrical conductivity of the anode, the power density versus electrical conductivity of a yeast-immobilized anode was investigated. Introduction of high-aspect-ratio carbon fiber filaments to the immobilization matrix increased the electrical conductivity of the anode. Although a higher electrical conductivity clearly led to an increase in power densities, it was shown that the principal limitation to further power density increases came from proton transfer limitations in the immobilized anode. Partially overcoming these gradients led to a power density of ca. 250 microW cm-2, which is the highest reported for yeast-powered MFCs. A yeast-catalyzed microbial fuel cell was also investigated as a power source for low-power sensors using raw tree sap. It was shown that yeast can efficiently utilize the sucrose present in the raw tree sap to produce electricity when excess salt is added to the medium. Therefore, the salinity of a potential energy source is an important consideration when MFCs are being considered for energy harvesting from natural sources.
Space-Shuttle Emulator Software
NASA Technical Reports Server (NTRS)
Arnold, Scott; Askew, Bill; Barry, Matthew R.; Leigh, Agnes; Mermelstein, Scott; Owens, James; Payne, Dan; Pemble, Jim; Sollinger, John; Thompson, Hiram;
2007-01-01
A package of software has been developed to execute a raw binary image of the space shuttle flight software for simulation of the computational effects of operation of space shuttle avionics. This software can be run on inexpensive computer workstations. Heretofore, it was necessary to use real flight computers to perform such tests and simulations. The package includes a program that emulates the space shuttle orbiter general-purpose computer [consisting of a central processing unit (CPU), input/output processor (IOP), master sequence controller, and bus-control elements]; an emulator of the orbiter display electronics unit and models of the associated cathode-ray tubes, keyboards, and switch controls; computational models of the data-bus network; computational models of the multiplexer-demultiplexer components; an emulation of the pulse-code modulation master unit; an emulation of the payload data interleaver; a model of the master timing unit; a model of the mass memory unit; and a software component that ensures compatibility of telemetry and command services between the simulated space shuttle avionics and a mission control center. The software package is portable to several host platforms.
Teach Graphic Design Basics with PowerPoint
ERIC Educational Resources Information Center
Lazaros, Edward J.; Spotts, Thomas H.
2007-01-01
While PowerPoint is generally regarded as simply software for creating slide presentations, it includes often overlooked--but powerful--drawing tools. Because it is part of the Microsoft Office package, PowerPoint comes preloaded on many computers and thus is already available in many classrooms. Since most computers are not preloaded with good…
NASA Technical Reports Server (NTRS)
Fegley, K. A.; Hayden, J. H.; Rehmann, D. W.
1974-01-01
The feasibility of formulating a methodology for the modeling and analysis of aerospace electrical power processing systems is investigated. It is shown that a digital computer may be used in an interactive mode for the design, modeling, analysis, and comparison of power processing systems.
NASA Technical Reports Server (NTRS)
1975-01-01
The persistence of the current period of inflation and its apparent resistance to traditional fiscal and monetary policies implies a circular behavior that becomes increasingly impervious to ameliorative action. This behavior is attributed to: a concurrent industrial boom among industrialized nations; price and production policies; worldwide reductions in agricultural products; international shortages of natural resources and raw materials; and the rise of multinational firms and merchant banking.
The A, C, G, and T of Genome Assembly.
Wajid, Bilal; Sohail, Muhammad U; Ekti, Ali R; Serpedin, Erchin
2016-01-01
Genome assembly in its two decades of history has produced significant research, in terms of both biotechnology and computational biology. This contribution delineates sequencing platforms and their characteristics, examines key steps involved in filtering and processing raw data, explains assembly frameworks, and discusses quality statistics for the assessment of the assembled sequence. Furthermore, the paper explores recent Ubuntu-based software environments oriented towards genome assembly as well as some avenues for future research.
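As a hedged illustration of one generic raw-data filtering step of the kind surveyed above (not code from the paper), the sketch below quality-trims FASTQ reads from the 3' end using a Phred score threshold; the file name, threshold, and minimum length are hypothetical.

```python
# Minimal sketch: 3'-end quality trimming and length filtering of FASTQ reads.
# Illustrative only; thresholds and the input file name are hypothetical.

def phred_scores(quality_line, offset=33):
    """Convert a FASTQ quality string (Phred+33 encoding) to integer scores."""
    return [ord(c) - offset for c in quality_line]

def trim_read(seq, qual, min_q=20):
    """Trim bases from the 3' end while their quality is below min_q."""
    scores = phred_scores(qual)
    end = len(seq)
    while end > 0 and scores[end - 1] < min_q:
        end -= 1
    return seq[:end], qual[:end]

def filter_fastq(path, min_q=20, min_len=30):
    """Yield (header, seq, qual) for reads that survive trimming and a length filter."""
    with open(path) as fh:
        while True:
            header = fh.readline().rstrip()
            if not header:
                break
            seq = fh.readline().rstrip()
            fh.readline()                      # the '+' separator line
            qual = fh.readline().rstrip()
            seq, qual = trim_read(seq, qual, min_q)
            if len(seq) >= min_len:
                yield header, seq, qual

# Example usage (hypothetical file name):
# for header, seq, qual in filter_fastq("reads.fastq"):
#     print(header, len(seq))
```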
ERIC Educational Resources Information Center
Goldfine, Alan H., Ed.
This workshop investigated how managers can evaluate, select, and effectively use information resource management (IRM) tools, especially data dictionary systems (DDS). An executive summary, which provides a definition of IRM as developed by workshop participants, precedes the keynote address, "Data: The Raw Material of a Paper Factory,"…
Multi-Criteria selection of technology for processing ore raw materials
NASA Astrophysics Data System (ADS)
Gorbatova, E. A.; Emelianenko, E. A.; Zaretckii, M. V.
2017-10-01
The development of Computer-Aided Process Planning (CAPP) for the Ore Beneficiation process is considered. The set of parameters that define the quality of the Ore Beneficiation process is identified. The ontological model of CAPP for the Ore Beneficiation process is described. A hybrid method for choosing the most appropriate variant of the Ore Beneficiation process, based on Logical Conclusion Rules and the Fuzzy Multi-Criteria Decision Making (MCDM) approach, is proposed.
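Purely as an illustration of the fuzzy MCDM idea referenced above (the paper's actual rules and criteria are not reproduced), the sketch below scores process variants given as triangular fuzzy numbers, aggregates them with a weighted sum, and ranks them by centroid defuzzification; all variant names, criteria, weights, and values are hypothetical.

```python
# Illustrative fuzzy weighted-sum ranking with triangular fuzzy numbers (TFNs).
# All variant names, criteria, weights, and values are hypothetical.

def tfn_scale(tfn, w):
    """Scale a triangular fuzzy number (a, b, c) by a crisp weight w >= 0."""
    a, b, c = tfn
    return (a * w, b * w, c * w)

def tfn_add(x, y):
    """Add two triangular fuzzy numbers component-wise."""
    return tuple(xi + yi for xi, yi in zip(x, y))

def centroid(tfn):
    """Defuzzify a triangular fuzzy number by its centroid."""
    return sum(tfn) / 3.0

# Hypothetical variants scored on (recovery, cost, throughput) as TFNs on a 0-10 scale.
variants = {
    "flotation_A": [(7, 8, 9), (4, 5, 6), (6, 7, 8)],
    "gravity_B":   [(5, 6, 7), (7, 8, 9), (5, 6, 7)],
    "combined_C":  [(8, 9, 10), (3, 4, 5), (7, 8, 9)],
}
weights = [0.5, 0.3, 0.2]   # hypothetical criterion weights, summing to 1

scores = {}
for name, tfns in variants.items():
    total = (0.0, 0.0, 0.0)
    for tfn, w in zip(tfns, weights):
        total = tfn_add(total, tfn_scale(tfn, w))
    scores[name] = centroid(total)

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```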
Proceedings of the 1997 Space Control Conference, Volume 2
1997-03-27
of attitude information are used in our orbit determination. The attitude history buffer on MSX holds quaternions and time tags at 100-second... being able to use the attitude quaternions, DYNAMO can also compute the park-mode attitude of MSX if, for example, some of the raw on-board attitude... used to aid the target discrimination process. THE MSX SPACECRAFT: The SBV sensor was launched on the BMDO-supported MSX spacecraft on 24 April
NASA Astrophysics Data System (ADS)
Bolan, Jeffrey; Hall, Elise; Clifford, Chris; Thurow, Brian
The Light-Field Imaging Toolkit (LFIT) is a collection of MATLAB functions designed to facilitate the rapid processing of raw light field images captured by a plenoptic camera. An included graphical user interface streamlines the necessary post-processing steps associated with plenoptic images. The generation of perspective shifted views and computationally refocused images is supported, in both single image and animated formats. LFIT performs necessary calibration, interpolation, and structuring steps to enable future applications of this technology.
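As a hedged sketch of the underlying computational-refocusing idea (a generic shift-and-add over sub-aperture views, not LFIT's actual MATLAB API), consider the following; array sizes and the refocus parameter are hypothetical.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(subaperture_views, alpha):
    """
    Shift-and-add refocusing over a 2-D grid of sub-aperture images.

    subaperture_views : array of shape (U, V, H, W), one image per (u, v) aperture sample
    alpha             : refocus parameter; each view (u, v) is shifted proportionally to
                        its offset from the aperture centre before averaging
    """
    U, V, H, W = subaperture_views.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=float)
    for u in range(U):
        for v in range(V):
            dy, dx = alpha * (u - u0), alpha * (v - v0)
            out += nd_shift(subaperture_views[u, v], (dy, dx), order=1, mode="nearest")
    return out / (U * V)

# Example with synthetic data (all sizes hypothetical):
views = np.random.rand(5, 5, 64, 64)
refocused = refocus(views, alpha=0.5)
print(refocused.shape)
```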
The future of interaction on the Internet
NASA Technical Reports Server (NTRS)
Maybury, Mark T.
1994-01-01
The federated collection of heterogeneous computers we collectively term the Internet has been the force behind revolutionary change, both in terms of actual practice and imagination of what might be. The raw numbers are staggering: IP registrations are now ten times what they were only five years ago, and between 1992 and 1993 the amount of information being transferred tripled. Changes in three key areas are discussed: knowledge discovery, intelligent information access, and collaboration on the Internet.
SEC sensor parametric test and evaluation system
NASA Technical Reports Server (NTRS)
1978-01-01
This system provides the automated hardware required to carry out, in conjunction with the existing 70 mm SEC television camera, the sensor evaluation tests which are described in detail. The Parametric Test Set (PTS) was completed and is used in a semiautomatic data acquisition and control mode to test the development of the 70 mm SEC sensor, WX 32193. Analysis of the raw data is performed on the Princeton IBM 360-91 computer.
Statistical Approach To Extraction Of Texture In SAR
NASA Technical Reports Server (NTRS)
Rignot, Eric J.; Kwok, Ronald
1992-01-01
An improved statistical method for the extraction of textural features in synthetic-aperture-radar (SAR) images takes account of the effects of the scheme used to sample the raw SAR data, system noise, the resolution of the radar equipment, and speckle. The treatment of speckle is incorporated into an overall statistical treatment of speckle, system noise, and natural variations in texture. The speckle autocorrelation function is computed from the system transfer function, which expresses the effect of the radar aperture and incorporates the range and azimuth resolutions.
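A hedged sketch of one step described above: under a common fully-developed-speckle model, an autocorrelation function can be obtained from the squared magnitude of the system transfer function via the Wiener-Khinchin relation. The Gaussian transfer function below is a hypothetical stand-in, not the actual SAR system response.

```python
import numpy as np

# Hypothetical separable system transfer function H(f_range, f_azimuth):
# a Gaussian roll-off standing in for the real range/azimuth responses.
n = 256
f = np.fft.fftfreq(n)
fr, fa = np.meshgrid(f, f, indexing="ij")
H = np.exp(-0.5 * ((fr / 0.15) ** 2 + (fa / 0.10) ** 2))

# Wiener-Khinchin: the autocorrelation is the inverse FFT of the power spectrum |H|^2.
power_spectrum = np.abs(H) ** 2
acf = np.fft.ifft2(power_spectrum).real
acf /= acf[0, 0]          # normalise so that the zero-lag value is 1

print("autocorrelation at zero lag:", acf[0, 0])
print("autocorrelation at one-pixel lag (range):", acf[1, 0])
```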
Heterotic computing: exploiting hybrid computational devices.
Kendon, Viv; Sebald, Angelika; Stepney, Susan
2015-07-28
Current computational theory deals almost exclusively with single models: classical, neural, analogue, quantum, etc. In practice, researchers use ad hoc combinations, realizing only recently that they can be fundamentally more powerful than the individual parts. A Theo Murphy meeting brought together theorists and practitioners of various types of computing, to engage in combining the individual strengths to produce powerful new heterotic devices. 'Heterotic computing' is defined as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This post-meeting collection of articles provides a wide-ranging survey of the state of the art in diverse computational paradigms, together with reflections on their future combination into powerful and practical applications. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.
1980-01-01
Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.
Expression Templates for Truncated Power Series
NASA Astrophysics Data System (ADS)
Cary, John R.; Shasharina, Svetlana G.
1997-05-01
Truncated power series are used extensively in accelerator transport modeling for rapid tracking and analysis of nonlinearity. Such mathematical objects are naturally represented computationally as objects in C++. This is more intuitive and produces more transparent code through operator overloading. However, C++ object use often comes with a computational speed loss due, e.g., to the creation of temporaries. We have developed a subset of truncated power series expression templates (http://monet.uwaterloo.ca/blitz/). Such expression templates use the powerful template processing facility of C++ to combine complicated expressions into series operations that execute more rapidly. We compare computational speeds with existing truncated power series libraries.
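As a language-agnostic illustration of the underlying object (the expression-template optimization itself is specific to C++ and is not reproduced here), a truncated power series can be represented by its coefficient list, with multiplication discarding all terms above the truncation order:

```python
class TPS:
    """Truncated power series: coefficients c[0..order] of sum_k c[k] * x**k."""

    def __init__(self, coeffs, order):
        self.order = order
        padded = list(coeffs)[: order + 1]
        self.c = padded + [0.0] * (order + 1 - len(padded))

    def __add__(self, other):
        return TPS([a + b for a, b in zip(self.c, other.c)], self.order)

    def __mul__(self, other):
        # Cauchy product, truncated at self.order.
        out = [0.0] * (self.order + 1)
        for i, a in enumerate(self.c):
            for j in range(self.order + 1 - i):
                out[i + j] += a * other.c[j]
        return TPS(out, self.order)

    def __repr__(self):
        return " + ".join(f"{a:g} x^{k}" for k, a in enumerate(self.c))

# Example to order 3: (1 + x) * (1 + x + x^2/2) with higher-order terms discarded.
a = TPS([1.0, 1.0], order=3)
b = TPS([1.0, 1.0, 0.5], order=3)
print(a * b)   # 1 x^0 + 2 x^1 + 1.5 x^2 + 0.5 x^3
```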
Systems and methods for rapid processing and storage of data
Stalzer, Mark A.
2017-01-24
Systems and methods of building massively parallel computing systems using low power computing complexes in accordance with embodiments of the invention are disclosed. A massively parallel computing system in accordance with one embodiment of the invention includes at least one Solid State Blade configured to communicate via a high performance network fabric. In addition, each Solid State Blade includes a processor configured to communicate with a plurality of low power computing complexes interconnected by a router, and each low power computing complex includes at least one general processing core, an accelerator, an I/O interface, and cache memory and is configured to communicate with non-volatile solid state memory.
A 64-channel ultra-low power system-on-chip for local field and action potentials recording
NASA Astrophysics Data System (ADS)
Rodríguez-Pérez, Alberto; Delgado-Restituto, Manuel; Darie, Angela; Soto-Sánchez, Cristina; Fernández-Jover, Eduardo; Rodríguez-Vázquez, Ángel
2015-06-01
This paper reports an integrated 64-channel neural recording sensor. Neural signals are acquired, filtered, digitized and compressed in the channels. Additionally, each channel implements an auto-calibration mechanism which configures the transfer characteristics of the recording site. The system has two transmission modes; in one case the information captured by the channels is sent as uncompressed raw data; in the other, feature vectors extracted from the detected neural spikes are released. Data streams coming from the channels are serialized by an embedded digital processor. Experimental results, including in vivo measurements, show that the power consumption of the complete system is lower than 330μW.
3D-RTK Capability of Single Gnss Receivers
NASA Astrophysics Data System (ADS)
Stempfhuber, W.
2013-08-01
Small aerial objects are now being utilised in many areas of civil object capture and monitoring. As a rule, the standard application of a simple GPS receiver with code solutions serves the 3D positioning of the trajectories or recording positions. Without GPS correction information, these can be calculated to an accuracy of 10-20 metres; corrected code solutions (DGPS) generally lie in the metre range. Precise 3D positioning of the UAV (unmanned aerial vehicle) trajectories in the centimetre range provides significant improvements. In addition, the recording time of each sensor can be synchronized with the exact time stamp of the low-cost GNSS system. In recent years, an increasing number of works on positioning from L1 GPS raw data have been published. In these, the carrier phase measurements are analysed with established evaluation algorithms in post-processing to obtain centimetre-exact positions or high-precision 3D trajectories [e.g. Schwieger and Gläser, 2005 or Korth and Hofmann, 2011]. Reference information from local reference stations or a reference network serves the purpose of carrier phase ambiguity resolution. Furthermore, there are many activities worldwide in the area of PPP (Precise Point Positioning) techniques; however, dual-frequency receivers are primarily used in this instance, and very long initialisation times must be scheduled.

A research project on the subject of low-cost RTK GNSS for real-time applications was carried out at the Beuth Hochschule für Technik Berlin University of Applied Sciences [Stempfhuber 2012]. The overall system developed for real-time applications with centimetre accuracy is modularly constructed and can be used for various applications (http://prof.beuthhochschule.de/stempfhuber/seite-publikation/). With hardware costing a few hundred Euros and a total weight of 500-800 g (including the battery), this system is ideally suited for UAV applications. In addition, the GNSS data processed with the RTK method can be provided in standardised NMEA format. Because of the reduced shadowing effects experienced by aerial objects, GNSS error sources such as multipath cause few problems. With L1 carrier phase analysis, the baseline computation must nevertheless remain limited to a range of a few kilometres; with distances of more than 5 kilometres between the reference station and the rover station, errors at the decimetre level arise. The overall modular system consists of a low-cost single-frequency receiver (e.g. uBlox LEA4T or 6T), an L1 antenna (e.g. the Trimble Bullet III), a purpose-built data logger including an integrated WLAN communication module for storing and securing the raw data, and a power supply. Optimisation of the L1 antenna has shown that many problems relating to signal reception can be reduced in this way. Calibration of the choke-ring adaptors at various antenna calibration facilities results in good and homogeneous antenna parameters. The real-time algorithm from the open source project RTKLib [Takasu, 2010] generally runs on a small computer at the reference station; the data transfer from the L1 receiver to the PC is realisable through a serial cable. The rover station can transfer the raw data to the computing algorithm over a WLAN network or through a data radio. This computational algorithm can, of course, also be adapted to an integrated computing module for L1 carrier phase resolution.

The average time to first fix (TTFF) amounts to a few minutes, depending on the satellite constellation. Different test series in motion simulators and on moving objects have shown that a stable, fixed solution is achieved with a normal satellite constellation. A test series with a Microdrones quadrocopter was also conducted; in comparison with a geodetic dual-frequency receiver, the RTK position differences are in the millimetre range. In addition, reference systems (based on total stations) are available for the precise examination of the kinematically captured positioning [Eisenbeiss et al. 2009].
Digital SAR processing using a fast polynomial transform
NASA Technical Reports Server (NTRS)
Truong, T. K.; Lipes, R. G.; Butman, S. A.; Reed, I. S.; Rubin, A. L.
1984-01-01
A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network. Previously announced in STAR as N82-11295
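As a hedged sketch of the two-dimensional cyclic correlation itself, the example below uses ordinary FFTs rather than the fast polynomial transform described in the abstract; the echo data and reference response are synthetic placeholders.

```python
import numpy as np

def cyclic_correlate_2d(data, reference):
    """Two-dimensional cyclic (circular) cross-correlation computed via the FFT."""
    return np.fft.ifft2(np.fft.fft2(data) * np.conj(np.fft.fft2(reference))).real

# Synthetic stand-ins for raw echo data and a point-target impulse response.
rng = np.random.default_rng(0)
echo = rng.standard_normal((128, 128))
point_response = np.zeros((128, 128))
point_response[60:68, 60:68] = 1.0          # hypothetical reference chip

image = cyclic_correlate_2d(echo, point_response)
print(image.shape)
```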
A dc model for power switching transistors suitable for computer-aided design and analysis
NASA Technical Reports Server (NTRS)
Wilson, P. M.; George, R. T., Jr.; Owen, H. A., Jr.; Wilson, T. G.
1979-01-01
The proposed dc model for bipolar junction power switching transistors is based on measurements which may be made with standard laboratory equipment. Those nonlinearities which are of importance to power electronics design are emphasized. Measurement procedures are discussed in detail. A model formulation adapted for use with a computer program is presented, and a comparison between actual and computer-generated results is made.
Changing computing paradigms towards power efficiency.
Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro
2014-06-28
Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
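A minimal sketch of the low/high-precision idea for linear systems is mixed-precision iterative refinement, shown below for a well-conditioned matrix; this is a standard textbook scheme used to illustrate the concept, not the authors' implementation (which would typically reuse a single low-precision factorization rather than re-solving each sweep).

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """
    Solve A x = b by solving in float32 and refining the solution
    with residuals computed in float64 (iterative refinement).
    """
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                          # high-precision residual
        d = np.linalg.solve(A32, r.astype(np.float32))         # cheap low-precision correction
        x += d.astype(np.float64)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200)) + 200 * np.eye(200)        # well-conditioned test matrix
x_true = rng.standard_normal(200)
b = A @ x_true
x = mixed_precision_solve(A, b)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```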
Development and Evaluation of the Diagnostic Power for a Computer-Based Two-Tier Assessment
ERIC Educational Resources Information Center
Lin, Jing-Wen
2016-01-01
This study adopted a quasi-experimental design with follow-up interview to develop a computer-based two-tier assessment (CBA) regarding the science topic of electric circuits and to evaluate the diagnostic power of the assessment. Three assessment formats (i.e., paper-and-pencil, static computer-based, and dynamic computer-based tests) using…
Exploring the calibration of a wind forecast ensemble for energy applications
NASA Astrophysics Data System (ADS)
Heppelmann, Tobias; Ben Bouallegue, Zied; Theis, Susanne
2015-04-01
In the German research project EWeLiNE, Deutscher Wetterdienst (DWD) and the Fraunhofer Institute for Wind Energy and Energy System Technology (IWES) are collaborating with three German Transmission System Operators (TSO) in order to provide the TSOs with improved probabilistic power forecasts. Probabilistic power forecasts are derived from probabilistic weather forecasts, themselves derived from ensemble prediction systems (EPS). Since the considered raw ensemble wind forecasts suffer from underdispersiveness and bias, calibration methods are developed for the correction of the model bias and the ensemble spread bias. The overall aim is to improve the ensemble forecasts such that the uncertainty of the possible weather development is depicted by the ensemble spread from the first forecast hours onward. Additionally, the ensemble members after calibration should remain physically consistent scenarios. We focus on probabilistic hourly wind forecasts with a horizon of 21 h delivered by the convection-permitting high-resolution ensemble system COSMO-DE-EPS, which became operational in 2012 at DWD. The ensemble consists of 20 ensemble members driven by four different global models. The model area includes the whole of Germany and parts of Central Europe with a horizontal resolution of 2.8 km and a vertical resolution of 50 model levels. For verification we use wind mast measurements at around 100 m height, which corresponds to the hub height of the wind turbines belonging to wind farms within the model area. Calibration of the ensemble forecasts can be performed by different statistical methods applied to the raw ensemble output. Here, we explore local bivariate Ensemble Model Output Statistics at individual sites and quantile regression with different predictors. Applying different methods, we already show an improvement of ensemble wind forecasts from COSMO-DE-EPS for energy applications. In addition, an ensemble copula coupling approach transfers the time-dependencies of the raw ensemble to the calibrated ensemble. The calibrated wind forecasts are evaluated first with univariate probabilistic scores and additionally with diagnostics of wind ramps in order to assess the time-consistency of the calibrated ensemble members.
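As a hedged sketch of the ensemble copula coupling step mentioned above (reordering calibrated quantiles so that they inherit the rank structure, and hence the time dependence, of the raw ensemble), consider the toy example below; the numbers are synthetic and the calibration itself is replaced by a simple placeholder.

```python
import numpy as np

def ensemble_copula_coupling(raw_members, calibrated_quantiles):
    """
    Reorder calibrated quantiles member-wise so that, at every lead time,
    member ranks match those of the raw ensemble.

    raw_members          : array (n_members, n_times) of raw ensemble forecasts
    calibrated_quantiles : array (n_members, n_times) of calibrated quantiles,
                           sorted in ascending order along axis 0
    """
    ranks = raw_members.argsort(axis=0).argsort(axis=0)     # rank of each raw member
    return np.take_along_axis(calibrated_quantiles, ranks, axis=0)

rng = np.random.default_rng(2)
raw = 8.0 + rng.standard_normal((20, 21))                   # 20 members, 21 lead hours
# Placeholder "calibration": widen the spread, shift the mean, then sort the quantiles.
calibrated = np.sort(1.5 * (raw - raw.mean(axis=0)) + raw.mean(axis=0) + 0.3, axis=0)
scenarios = ensemble_copula_coupling(raw, calibrated)
print(scenarios.shape)
```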
Huang, Miao; Xiong, Chiyi; Lu, Wei; Zhang, Rui; Zhou, Min; Huang, Qian; Weinberg, Jeffrey; Li, Chun
2014-02-01
In glioblastoma, EphB4 receptors, a member of the largest family of receptor tyrosine kinases, are overexpressed in both tumor cells and angiogenic blood vessels. The purpose of this study was to examine whether the EphB4-binding peptide TNYL-RAW labeled with both (64)Cu and near-infrared fluorescence dye Cy5.5 could be used as a molecular imaging agent for dual-modality positron emission tomography/computed tomography [PET/CT] and optical imaging of human glioblastoma in orthotopic brain tumor models. TNYL-RAW was conjugated to Cy5.5 and the radiometal chelator 1,4,7,10-tetraazadodecane-N,N',N″,N‴-tetraacetic acid. The conjugate was then labeled with (64)Cu for in vitro binding and in vivo dual μPET/CT and optical imaging studies in nude mice implanted with EphB4-expressing U251 and EphB4-negative U87 human glioblastoma cells. Tumors and brains were removed at the end of the imaging sessions for immunohistochemical staining and fluorescence microscopic examinations. μPET/CT and near-infrared optical imaging clearly showed specific uptake of the dual-labeled TNYL-RAW peptide in both U251 and U87 tumors in the brains of the nude mice after intravenous injection of the peptide. In U251 tumors, the Cy5.5-labeled peptide colocalized with both tumor blood vessels and tumor cells; in U87 tumors, the tracer colocalized only with tumor blood vessels, not with tumor cells. Dual-labeled EphB4-specific peptide could be used as a noninvasive molecular imaging agent for PET/CT and optical imaging of glioblastoma owing to its ability to bind to both EphB4-expressing angiogenic blood vessels and EphB4-expressing tumor cells.
Wu, Peng; Liao, Zhenkai; Luo, Tingyu; Chen, Liding; Chen, Xiao Dong
2017-06-01
Previously, a dynamic in vitro rat stomach system (DIVRS-I), designed based on the principles of morphological bionics, was reported. The digestibilities of casein powder and raw rice particles were found to be lower than those in vivo, perhaps due to less efficient compression performance and lower mixing efficiency. In this study, a second version of the rat stomach system (DIVRS-II), with an additional rolling-extrusion type of motility on the wall of the soft-elastic silicone rat stomach model, is introduced. The DIVRS-II was then tested by comparing the digestive behaviors of casein powder suspensions and raw rice particles with previously published data obtained from in vivo tests on living rats, from the DIVRS-I, and from a stirred tank reactor (STR) at its optimum stirring speed. The results indicate that although the digestibilities of the casein powder and raw rice particles in the DIVRS-II are still lower than the average results obtained in vivo, they are significantly improved, by about 50% and 32% respectively at the end of digestion, compared with those in the DIVRS-I. The work has demonstrated that the powerful rolling extrusion is highly effective and has contributed to the significant improvement in digestibility shown here. In addition, the digestibility observed in the DIVRS-II was already higher than that measured in the STR at its optimum speed, indicating the high potential of the soft-elastic stomach under the influence of the "rolling and squeezing" for more realistic investigation of food digestion. © 2017 Institute of Food Technologists®.
Compressive sensing based wireless sensor for structural health monitoring
NASA Astrophysics Data System (ADS)
Bao, Yuequan; Zou, Zilong; Li, Hui
2014-03-01
Data loss is a common problem for monitoring systems based on wireless sensors. Reliable communication protocols, which enhance communication reliability by repetitively transmitting unreceived packets, are one approach to tackling the problem of data loss. An alternative approach allows data loss to some extent and seeks to recover the lost data from an algorithmic point of view. Compressive sensing (CS) provides such a data loss recovery technique. This technique can be embedded into smart wireless sensors and effectively increases wireless communication reliability without retransmitting the data. The basic idea of the CS-based approach is that, instead of transmitting the raw signal acquired by the sensor, a transformed signal that is generated by projecting the raw signal onto a random matrix is transmitted. Some data loss may occur during the transmission of this transformed signal. However, according to the theory of CS, the raw signal can be effectively reconstructed from the received incomplete transformed signal, given that the raw signal is compressible in some basis and the data loss ratio is low. This CS-based technique is implemented on the Imote2 smart sensor platform using the foundation of the Illinois Structural Health Monitoring Project (ISHMP) Service Tool-suite. To overcome the constraints of the limited onboard resources of wireless sensor nodes, a method called the random demodulator (RD) is employed to provide memory- and power-efficient construction of the random sampling matrix. The RD sampling matrix is adapted to accommodate data loss in wireless transmission and to meet the objectives of data recovery. The embedded program is tested in a series of sensing and communication experiments. Examples and a parametric study are presented to demonstrate the applicability of the embedded program as well as to show the efficacy of CS-based data loss recovery for real wireless SHM systems.
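The core recovery idea can be illustrated with a minimal sketch: a random Gaussian projection followed by orthogonal matching pursuit. This is a generic CS example with a directly sparse signal (the paper's signals are merely compressible in some basis), not the Imote2/RD implementation described above.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: recover a sparsity-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))   # most correlated column
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(3)
n, m, k = 256, 80, 5                                # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # sparse "raw" signal
Phi = rng.standard_normal((m, n)) / np.sqrt(m)                # random sensing matrix
y = Phi @ x                                                   # measurements to transmit
x_rec = omp(Phi, y, k)
print("recovery error:", np.linalg.norm(x_rec - x))
```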
Low bandwidth eye tracker for scanning laser ophthalmoscopy
NASA Astrophysics Data System (ADS)
Harvey, Zachary G.; Dubra, Alfredo; Cahill, Nathan D.; Lopez Alarcon, Sonia
2012-02-01
The incorporation of adaptive optics into scanning ophthalmoscopes (AOSOs) has allowed for in vivo, noninvasive imaging of the human rod and cone photoreceptor mosaics. Light safety restrictions and power limitations of the current low-coherence light sources available for imaging result in each individual raw image having a low signal-to-noise ratio (SNR). To date, the only approach used to increase the SNR has been to collect a large number of raw images (N > 50), to register them to remove the distortions due to involuntary eye motion, and then to average them. The large amplitude of involuntary eye motion with respect to the AOSO field of view (FOV) dictates that an even larger number of images be collected at each retinal location to ensure adequate SNR over the feature of interest. Compensating for eye motion during image acquisition to keep the feature of interest within the FOV could reduce the number of raw frames required per retinal feature, and therefore significantly reduce the imaging time, storage requirements, post-processing times and, more importantly, the subject's exposure to light. In this paper, we present a particular implementation of an AOSO, termed the adaptive optics scanning light ophthalmoscope (AOSLO), equipped with a simple eye tracking system capable of compensating for eye drift by estimating the eye motion from the raw frames and by using a tip-tilt mirror to compensate for it in a closed loop. Multiple control strategies were evaluated to minimize the image distortion introduced by the tracker itself. Also, linear, quadratic and Kalman filter motion prediction algorithms were implemented and tested using both simulated motion (sinusoidal motion with varying frequencies) and human subjects. The residual displacement of the retinal features was used to compare the performance of the different correction strategies and prediction methods.
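A hedged, one-dimensional sketch of the kind of motion prediction mentioned above is a constant-velocity Kalman filter predicting eye drift one frame ahead; all noise parameters and amplitudes below are hypothetical, and this is not the AOSLO implementation.

```python
import numpy as np

def kalman_predict_track(measurements, dt=1.0, q=1e-3, r=0.05):
    """Constant-velocity Kalman filter; returns one-step-ahead position predictions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                     # only position is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x, P = np.zeros((2, 1)), np.eye(2)
    predictions = []
    for z in measurements:
        # Predict one frame ahead (what a tip-tilt mirror would act on).
        x, P = F @ x, F @ P @ F.T + Q
        predictions.append(float(x[0, 0]))
        # Update with the displacement estimated from the latest raw frame.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return predictions

# Simulated slow sinusoidal drift plus noise (hypothetical amplitudes, in pixels).
t = np.arange(200)
drift = 5.0 * np.sin(2 * np.pi * t / 120) + 0.2 * np.random.default_rng(4).standard_normal(200)
pred = kalman_predict_track(drift)
print(len(pred))
```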
Rheological and fractal characteristics of unconditioned and conditioned water treatment residuals.
Dong, Y J; Wang, Y L; Feng, J
2011-07-01
The rheological and fractal characteristics of raw (unconditioned) and conditioned water treatment residuals (WTRs) were investigated in this study. Variations in the morphology, size, and image fractal dimensions of the flocs/aggregates in these WTR systems with increasing polymer doses were analyzed. The results showed that when the raw WTRs were conditioned with the polymer CZ8688, the optimum polymer dosage was observed at 24 kg/ton dry sludge. The average diameter of the irregularly shaped flocs/aggregates in the WTR suspensions increased from 42.54 μm to several hundred micrometers with increasing polymer doses. Furthermore, the aggregates in the conditioned WTR system displayed boundary/surface and mass fractals. At the optimum polymer dosage, the aggregates formed had a volumetric average diameter of about 820.7 μm, with a one-dimensional fractal dimension of 1.01 and a mass fractal dimension of 2.74 on the basis of the image analysis. Rheological tests indicated that the conditioned WTRs at the optimum polymer dosage showed higher levels of shear-thinning behavior than the raw WTRs. Variations in the limiting viscosity (η(∞)) of the conditioned WTRs with sludge content could be described by a linear equation, which differs from the often-observed empirical exponential relationship for most municipal sludge. With increasing temperature, the η(∞) of the raw WTRs decreased more rapidly than that of the conditioned WTRs. Good fits of the lgη(∞)∼T relationships to the Arrhenius equation indicate that the WTRs had a much higher activation energy for viscosity, about 17.86-26.91 J/mol, compared with that of anaerobic granular sludge (2.51 J/mol) (Mu and Yu, 2006). In addition, the Bingham plastic model adequately described the rheological behavior of the conditioned WTRs, whereas the rheology of the raw WTRs fit the Herschel-Bulkley model well at only certain sludge contents. Considering the good power-law relationships between the limiting viscosity and sludge content of the conditioned WTRs, their mass fractal dimension was calculated through the model proposed by Shih et al. (1990), giving 2.48 for these conditioned WTR aggregates. The results demonstrate that conditioned WTRs behave like weak-link flocs/aggregates. Copyright © 2011 Elsevier Ltd. All rights reserved.
DET/MPS - The GSFC Energy Balance Programs
NASA Technical Reports Server (NTRS)
Jagielski, J. M.
1994-01-01
Direct Energy Transfer (DET) and MultiMission Spacecraft Modular Power System (MPS) are computer programs that perform mathematical modeling and simulation to aid in the design and analysis of DET and MPS spacecraft power system performance, in order to determine the energy balance of the subsystem. A DET spacecraft power system feeds the output of the solar photovoltaic array and the nickel-cadmium batteries directly to the spacecraft bus. In the MPS system, a Standard Power Regulator Unit (SPRU) is utilized to operate the array at its peak power point. DET and MPS perform a minute-by-minute simulation of the performance of the power system. The results of the simulation focus mainly on the output of the solar array and the characteristics of the batteries. Although both packages are limited in terms of orbital mechanics, they have sufficient capability to calculate data on eclipses and the performance of the arrays for circular or near-circular orbits. DET and MPS are written in FORTRAN-77 with some VAX FORTRAN-type extensions. Both are available in three versions: GSC-13374, for DEC VAX-series computers running VMS; GSC-13443, for UNIX-based computers; and GSC-13444, for Apple Macintosh computers.
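A hedged toy sketch of the minute-by-minute energy-balance bookkeeping described above (not the DET/MPS code itself; all array, load, battery, and orbit numbers are hypothetical):

```python
# Toy minute-by-minute energy balance for a circular orbit with a fixed eclipse duration.
ORBIT_MIN = 95            # hypothetical orbital period, minutes
ECLIPSE_MIN = 35          # hypothetical eclipse duration per orbit, minutes
ARRAY_W = 400.0           # array output in sunlight, watts (hypothetical)
LOAD_W = 220.0            # spacecraft bus load, watts (hypothetical)
BATT_WH = 500.0           # battery capacity, watt-hours (hypothetical)
EFF = 0.85                # round-trip charge efficiency (hypothetical)

soc_wh = BATT_WH          # start fully charged
for minute in range(ORBIT_MIN * 3):                          # simulate three orbits
    in_eclipse = (minute % ORBIT_MIN) < ECLIPSE_MIN
    array_w = 0.0 if in_eclipse else ARRAY_W
    net_w = array_w - LOAD_W
    if net_w >= 0:
        soc_wh = min(BATT_WH, soc_wh + EFF * net_w / 60.0)    # charge (1-minute step)
    else:
        soc_wh = max(0.0, soc_wh + net_w / 60.0)              # discharge during eclipse
print(f"battery state of charge after three orbits: {soc_wh:.1f} Wh")
```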
NASA Astrophysics Data System (ADS)
Stockton, Gregory R.
2011-05-01
Over the last 10 years, very large government, military, and commercial computer and data center operators have spent millions of dollars trying to optimally cool data centers as each rack has begun to consume as much as 10 times more power than just a few years ago. In fact, the maximum amount of data computation in a computer center is becoming limited by the amount of available power, space and cooling capacity at some data centers. Tens of millions of dollars and megawatts of power are being annually spent to keep data centers cool. The cooling and air flows dynamically change away from any predicted 3-D computational fluid dynamic modeling during construction and as time goes by, and the efficiency and effectiveness of the actual cooling rapidly departs even farther from predicted models. By using 3-D infrared (IR) thermal mapping and other techniques to calibrate and refine the computational fluid dynamic modeling and make appropriate corrections and repairs, the required power for data centers can be dramatically reduced which reduces costs and also improves reliability.
Computer memory power control for the Galileo spacecraft
NASA Technical Reports Server (NTRS)
Detwiler, R. C.
1983-01-01
The developmental history, major design drivers, and final topology of the computer memory power system on the Galileo spacecraft are described. A unique method of generating memory backup power directly from the fault current drawn during a spacecraft power overload or fault condition allows this system to provide continuous memory power. This concept provides a unique solution to the problem of volatile memory loss without the use of a battery or other large energy storage elements usually associated with uninterruptible power supply designs.
Code of Federal Regulations, 2010 CFR
2010-07-01
... than kilns; in-line kiln/raw mills; clinker coolers; new and reconstructed raw material dryers; and raw and finish mills. The owner or operator of each new or existing raw material, clinker, or finished product...
Grid Computing in K-12 Schools. Soapbox Digest. Volume 3, Number 2, Fall 2004
ERIC Educational Resources Information Center
AEL, 2004
2004-01-01
Grid computing allows large groups of computers (either in a lab, or remote and connected only by the Internet) to extend extra processing power to each individual computer to work on components of a complex request. Grid middleware, recognizing priorities set by systems administrators, allows the grid to identify and use this power without…